
Chapter 3

The Minimalist Program and SLA.

3.0 Introduction

As the name implies, the Minimalist Program is not a theory but an exploratory program which is subject to conflicting and constantly developing directions (Chomsky 2000: 87). It has, in fact, undergone radical changes from its earliest version (Chomsky 1995),[1] drawing on ideas and concepts that had already been tackled both within the Principles and Parameters framework (P&P) and during seminars and lectures spanning from the second half of the 1980s to Chomsky's most recent works (1999, 2000, 2001). It provides, then, an uninterrupted line of research, started in the 1950s, which has been trying to answer questions like:

1. What is the nature of the (linguistic)[2] object that we are studying?

2. How well designed is such an object of investigation? (Chomsky 2002: 66)

In short, does language offer an optimal solution to interact with those systems that are internal to the mind but external to language, with whose interfaces (visual input, auditory input, the thought system and so on) it relates? This is a new line of research which also has a deep bearing on SLA. As an example, let's consider how Chomsky (2002) explains the aims of the Minimalist Program. If we have a bad car, he says, regardless of its imperfections and shortcomings, we can nonetheless keep on trying to find the best theory possible to describe such a car, despite its malfunctioning parts. A completely different thing, instead, is to investigate the extent to which such a car provides the best possible design to cope with the systems within which it has to function: that is, is it well designed to be driven on asphalt? Does it need fuel that can be easily provided in order to work properly?

And so on.

If we substitute the word 'language' for the word 'car', the same questions arise: do we have to find the best theory which may account for all the imperfections (e.g. non-target-like forms) that characterise it or, conversely, do we have to understand whether it provides the best possible solution to interact with the thought, visual and auditory systems the learner is endowed with? These are precisely the tasks the Minimalist Program has set itself to investigate.

[1] Henceforth MP. All other references to the Minimalist Program as a whole, including all its developments from the earlier version to more recent ones, will be indicated as MP in roman type.

[2] Words in parentheses and italics are mine.

To sum up, the main concern of the Minimalist Program is not so much to provide the best theory possible to explain how language works (even though such an inquiry is not discarded) but to investigate whether the language system itself has an optimal design (Chomsky 2000: 93).

Basically, it has developed from two key principles which had already been introduced in previous models of syntax, that is, the 'Full Interpretation Principle' (FI) and the 'Principle of Economy'. The former claims that, to be licensed, the structure of a sentence must not contain surplus elements; for example, in phonology, if a symbol in a sentence is without a sensori-motor interpretation, the representation does not qualify at the PF (Phonological Form) interface (Chomsky, 1995: 27). The latter principle, which extends FI, claims that the computation process employed to derive representations must be guided, 'as a last resort' (Chomsky, 1995: 27), by principles of economy, otherwise the representation fails. This means that in the MP all the unnecessary levels that characterised previous models of syntax are eliminated and only the necessary ones are left. Thus, while in the P&P framework there were four levels of representation, D-Structure, S-Structure, PF and LF (Logical Form), in the MP only the last two are left.[3]

According to Chomsky, in fact, they are the only levels absolutely necessary to the linguistic system: one (PF), related to sound, interfaces language with the physical world; the other (LF), related to meaning, interfaces it with the cognitive system. Figures 1a and 1b below show the passage from the previous T model to the model elaborated in the MP:

[3] In the Minimalist Program both D-structure and S-structure are eliminated because they are redundant. The Full Interpretation Principle, in fact, claims that surplus elements, if not deleted, crash derivations. See Chomsky (1995: 167-217).


Fig. 1a T model: Lexicon → D-structure → S-structure → PF [Phonetic Component] and LF [Semantic Component]
Fig. 1b Minimalist Model: Lexicon → PF [Phonetic Component] and LF [Semantic Component]
(Source: Cook and Newson, 1996: 312-313)

Unlike fig. 1a, fig. 1b has both the phonological and the semantic component directly interfaced with the lexicon, which gathers all the 'idiosyncratic' properties of lexical items (Chomsky, 1995: 28), without any other level occurring between them. Drawing items from the lexicon, the model envisaged in fig. 1b generates representations at both the LF and PF interface through a Computational System (CHL – Computational System for Human Language), which allows the learner to build up his/her language structures:

S0 determines the set {F} of properties ('features') available for languages. Each L makes a one-time selection of a subset [F] of {F} and a one-time assembly of elements of [F] as its lexicon LEX, which we can take to be a classical 'list of exceptions', putting aside further issues (Chomsky, 2001: 4).

Chomsky (1995: 28) also maintains that the Computational System is uniform for all languages; what changes are the idiosyncratic features of lexical items. As pointed out above, the Program has undergone various revisions and amendments over the years. The earlier version (Chomsky, 1995) basically consisted of four papers written at different times and collected in the same volume:[4] 'The Theory of Principles and Parameters' (with Howard Lasnik, 1993); 'Some Notes on Economy of Derivation and Representation' (1991); 'A Minimalist Program for Linguistic Theory' (1993) and 'Categories and Transformations' (1995).

[4] When a page number is required, I will refer to MP as (Chomsky, 1995: page number), without distinguishing individual chapters.

Successive papers, in particular Chomsky (1998, 1999, 2002), added new insights to previous findings, shaping the current MP framework as it has been developed so far. But, I want to stress this point once again, work is in progress and what today appears to be up-to-date theory may seem old-fashioned tomorrow.

3.1 Computational Operations and Derivations

Since this research project deals with the acquisition of functional categories in L2, I will neither delve deeply into the technicalities of the Minimalist Program nor discuss the various positions which have come up about specific issues of contention, or the alternative solutions that have been suggested. I will give, instead, first a sketchy view of the computational process, from the selection of lexical items to the final output at the LF and PF representations, and then explore the internal properties of functional items as they are conceived in the MP.[5]

[5] For a more detailed description of the recent architecture of the Minimalist Program, see Hornstein (2001), Epstein (2002) and Radford (2004).

As hinted above, in the MP Chomsky describes the generative procedure of language (i.e. the I-language) as consisting of two components: a computational system and the lexicon. The former generates structural descriptions (henceforth SDs), while the latter characterises the lexical items that appear in the computational system (Chomsky, 1995: 20). The SDs provide information about the properties of each linguistic expression, including its sound and meaning, through PF and LF, which specify the aspects of SDs 'insofar as they are linguistically determined' (Chomsky, 1995: 21).

The lexicon, then, feeds the CHL with the information it requires to start the computation, excluding, Chomsky explains, 'whatever is predictable by principles of UG or properties of the language in question' (Chomsky, 1995: 6). Derivations start with an operation called SELECT, which picks up syntactic objects (i.e. lexical items) from the lexicon and collects them together in a subset of the latter called the Numeration (N), which consists of a sequence of symbolic elements (σ1, σ2, ..., σn) which have certain features; the sequence terminates only if σn is a pair of phonological and logical forms. In the MP, Chomsky makes a distinction between interpretable and uninterpretable features: the former have a semantic interpretation, as for example the φ-features on nominals (e.g. person, number and gender), categorial features like [N] or [V], or features on Tense like [past]. These play a fundamental role in determining the meaning of a word. Uninterpretable features, instead, like Case on a noun or φ-features on a verb, have no way of being interpreted (Chomsky, 1995) and thus play a merely syntactic role, functioning as instructions for movements during derivations. Having no semantic role, uninterpretable features have to be checked against interpretable ones and be deleted at LF, otherwise the derivation crashes.

Interpretability of features is expressed in binary terms: that is to say, a syntactic object either has or does not have a specific feature. In the 1999 framework, however, Chomsky corrected such an assumption by proposing that uninterpretable features enter the derivation without a specific value. Such an amendment was made necessary to correct what he called 'imperfections' (Chomsky, 1999) concerning interpretability at LF envisaged in the previous model. Interpretability, in fact, is a semantic notion, and thus the syntactic system, which forms a separate module, has no way of distinguishing between interpretable and uninterpretable features, having no access to the semantic system. The consequence is that, being incapable of determining which features are uninterpretable, it cannot delete them, and as a result the derivation crashes.

Chomsky's solution to this problem was to introduce a valued vs unvalued system of features: interpretable features enter the derivation valued, while uninterpretable features enter the derivation unvalued and are valued against the former during the computation. Once an uninterpretable feature has been valued and found to match an interpretable one, it can be deleted and the derivation converges. Here, however, a problem arises: if at Spell-Out all features are valued, how can the CHL distinguish between originally valued and unvalued features and thus determine those which have to be deleted? Again, Chomsky's solution was to introduce 'phase level memory' (Chomsky, 2001), according to which operations apply freely and at the same time within a phase. The computational system is able to store and evaluate the various operations which take place in each phase. Once a phase has been completed, through an operation called TRANSFER the derivation is submitted to PF and LF, where the features valued at LF are eliminated and those at PF maintained. I will show how all this works through a practical example in a while.

Another fundamental departure from the previous P&P model is that in the MP Chomsky abandons X-bar theory and replaces it with an operation called MERGE, which takes items from the lexicon and combines them together. By merging two syntactic objects, A and B, a new, more complex object K is obtained, as shown in (1):

(1) [K A B]

This new object can be further merged with another syntactic object, for example C to get a new object D (2)

(2) [D C [K A B]]

The combination of two elements thus forms an object consisting of a label and of the set of the two combined elements. When MERGE combines two syntactic objects, one projects its features and the other does not; Chomsky maintains that it is the head that determines the label (Chomsky, 1995b: 398). A syntactic object that projects no further is called a maximal projection, while an object that does not project is called a minimal projection. In the latter case, if an object does not project, it can be a head (since it is a lexical item) and a maximal projection at the same time (because it cannot project further). Chomsky calls this new model of representation 'bare phrase structure' (Chomsky, 1995b), which is characterised by the fact that projection levels are dispensed with. For example, substituting the letters in (1) above with lexical items, in (3) we get:

(3) [the [the] [star]]

where the node [star] is both a head and a maximal projection, because it cannot project further.
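Since Merge is defined set-theoretically, the labelling mechanism just illustrated can be pictured with a small Python sketch; the function names and the convention that the head is the first argument are my own simplifying assumptions, not part of Chomsky's formalism:

# Toy model of Merge under bare phrase structure: two syntactic objects
# form a set-like object whose label is projected by the head (here, by
# convention, the first argument). Purely illustrative.

def label_of(so):
    # A lexical item labels itself; a complex object contributes its label.
    return so if isinstance(so, str) else so["label"]

def merge(head, complement):
    # Combine two syntactic objects; the head determines the label.
    return {"label": label_of(head), "daughters": (head, complement)}

K = merge("A", "B")              # (1): A and B combine, the head projects
D = merge("C", K)                # (2): the new object is merged with C
the_star = merge("the", "star")  # (3): the same operation with lexical items
print(the_star["label"])         # -> 'the'; 'star' projects no further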

Beyond the operation MERGE, the other fundamental operation is AGREE which, as pointed out above, is responsible for matching the features of two syntactic objects, constraining the set of possible combinations. The agreement relation between two objects obtains through probes and goals: the former are functional heads such as, for instance, T(ense); the latter are referential expressions and expletive nominals which are within their c-command domain and with which probes have to agree. Chomsky maintains that a head can have an agreement relation with a matching goal only if the latter has its features unvalued (i.e. it is active). If, on the contrary, the goal has its features valued, the agreement relation fails.[6]

For example, in the case of a phrase like (4) below:

(4) ‘Babies cry’

The syntactic object babies enters the derivation with a number of valued interpretable φ-features, [3rd Person] and [Plural], and also with an unvalued uninterpretable Case feature [uCase], where the notation u- stands for uninterpretable (Pesetsky and Torrego, 2001). The verb cry, in turn, enters the derivation with a set of unvalued uninterpretable φ-features, [uPerson] and [uPlural], and with a valued interpretable Case feature [Case]. The verb's unvalued features match the noun's valued ones and the noun's unvalued feature matches the verb's valued one. As a result, unvalued features can be deleted and the derivation converges, as shown in (5).

(5) 'Babies cry'
babies: [N]; [Person], [Number]; [uCase]
cry: [V]; [uPerson], [uNumber]; [Case]

[6] According to the defective intervention effect, a probe cannot establish a matching relation with a goal which is lower down in the tree.
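The valuation mechanism exemplified in (4)-(5) can be sketched in Python as follows; the dictionary representation, the function name agree, the use of None for unvalued features and the choice of 'nom' as the value of the verb's Case feature are my own illustrative assumptions:

# Toy model of Agree as feature valuation: unvalued (uninterpretable)
# features are represented as None and are valued against a matching
# valued feature on the other syntactic object. Illustrative only.

babies = {"cat": "N", "Person": 3, "Number": "pl", "Case": None}      # [uCase]
cry    = {"cat": "V", "Person": None, "Number": None, "Case": "nom"}  # [uPerson], [uNumber]

def agree(a, b, features):
    # Value every unvalued feature on one item against its valued
    # counterpart on the other; once valued, the uninterpretable
    # features become deletable and the derivation can converge.
    for f in features:
        if a[f] is None and b[f] is not None:
            a[f] = b[f]
        elif b[f] is None and a[f] is not None:
            b[f] = a[f]
    return all(a[f] is not None and b[f] is not None for f in features)

converges = agree(cry, babies, ["Person", "Number", "Case"])
print(cry, babies, "converges" if converges else "crashes")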


In the MP, lexical items (LIs) are assigned an index (i) which signals the number of times they are accessed. This number has to be reduced to zero because, according to the principle of Full Interpretation, the derivation crashes if unused objects remain in the numeration. This means that if a word in the numeration has index 1 it must be accessed only once and then introduced into the derivation; conversely, if a word has index 2, it must be accessed twice. In the latter case, the first occurrence of the lexical item (LI1) will reveal different properties from the second occurrence (LI2) at LF and is processed differently by the CHL.
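The role of the index can be pictured with a very small sketch; representing the numeration as a Python dictionary of counts is my own shorthand, not Chomsky's notation:

# Toy numeration: each lexical item is paired with an index recording how
# many times it may still be accessed. Full Interpretation requires every
# index to reach zero, otherwise the derivation crashes.

numeration = {"I": 1, "love": 1, "flowers": 1}

def access(item):
    # Access an item and decrease its index.
    if numeration[item] == 0:
        raise ValueError(f"{item} has already been used up")
    numeration[item] -= 1
    return item

for word in ("I", "love", "flowers"):
    access(word)

# No unused object remains, so the derivation does not crash on this count.
assert all(count == 0 for count in numeration.values())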

To show how a derivation is built up concretely, consider a simple sentence like I love flowers, with the caveat, however, that the example I give is very sketchy and that only operations specifically pertaining to functional items, such as feature checking and agreement, have been highlighted.[7]

[7] For a detailed overview of how derivations are built up in a minimalist fashion, see Radford (2004) and Hornstein (2005). The example above is a highly simplified adaptation from the former.

Following Radford (2004: 184), let's assume that the sentence I love flowers is a CP headed by a null declarative complementiser ø, containing a TP complement headed by T, which carries both a valued feature (e.g. [Pres-Tns]) and unvalued φ-features (e.g. [u-Pers], [u-Num]) and a Case feature (e.g. [u-Case]). Once such a syntactic structure has been built up, it is spelled out by the PF component in a bottom-up, cyclic fashion and the derivation proceeds in the following way:

1. The verb love merges with the noun flowers and forms the VP love flowers. The noun carries with it a bundle of features, either valued, such as the φ-features of person [3-Pers] and number [Pl-Num], or unvalued, such as Case [u-Case]:

[VP [V love: assign acc-Case] [N flowers: 3-Pers, Pl-Num, u-Case]]


2. The VP which results from the merger of love and flowers merges in turn with a null transitive light verb, which carries unvalued φ-features [u-Num], [u-Pers], and forms the v' below:

[v' [v ø: u-Pers, u-Num, assign acc-Case] [VP [V love] [DP flowers: Pers, Num, u-Case]]]

3. Being affixal in nature, the null light verb (marked ø) needs a host word to attach to, and thus the verb love raises to it. The latter is transitive and thus selects an agent, which it projects onto the specifier position of vP.[8] In this way the pronoun [PRN] agent I enters the derivation with its bundle of features: [1-Pers], [Sing-Num], [u-Case]. At this point two options are available: (i) v' can merge with the external argument; (ii) v' can probe the goal flowers to check its features. Given Merge over Move, v' first merges with the external argument and then checks the features of flowers, as shown below:

[vP [PRN I: 1-Pers, Sing-Num, u-Case] [v' [v ø+love: u-Pers, u-Num, assign acc-Case] [VP [V love] [DP flowers: Pers, Num, u-Case]]]]

[8] Actually, Chomsky's notation for a transitive vP is v*P (Chomsky, 1999).


4. The checking relation between the SOs (Syntactic Objects) love and flowers is called AGREE, and through it the null light verb ø+love probes the goal flowers, which has an unvalued Case feature [u-Case]. The goal (flowers), in turn, identifies and values the uninterpretable φ-features on the probe and, having all its own φ-features complete, COPIES them onto the null light verb's features, which are then deleted. The transitive light verb does the same for the unvalued Case feature on flowers, valuing, copying and deleting it, as shown above.[9]

[9] This happens because, as Radford reports (2004: 356), in recent work Chomsky has suggested that a transitive light verb having person and number features can function as a probe to assign accusative case to a goal with which it matches in person and number and which has an unvalued Case feature.

5. According to the principle that all finite clauses are tensed, with either an overt or a null T constituent, the vP above also merges with T. The latter carries its own set of features, either valued or unvalued: [Pres-Tns], [u-Num], [u-Pers], [EPP]. It then probes and identifies the pronoun I as the nearest goal which still has an unvalued feature [u-Case] and assigns nominative case to it. Conversely, the pronoun I values and deletes the unvalued person and number features on T. In the PF component, the morphological features of T are also lowered onto the verb love by the operation Affix Hopping; in this case, however, the verb has a zero affix (notated below as the [AFFIX] attached to the raised verb):

[T' [T: Pres-Tns, u-Pers, u-Num, EPP] [vP [PRN I: 1-Pers, Sing-Num, u-Case] [v' [v ø+love+ø[AFFIX]: u-Pers, u-Num, assign acc-Case] [VP [V love] [DP flowers: Pers, Num, u-Case]]]]]


6. The EPP feature of T then raises the PRN from the specifier position of vP to the specifier position of TP, thus deleting it. The derivation which is obtained is:

[TP [PRN I: 1-Pers, Sing-Num, Nom-Case] [T' [T: Pres-Tns, u-Pers, u-Num, EPP] [vP [PRN I] [v' [v ø+love: u-Pers, u-Num] [VP [V love] [DP flowers: Pers, Num, u-Case]]]]]]


7. I love flowers is a declarative clause, and thus it has declarative force which in this case is not overtly signalled by a lexical complementizer (e.g. that, whether). Despite this, the declarative force it possesses is present and is notated as a null constituent ø.

The final derivation of the phrase I love flowers appears like the one below:

[CP [C ø] [TP [PRN I: 1-Pers, Sing-Num, Nom-Case] [T' [T: Pres-Tns, u-Pers, u-Num, EPP] [vP [PRN I] [v' [v ø+love: u-Pers, u-Num, assign acc-Case] [VP [V love] [DP flowers: Pers, Num, u-Case]]]]]]]
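The seven steps can also be strung together in a schematic Python replay; the data structures, the flat treatment of feature checking and the function names below are mine and drastically simplify the operations described in steps 1-7:

# Highly simplified replay of steps 1-7 for 'I love flowers'. Structures
# are nested tuples (label, daughters...); unvalued features are tracked
# in a separate set and removed once they have been valued by Agree.

unvalued = {("flowers", "Case"), ("v", "Pers"), ("v", "Num"),
            ("T", "Pers"), ("T", "Num"), ("I", "Case")}

def merge(label, *daughters):
    return (label, *daughters)

def value(item, feature):
    # Agree: an unvalued feature is valued and so becomes deletable.
    unvalued.discard((item, feature))

VP    = merge("VP", "love", "flowers")            # step 1
v_bar = merge("v'", "ø+love", VP)                 # step 2: love raises to the light verb
vP    = merge("vP", "I", v_bar)                   # step 3: external argument merged first
for item, feat in [("flowers", "Case"), ("v", "Pers"), ("v", "Num")]:
    value(item, feat)                             # step 4: Agree between v and flowers
T_bar = merge("T'", "T", vP)                      # step 5: T is merged
for item, feat in [("I", "Case"), ("T", "Pers"), ("T", "Num")]:
    value(item, feat)                             # step 5: Agree between T and I
TP    = merge("TP", "I", T_bar)                   # step 6: EPP-driven raising of I
CP    = merge("CP", "ø", TP)                      # step 7: null declarative C

# The derivation converges only if no unvalued feature survives.
print("converges" if not unvalued else f"crashes: {unvalued}")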


3.2 Phases in the Minimalist Program

A point worth illustrating is how derivations are computed in terms of cognitive process. Chomsky (1998: 107) maintains that derivations proceed phase by phase, a phase being an SO (Syntactic Object) derived by a choice of LAi (a subset of the Lexical Array). A phase can be a CP or a vP, but not a TP or a verbal phrase headed by a head lacking φ-features (Chomsky, 1998: 106). Phases are also 'propositional' and, once one of them has been completed, it is transferred to the semantic and phonetic interfaces to be spelled out, thus remaining impenetrable to further computation due to the PIC (Phase Impenetrability Condition):

'In a phase α with head H, the domain of H is not accessible to operations outside α; only H and its edge are accessible to such operations' (Chomsky, 1998: 108).

The explanation for the need of a phase-by-phase process derives from the recognition of the limited capacity of human memory, which is unable to process much information simultaneously. Thus, locality conditions require that movement between a probe and an active goal be as short as possible and chunked into phases (Chomsky, 2000: 108). To give an example of what this concretely means, let's consider again the derivation of I love flowers above. A phase, as pointed out above, is a CP or a vP; thus, after love and flowers have been merged and sent to PF and LF to be spelled out, no other object can enter the derivation (e.g. neither adjuncts nor modifiers can be added to the computation: love flowers #so much, love #field flowers). Derivations, in fact, make a one-time selection (Chomsky, 1998: 100) of the lexical array from the lexicon, which they map onto the expression to be computed without any further access to the lexicon itself. Such a view of the computation process makes the distinction between overt (i.e. before Spell-Out) and covert cycles (at LF) in the MP collapse, since computation takes place simultaneously, in a cyclic Spell-Out fashion.
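A minimal sketch of the Phase Impenetrability Condition, assuming a deliberately crude split of a phase into head, edge and domain (the class and attribute names are my own):

# Once a phase (CP or vP) is transferred, its domain is frozen: only the
# head and the edge remain accessible to later operations. Illustrative.

class Phase:
    def __init__(self, head, edge, domain):
        self.head, self.edge, self.domain = head, edge, domain
        self.transferred = False

    def transfer(self):
        # Send the phase to PF and LF; its domain becomes impenetrable.
        self.transferred = True

    def accessible(self):
        # What a higher probe may still operate on.
        if self.transferred:
            return [self.head, self.edge]
        return [self.head, self.edge, self.domain]

vP = Phase(head="v", edge="I", domain="love flowers")
vP.transfer()
print(vP.accessible())  # ['v', 'I']: 'love flowers' can no longer be probed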

To sum up, Chomsky (2000: 101) sees the procedure which generates a language L in the way reported below (a minimal sketch follows the list):

a. Select [F] from the universal feature set F.
b. Select Lex, assembling features from [F].
c. Select LA from Lex.
d. Map LA to Exp, with no recourse to [F] for narrow syntax.
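The four steps (a)-(d) can be read as a pipeline; the sketch below is a deliberately naive Python rendering, in which the sample features and lexical items are invented for illustration:

# Skeleton of the procedure (a)-(d): select [F], build the lexicon, pick a
# lexical array, map it to an expression. All concrete values are toy ones.

F = {"person", "number", "tense", "case", "wh", "definiteness"}  # universal set

def select_features(universal):           # (a) a one-time selection of [F]
    return universal & {"person", "number", "tense", "case"}

def build_lexicon(selected):              # (b) assemble lexical items from [F]
    return {"I": {"person", "case"} & selected,
            "love": {"tense"} & selected,
            "flowers": {"number"} & selected}

def select_LA(lexicon, words):            # (c) select a lexical array from Lex
    return {w: lexicon[w] for w in words}

def map_to_Exp(lexical_array):            # (d) map LA to Exp, with no further access to [F]
    return " ".join(lexical_array)        # stand-in for the actual computation

LA = select_LA(build_lexicon(select_features(F)), ["I", "love", "flowers"])
print(map_to_Exp(LA))   # -> 'I love flowers'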

The operations which enter the CHL component after the selection of LA are: Merge, which takes two SOs (α, β) to form one (K (α, β)); Agree, which sets up a relation between an LI (α) and a feature F in another item within its domain; and Move, which combines Merge and Agree.

In terms of learnability, such a procedure clearly makes the burden of acquisition shift from syntax to the lexicon, which drives both the selection of LIs and the operations in the CHL:

'Possibly, as proposed by Hagit Borer, the parameters are actually restricted to the lexicon, which would mean that the rest of the I-language is fixed and invariant, a far-reaching idea that has proven quite productive' (Chomsky, 1991: 23).

3.3 Lexical and Functional Categories

From what I have illustrated above, what properties, one wonders, do lexical items contain to be able to provide all the (optimal) information the CHL needs to trigger the various operations (Merge, Agree, Move) that start derivations? In the MP Chomsky divides lexical items into two categories: lexical items proper, which have 'substantive' content (i.e. content words), and functional items, which have no 'substantive' content (Chomsky, 1995: 54). The former are the nouns, verbs, adjectives and prepositions which provide the 'atomic elements' from which, through various operations, the interface levels LF and PF are built. They also head the NP, AP, VP and PP phrases they project. Each of these atomic elements or 'primes', Chomsky (1995: 34) assumes, is a feature complex which can be symbolically represented in the following way:

a. N = [+N, -V]
b. V = [-N, +V]
c. A = [+N, +V]
d. P = [-N, -V]

From the scheme above, it appears clear that words are divided, basically, into two broad categories, nouns and verbs; all the others are defined according to whether they share properties belonging to either or both of them. Thus a noun has nominal properties but not verbal ones, while verbs have verbal properties but not nominal ones. Adjectives, in turn, share some of the properties typical of nouns and verbs; they can, for example, be attached to a prefix like un- and form a lexical item like unhappy, in the same way as the prefix un- can be added to verbs to form an item like unsettle. Thus the prefixation of a morpheme like un- is a characteristic of verbs and adjectives but not of nouns or prepositions (e.g. *unboy, *unthrough). With nouns, instead, adjectives can share inflectional systems in languages which allow case marking. Prepositions, on the contrary, do not share any of the features characterising nouns and verbs.
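The [±N, ±V] system lends itself to a small lookup table; the sketch below, with helper names of my own, simply reads the shared properties off the feature pairs:

# The four lexical categories as combinations of the binary features N and V.

CATEGORIES = {
    "N": {"N": True,  "V": False},
    "V": {"N": False, "V": True},
    "A": {"N": True,  "V": True},
    "P": {"N": False, "V": False},
}

def takes_un_prefix(cat):
    # un- prefixation groups together the [+V] categories (verbs, adjectives).
    return CATEGORIES[cat]["V"]

def can_carry_case_inflection(cat):
    # Case inflection groups together the [+N] categories (nouns, adjectives).
    return CATEGORIES[cat]["N"]

print([c for c in CATEGORIES if takes_un_prefix(c)])            # ['V', 'A']
print([c for c in CATEGORIES if can_carry_case_inflection(c)])  # ['N', 'A']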


Apart from (1) categorial features,[10] each of the elements illustrated above also contains a set of other features of various kinds: (2) syntactic (or formal) and semantic features and (3) phonological features. Formal and semantic features operate, respectively, the c-selection of syntactic roles and the s-selection of thematic roles (Chomsky, 1995: 54).

[10] As regards the acquisition of syntactic categories, Chomsky writes: 'Grimshaw (1981) argues that the acquisition of the syntactic category of a lexical item is based in part on the notion "canonical structural realization (CSR)". The CSR of a physical object is N, that of an action is V, and so on. In the absence of evidence, the child will assume that a word belongs to its CSR – that, say, a word referring to an action is a verb. As Grimshaw indicates, while such "semantic bootstrapping" might constitute part of the acquisition procedure, the resulting steady-state lexicon has no such requirement. Languages commonly have nouns, like destruction, referring to actions (as well as verbs, like be, that don't refer to actions)' (Chomsky, 1995: 32).

To go back to functional items, Chomsky (1995: 54) asserts that they too are constituted by a bundle of features but, he points out, they do not establish θ-marking relations. Each of them, instead, is parameterized and has specific selectional properties even though it has no semantic content. In the next chapter I will discuss how this view has been rejected in recent research work in SLA. Among the main functional categories are Complementizers (C), Tense (T), Light Verb (v), Determiners (D), Negation (Neg), etc., which carry specific grammatical meanings. Chomsky (1998) calls the first three of them core functional categories (CFCs), all being characterised by specific properties: C(omplementiser) expresses force and mood, has φ-features and optional EPP features for wh-phrases; T(ense) expresses tense and events, has φ-features in terms of subject agreement and obligatory EPP features; finally, v expresses transitivity (notated as v*), selects verbs, and has both φ-features (in terms of object agreement) and optional EPP features when object shift takes place (i.e. second Merge). The categories above are extensively tackled in the next chapter. For the present purpose, the sketchy picture above is sufficient to show that functional categories have, like substantive categories, semantic and syntactic properties which they can project, thus contributing to the meaning of derivations. They differ, however, from substantive categories in that they form a closed class and are, in various ways, defective.
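For ease of comparison, the properties just listed for the three core functional categories can be collected in a small table-like structure; the field names are my own shorthand for what the text says:

# The core functional categories C, T and v and the properties ascribed to
# them in the text: what they express, their phi-features and their EPP status.

CFC = {
    "C": {"expresses": "force / mood",   "phi": True, "EPP": "optional (wh-phrases)"},
    "T": {"expresses": "tense / events", "phi": True, "EPP": "obligatory"},
    "v": {"expresses": "transitivity",   "phi": True, "EPP": "optional (object shift)"},
}

for head, properties in CFC.items():
    print(head, properties)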

Among formal features, lexical items can also have intrinsic and optional features. Intrinsic features are either idiosyncratic or predictable from other (semantic) properties of the lexical item and are listed explicitly in the lexical entry. Optional features, instead, are added to the lexical item as it enters the NUMERATION (Chomsky, 1995: 231). They are called optional, Chomsky explains, because nothing intrinsic to a lexical item can tell us whether a particular occurrence of it is plural or singular, nominative or accusative (even though in some cases [number] can be idiosyncratic, as, for instance, in the word scissors) (Chomsky, 1995: 236). Just to give an example, intrinsic features for the word flowers can be the categorial feature [+N], the person feature [3 person] and the gender feature [-human], while those for a verb like love can be the categorial feature [+V] and the Case feature [assign accusative]. As regards optional features, an example for the word flowers can be the number feature [+plural], while for the verb love the Tense feature (e.g. past, present, etc.). What this means is that if the word flowers is selected in the numeration, it must include the intrinsic categorial feature [N], the gender feature [-human] and the person feature [3 person]. It can also include, of course, optional features like the number feature [+plural] and the case feature [accusative]. Another key concept that Chomsky introduces in the early MP, and one that has some bearing on the acquisition process, is that the lexicon provides an 'optimal coding'[11] for words' idiosyncrasies (Chomsky, 1995: 235). That is to say, any word has a set of properties, some of which are idiosyncratic and some of which belong to general principles, either of UG or of a specific language system. The optimal coding for the lexical entry of a word specifies the former properties and abstracts from the latter. It is this optimal coding that permits the construction of derivations at both the PF and LF interface without crashing them. In fact, it must satisfy certain 'natural economy conditions' and avoid 'superfluous' steps, otherwise the derivation is blocked.

The language L thus generates three relevant sets of computations: the set D of derivations, a subset DC of convergent derivations of D, and a subset DA of admissible derivations of D. FI determines DC and the economy conditions select DA. (Chomsky, 1995: 220)

[11] 'We want the initial array A, whether a numeration or something else, not only to express the compatibility relation between π and λ, but also to fix the reference set for determining whether a derivation from A to (π, λ) is optimal – that is, not blocked by a more economical derivation. Derivation of the reference set is a delicate problem, as are considerations of economy generally' (Chomsky, 1995: 227).


Once again, consider the lexical entry for the word flowers. It lists the categorial noun feature [+N] but not the Case feature, since the latter is a general principle of UG which can be derived from its belonging to the category [+N]. In the case of verbs, instead, the lexical entry for love will explicitly list the categorial feature [+V] but not Tense and φ-features, because these are already contained in [V]. Thus, to conclude, an optimal coding for a lexical entry will list its phonological matrix, its semantic representation and all the formal properties that are not predictable from other features.
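Optimal coding can be visualised as lexical entries that list only what is not predictable; the entries and the predictability test below are my own toy encoding of the flowers/love example:

# Toy lexical entries under 'optimal coding': only idiosyncratic properties
# are listed; Case on nouns and tense/phi on verbs are omitted because UG or
# the categorial feature already supplies them.

LEXICON = {
    "flowers": {"phon": "flowers", "sem": "FLOWER",
                "formal": {"cat": "N", "person": 3, "gender": "-human"}},
    "love":    {"phon": "love", "sem": "LOVE(x, y)",
                "formal": {"cat": "V", "case_assigned": "accusative"}},
}

def predictable(feature, category):
    # Features derivable from the category need not appear in the entry.
    return (category == "N" and feature == "Case") or \
           (category == "V" and feature in ("Tense", "phi"))

assert "Case" not in LEXICON["flowers"]["formal"]
assert predictable("Case", "N") and predictable("Tense", "V")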

A further crucial distinction Chomsky makes about lexical items is that between interpretable and uninterpretable features, already introduced above. Chomsky (1995), however, makes clear that there is only a loose relation between intrinsic and optional features on the one hand, and interpretable and uninterpretable ones on the other. For example, the optional feature [±plural] on nouns is interpretable (and for this reason it is not eliminated at LF), while the intrinsic Case feature on verbs is uninterpretable and thus eliminated at LF. In general, the interpretable features are the categorial features (e.g. [N], [V] and so on), the φ-features of DPs and the tense features on T (e.g. [past], [present]); all the others are uninterpretable.

To add to the picture so far illustrated, in the MP Chomsky also holds that the features that attract most interest are those which refer to semantic selection and to the properties of lexical heads (i.e. verbs, nouns, adjectives and pre- and postpositions). These properties specify the argument structure of the heads so as to determine how many arguments and which semantic roles they may take. The lexical features of a verb like buy, for instance, must specify that it assigns an agent role, a theme role and a goal role. The association between theta-roles and argument positions, Chomsky claims, is to a large extent predictable (1995: 54). This means that the role 'agent' cannot be assigned to a complement; as a consequence, since this association is predictable, it does not need to be inserted in the lexical entry of the verb.[12]

[12] As regards the relationship between c-selection and s-selection, Chomsky maintains that 'the child is simultaneously presented with evidence bearing on both s-selection (given that sentences are presented in context and assuming that the relevant contexts can be determined) and c-selection. It is reasonable to assume that both aspects of the evidence contribute to the development of the knowledge. The converse situation is also possible, with c-selection providing information about meanings of the verbs. For example, exposure to a sentence containing a clausal complement to an unfamiliar verb would lead the learner to hypothesise that the verb is one of propositional attitude' (Chomsky, 1995: 31-32).


A final remark on lexical items is needed to briefly illustrate how Chomsky deals with derivational and inflectional processes in the MP. He makes a distinction between them by asserting that inflectional processes are internal to the lexicon while derivational processes involve computational operations like word formation and checking. The example he gives to demonstrate this point is the formation of the past tense of the verb walk. The root [walk] is contained in the lexicon, with all its idiosyncratic properties (i.e. sound, meaning and form) and its inflectional feature [tense], one value of which is past. Platzack (1996: 370-371) schematically illustrates the bundle of features contained in words in the following way:

Word = (α, Infl1 ... Infl2), where α = [R-morph1-...-morphn]

where α is a morphological complex, R is a root and morph1 ... morphn are the visible inflectional features, while Infl1 ... Infl2 are the abstract functional categories.

Following Platzack, if we match the features of the verb walked to the configuration above, we obtain the following representation:

V = (walked, φ-features[obj], past[tns], φ-features[subj], finite), where walked = (walk + ed[past])

The underlying structure above shows that the set of expressed morphemes does not completely match the underlying abstract features (Platzack 1996: 317). The verb walked, in fact, has two φ-features, one establishing its relations with the subject and the other with the object. Moreover, it also carries a finite feature which places it in a specific time and a specific space.

Given the representation above, a computational rule, which Chomsky (1995: 20) calls R, either associates [walk] to [tense] or, the other way round, [tense] to [walk]. At this point two possibilities arise: either [walk] is selected from the lexicon as it is and then matched with a [past] feature through rule R, or it is formed through redundancy rules, with the properties [walked] and [past] already specified in the lexicon (Chomsky, 1995: 20). The verb so formed is checked by being moved to the right position, in this case T, and there licensed. If there were no suitable T position, the derivation would crash.[13]

[13] Hawkins (2001: 341) notices that a consequence of such an operation is that Affix Hopping is no longer needed. If the verb walk, he argues, picks up the past-tense suffix -ed in the lexicon, then there is no possibility for the latter to be lowered from its position in T to VP to be added to the verb. However, the past-tense feature -ed has to be checked in some way. Hence Hawkins posits that the difference between Italian, a language in which lexical verbs raise, and English, a language in which they do not, is that in the former lexical verbs are checked in T while in the latter in VP. But this, I argue, implies that inflection itself lowers, and this does not seem plausible to me.

Conclusions

To conclude, in the MP the lexicon drives the computational process by providing all the information the CHL needs. It contains a 'list of exceptions' (Chomsky, 1995: 235) which do not belong to general principles, whether of UG or of a specific language. In terms of acquisition this means that these exceptions are in a sense parameterized, because words contain bundles of features at various levels – phonological, syntactic, semantic – that merge, agree and move, thus generating derivations:

Acquiring a language involves at least selection of the features [F], construction of lexical items Lex, and refinement of CHL in one of the possible ways – parameter setting. (Chomsky, 2000: 100)

In the light of this, it appears clear that much of the burden of language acquisition has shifted from syntax to the lexicon, each lexical item having a set of features which vary from language to language; what remains uniform in all languages is the computational system. Yet, despite such a promising new perspective, crucial questions still need an answer. How do language learners, for example, access all the properties (features) contained in functional categories, given that the input underdetermines the information they need? The Program, in any case, is still in progress and future contributions will certainly provide answers to issues as yet unexplored. As regards SLA, the picture is even more complex, since many hypotheses have been advanced, especially as regards the acquisition of functional categories, without producing a unitary framework.
