The theoretical framework used by most researchers of child language development is Chomsky's theory of generative grammar. The theory has changed considerably in recent years. The older model, which is still often used in child language research, is called Principles and Parameters Theory, while the newest version is known as Minimalism (cf. Chomsky 1981, 1986, 1995, 2000). Within this framework, it is commonly assumed that children are born with an innate universal grammar consisting of principles and parameters that define the space within which the grammars of individual languages may vary. Grammatical development is seen as a process whereby the parameters of universal grammar are set to a language-specific value by linguistic triggers in the input.
The theoretical framework used in this study is very different; it is based on recent work in functional and cognitive linguistics. The functional–cognitive approach subsumes a variety of related frameworks (cf. Croft 1995; Newmeyer 1998). Two of them are especially important to the current investigation: construction grammar, and the usage-based model. Construction grammar subsumes a family of grammatical theories in which constructions are considered the basic units of grammar (cf. Fillmore, Kay, and O'Connor 1988; Lakoff 1987; Langacker 1987a; Fillmore and Kay 1993; Goldberg 1995; Croft 2001); and the usage-based model comprises various network models in which linguistic knowledge is shaped by language use (cf. Bybee 1985, 1995, 2001; Langacker 1987a, 1991; Barlow and Kemmer 2000; Elman, Bates, Johnson, Karmiloff-Smith, Parisi, and Plunkett 1996).
Like complement clauses, relative clauses emerge from simple nonembedded sentences. The earliest relative clauses occur in presentational constructions that consist of a copular clause and a relative clause including an intransitive verb. Although these constructions are biclausal they denote only a single situation. The presentational copular clause does not serve as an independent assertion; rather, it functions to establish a referent in focus position, making it available for the predication expressed in the relative clause. The whole sentence thus contains only a single proposition, leading children frequently to conflate the two clauses: many of the early relative constructions are syntactic blends (or amalgams) in which the relative clause and the matrix clause are merged into a single syntactic unit. As children grow older, they begin to use more complex relative constructions. In contrast to the early presentational relatives, the relative constructions produced by older children denote two situations in two full-fledged clauses. Based on these data I argue that complex sentences including relative clauses develop via clause expansion: starting from presentational relatives that denote a single situation, children gradually learn the use of complex relative constructions in which two situations are expressed by two separate full clauses.
The bulk of the literature on the acquisition of complex sentences has been concerned with children's comprehension of multiple-clause structures in experiments. Almost all of these studies found that children have great difficulties in understanding complex sentences until well into the school years. For instance, Chomsky (1969) reported that 5- to 9-year-olds often misinterpret certain types of nonfinite complement clauses, and Sheldon (1974) and Tavakolian (1977) observed that relative clauses create tremendous difficulties at least until the early school years. Similarly, Piaget (1948) reported that children as old as 7 years tend to confuse cause and effect in causal clauses, and Clark (1971) found that 3- to 5-year-olds have difficulties comprehending temporal clauses marked by after and before. Many of these studies have argued that children's comprehension of complex sentences involves an interpretation strategy, such as the conjoined clause analysis or the order-of-mention principle, which seems to suggest that children learn very little about complex sentences during the preschool years.
Although young children have great difficulties in comprehending complex sentences in experiments, they use them at a very early age. As we have seen throughout this book, children begin to produce a wide variety of complex sentences during the preschool years. The earliest complex sentences emerge around the second birthday. They include the complement-taking verb wanna and a bare infinitive. Shortly thereafter, children begin to combine clauses by and.
In this study complex sentences are defined as grammatical constructions that express a specific relationship between two (or more) situations in two (or more) clauses. The definition involves three important notions: (i) construction, (ii) situation, and (iii) clause. The notion of construction has been discussed in detail in the previous chapter; here I concentrate on the two other notions, situation and clause. My definition of these terms is based on Langacker's notions of process, which I call situation, and relational predicate (cf. Langacker 1991:chs. 5–7).
A situation is a conceptual unit that has two important properties: situations are relational and temporal. They can be seen as conceptualized scenes involving a set of entities that are arranged in a specific constellation or engaged in an activity. Situations can be divided into several types: (i) situations that are stative vs. situations that involve a change of state (e.g. The cat is sitting on the table vs. The cat is running); (ii) situations that are conceptually bound vs. situations that are conceptually unbound (e.g. The ball hit the wall vs. The ball is rolling); and (iii) situations that are punctual vs. situations that are temporally extended (e.g. He recognized the mistake vs. He learned how to play the guitar) (cf. Vendler 1967; Van Valin and LaPolla 1997). Situations must be distinguished from things. Things are nonrelational and atemporal. Like situations, they can be divided into several types, e.g. objects, persons, and places.
Hitherto, we have assumed a simple model of clause structure in which canonical clauses are CP+TP+VP structures. However, in §5.6 we suggested that it is necessary to ‘split’ TP into two different auxiliary-headed projections in sentences like He may be lying – namely a TP projection headed by the T constituent may and an AUXP projection headed by the AUX constituent be; and in §7.3 we suggested that it may be necessary to posit a further Asp(ect) head in clauses to house the preposed verb in quotative structures like ‘We hate syntax’, said the students. In this chapter, we go on to suggest that CPs, VPs and NPs should likewise be split into multiple projections – hence the title of the chapter. We begin by looking at arguments that the CP layer of clause structure should be split into a number of separate projections: Force Phrase, Topic Phrase, Focus Phrase and Finiteness Phrase. We then go on to explore the possibility of splitting verb phrases into two or more separate projections – an inner core headed by a lexical verb, and an outer shell headed by a light verb (with perhaps an additional projection between the two in transitive verb phrases). Finally, we turn to look at evidence for a split projection analysis of NPs.
In this chapter, we take a look at the syntax of agreement. We begin by outlining the claim made by Chomsky in recent work that agreement involves a relation between a probe and a goal (though it should be noted that the term goal in this chapter is used in an entirely different way from the term goal which was used in §7.5 to denote the thematic role played by a particular kind of argument in relation to its predicate). We look at the nature of agreement, and go on to show that nominative and null case-marking involve agreement with T. Finally, we explore the relationship between the [EPP] feature carried by T and agreement, and look at the consequences of this for control infinitives on the one hand and raising infinitives on the other.
Agreement
In traditional grammars, finite auxiliaries are said to agree with their subjects. Since (within the framework used here) finite auxiliaries occupy the head T position of TP and their subjects are in spec-TP, in earlier work agreement was said to involve a specifier–head relationship (between T and its specifier). However, there are both theoretical and empirical reasons for doubting that agreement involves a spec–head relation. From a theoretical perspective (as we saw in §4.9), Minimalist considerations lead us to the conclusion that we should restrict the range of syntactic relations used in linguistic description, perhaps limiting it to the c-command relation created by merger.
In broad terms, this book is concerned with aspects of grammar. Grammar is traditionally subdivided into two different but interrelated areas of study – morphology and syntax. Morphology is the study of how words are formed out of smaller units (called morphemes), and so addresses questions such as ‘What are the component morphemes of a word like antidisestablishmentarianism, and what is the nature of the morphological operations by which they are combined together to form the overall word?’ Syntax is the study of the way in which phrases and sentences are structured out of words, and so addresses questions like ‘What is the structure of a sentence like What's the president doing? and what is the nature of the grammatical operations by which its component words are combined together to form the overall sentence structure?’ In this chapter, we begin (in §1.2) by taking a brief look at the approach to the study of syntax taken in traditional grammar: this also provides an opportunity to introduce some useful grammatical terminology. In the remainder of the chapter, we look at the approach to syntax adopted within the theory of Universal Grammar developed by Chomsky.
Traditional grammar
Within traditional grammar, the syntax of a language is described in terms of a taxonomy (i.e. classificatory list) of the range of different types of syntactic structures found in the language. The central assumption underpinning syntactic analysis in traditional grammar is that phrases and sentences are built up of a series of constituents (i.e. syntactic units).
In this chapter, we look at recent work by Chomsky suggesting that syntactic structure is built up in phases (with phases including CP and transitive vP). At the end of each phase, part of the syntactic structure already formed undergoes transfer to the phonological and semantic components, with the result that the relevant part of the structure is inaccessible to further syntactic operations from that point on. (An important point of detail to note is that since we are outlining Chomsky's ideas on phases here, we shall follow his assumptions about the structure of verb phrases and expletive structures.)
Phases
In §8.5, we outlined Chomsky's claim in recent work that all syntactic operations involve a relation between a probe P and a local goal G which is sufficiently ‘close’ to the probe (or, in the case of multiple agreement, a relation between a probe and more than one local goal). We noted Chomsky's (2001, p. 13) remark that ‘the P, G relation must be local’ in order ‘to minimise search’, because the Language Faculty can only hold a limited amount of structure in its ‘active memory’ (Chomsky 1999, p. 9). Accordingly, syntactic structures are built up one phase at a time. Chomsky suggests (1999, p. 9) that phases are ‘propositional’ in nature, and include CP and transitive vP (more specifically, vP with an external argument, which he denotes as v*P).