This chapter covers languages spoken in Europe, focusing primarily on the Indo-European languages. The internal classification of the Indo-European family is provided, and the branches of the family that are represented in Europe are covered in detail. The discovery of the Indo-European family in the late 1700s and the continuing controversy surrounding the “homeland” of the Indo-European languages are discussed. A separate section is dedicated to non-Indo-European languages of Europe, with special focus on Basque, a language isolate. The final section delves into endangered languages of Europe.
We discuss tough, extraposition, and cleft constructions. These constructions, like wh-interrogatives and wh-relative clauses, involve a kind of long-distance dependency but behave differently in many respects. The chapter explores these differences and offers a nonderivational construction-based analysis. We distinguish three cleft constructions of English based on their information-structure properties.
The properties of raising and control verbs that we discuss in this chapter can be summarized as follows. Unlike a control predicate, a raising predicate does not assign a semantic role to its subject (or object). The absence of a semantic role can be used to account for the possibility of expletive it or there, or a part of an idiom, as subject or object of a raising predicate, and the impossibility of such expressions as subjects of control predicates. With a control predicate, the VP complement’s unexpressed subject is coindexed with one of the predicate’s syntactic dependents. With a raising predicate, the entire syntactic-semantic value of the infinitival VP’s subject is shared with that of one of the predicate’s dependents. This ensures that whatever category the VP complement requires as its subject also appears as the raising predicate’s subject (or object). These properties of raising and control verbs follow naturally from their lexical specifications. In particular, the present analysis offers a systematic, construction-based account of the mismatch between the number of syntactic complements that a verb has and the number of semantic arguments that it has.
This chapter explores two kinds of intra-clausal relations: grammatical functions and semantic roles. Each such relation allows us to describe the dependencies that exist between a predicator and the units that it combines with to make phrases of various kinds. The grammatical functions discussed include subject, (direct and indirect) object, predicative complement, oblique complement, and modifier. The chapter explores diagnostics used to identify each of these grammatical functions in a sentence. For instance, tag questions, agreement, and subject-auxiliary inversion can tell us whether a given constituent is a subject. We note here that a key to understanding the syntax of English is the recognition that the mapping between form (categorial type) and function is not one-to-one; mismatches, as when a clause or even a PP serves as a subject, are possible. The chapter describes cases in which a given grammatical function can have various categorial realizations. We see that semantic roles (or participant roles) combine as they do because they reflect the kind of event, state, or relation that the sentence depicts. One cannot, for example, have an event of transfer without a donor, gift, and recipient. The chapter gives examples of semantic roles like agent, theme, patient, location, source, and goal. We observe that, although there are instances in which it is difficult to diagnose an argument’s semantic role, semantic roles can be of use in classifying verbs into distinct subclasses.
The book focuses primarily on the descriptive facts of English syntax, presented through a ‘lexical lens’ that encourages students to recognize the important contribution that words and word classes make to syntactic structure. It then introduces the basic theoretical concepts of declarative grammar (in the framework of Sign-Based Construction Grammar), illustrated with sample sentences.
This chapter discusses key grammatical properties of three major classes of nouns in English: common nouns, pronouns, and proper nouns. We demonstrate that the lexical properties of these nouns determine their external syntactic structures. We then examine three types of agreement relationships in English: noun-determiner, pronoun-antecedent, and subject-verb agreement. We observe that the agreement relationship between a noun and its determiner concerns the number (NUM) features of the two, while that between a pronoun and its antecedent involves all three morphosyntactic agreement (AGR) features: person (PER), number (NUM), and gender (GEND). For its part, the subject-verb agreement relationship depends not only on morphosyntactic agreement (AGR) features but also on the semantic index (IND) feature. This hybrid agreement framework offers us a streamlined analysis of mismatches that involve the respective NUM values of subject and verb. The analysis developed here is extended to partitive NPs in English.
In this chapter, we show that the well-formedness of each phrase depends on its internal and external syntax. The pivotal daughter element is the head, which determines what expressions may accompany it as its syntactic sisters. We observe that a grammar with simple phrase-structure rules raises two important issues: endocentricity (headedness) of a phrase and redundancies in the lexicon. To resolve these two issues, generative grammar has introduced X’ rules, including three key combinatorial rules: head-complement(s), head-specifier, and head-modifier. These rules ensure that each phrase is a projection of a head expression, while recognizing the existence of intermediate phrases (X’ phrases). X’ syntax captures the similarities between NPs and Ss by treating these phrase types in a uniform way. The grammar we adopt in this book (SBCG) follows this direction by using a fine-grained feature system to describe the syntactic and semantic properties of both simple and complex signs. This chapter introduces the basic feature system that we will use in describing the English language. We also examine the patterns of semantic-role expression called argument-structure patterns.
Adopting the same mechanisms that we used for the analysis of wh-interrogatives, this chapter offers a declarative, feature-based analysis of a range of relative clauses in English, including subject wh-relatives, nonsubject wh-relatives, that-relatives, infinitival relatives, and bare relatives.
This chapter discusses head features and the Head Feature Principle. It shows how elements on the argument-structure list are mapped onto the syntactic valence features SPR (specifier and subject) and COMPS, in accordance with the Argument Realization Constraint. Equipped with these principles, construction rules, and feature structures, we show how each of the X’ construction rules interacts with lexical entries, as well as with general principles like the HFP and the Valence Principle, to form licit lexical and phrasal constructs in English. Each combination must conform to all the principles as well as to a combinatorial construction rule. We extend this system to license non-phrasal, lexical constructions by means of the HEAD-LEX CONSTRUCTION. In the final section, we ask why the members of the ARG-ST list need detailed feature specifications. We observe that there are a variety of syntactic environments in which the complement of a lexical expression must have a specific VFORM or PFORM value. We also note environments in which the subject must have a specified NFORM value.
This chapter focuses on the syntax of wh-question patterns that have been referred to as long-distance or unbounded dependency constructions. Starting with core dependency properties of wh-constructions, this chapter reviews the main problems that movement approaches encounter when attempting to represent the link between the filler wh-phrase and its corresponding gap. It develops a declarative, feature-based analysis to capture the linkage between the filler and the gap, while resolving problems originating from movement analyses. The key mechanisms of analysis are the ARC (Argument Realization Constraint), which allows any argument to be realized as a GAP element, the HEAD-FILLER CONSTRUCTION, which licenses the combination of a filler and an incomplete sentence with a nonempty GAP value, and the NIP (Nonlocal Inheritance Principle), which regulates nonlocal features like GAP in relevant mother phrases. The interplay of these construction-based mechanisms allows us to license a wide variety of wh-constructions: main-clause nonsubject wh-questions, subject wh-questions, wh-indirect questions, non-wh indirect questions, infinitival indirect questions, and even adjunct wh-questions.
This chapter discusses a fundamental assumption of Sign-Based Construction Grammar, the framework adopted in this book: language is an infinite set of signs, including lexical and phrasal signs. Lexical entries license lexemes, and constructions license constructs – phrases that consist of a mother plus one or more (phrasal or lexical) daughter nodes. Lexemes belong to syntactic categories like noun, verb, and preposition. We diagnose these categories based on the combinatory requirements of the words in question. But we could not properly analyze English clauses and sentences if we viewed them as simply strings of syntactic categories. Instead, sentences in English have a kind of hierarchical structure called constituent structure, as indicated by phenomena like subject-verb agreement. A constituent is a series of words that behaves like an indivisible unit for certain syntactic purposes, e.g., serving as the ‘clefted’ constituent in the it-cleft construction. Among the constituents we have discussed are NP (typically a determiner followed by a nominal expression) and VP (a verb optionally followed by an NP, AP, or PP).
This chapter aims to address four key issues in the study of the English auxiliary system. The issues involve the properties that distinguish auxiliary verbs from main verbs, ordering restrictions among auxiliary verbs, combinatorial restrictions on the syntactic complements of auxiliary verbs, and auxiliary-sensitive phenomena like negation. The chapter first focuses on the morphosyntactic properties of English auxiliary verbs. We show that their distributional, ordering, and combinatorial properties all follow from their lexical groupings: modals, have/be, do, and to. The second part of this chapter concerns the so-called NICE phenomena, each of which is sensitive to the presence of an auxiliary verb and has been extensively analyzed in generative grammar. The chapter shows us that a construction-based analysis can offer a straightforward analysis of these phenomena, without reliance on movement operations or functional projections.
This chapter explores the nature of syntactic competence: what it means to ‘know’ a language. It asks how generative grammar has been used to model competence. After discussing the difference between prescriptive and descriptive rules, we describe procedures for discovering descriptively adequate rules. We distinguish inductive from deductive grammars. Deductive grammars are associated with classical transformational models of grammar, according to which human languages have all and only those (structural) properties that are expressible in the transformational formalism; inductive grammars are associated with constraint-based views of grammar, according to which an expression is syntactically well-formed if its form is paired with its meaning as an instance of some grammatical construction. It is this construction-based view of grammar that we adopt in this book. Construction Grammar offers an enriched model of grammatical competence, which attempts to capture all of the linguistic routines that an adult native speaker knows. In Construction Grammar, grammar represents an array of form-meaning-function groupings of varying degrees of productivity and internal complexity.