The task in this book was to present a theory of syntax from the communication-and-cognition perspective. As stated in section 1.4, the general skeleton of the theory is drawn from RRG, and many parts of the theory are elaborations on basic RRG concepts, e.g. the layered structure of the clause, semantic macroroles, potential focus domain, pragmatic pivots, juncture and nexus. But the content of many of the analyses integrates ideas from a variety of theories and individuals, e.g. Rijkhoff's theory of noun phrase structure from FG, the notion of constructional template adapted from ConG, Lambrecht's theory of information structure, Pustejovsky's theory of nominal qualia, the pragmatic analysis of pronominalization of Kuno, Bolinger and Bickerton, and Jackendoff's ideas about reflexivization, to name a few.
Of the issues raised in chapter 1, one of the most important, and for some linguists the most important, is language acquisition. In section 1.3.2 we briefly mentioned work by a number of linguists, psycholinguists and psychologists on this topic from the communication-and-cognition perspective, and in this final section we will look at the implications of the syntactic analyses we have presented for theoretical questions in acquisition and child language.
The first step is to clarify the foundational issue, namely, assumptions about the nature of the human cognitive endowment regarding language. Chomsky has always been very clear that for him the essential features of the grammars of human languages are part of a species-specific, genetically determined biological organ of language; indeed, he now claims (Chomsky 1995) that the basic syntax of all languages is the same and that all cross-linguistic variation is due to lexical differences.
In this chapter we will investigate how semantic representations and syntactic representations are linked in complex sentences. We will start from the syntactic representations developed in chapter 8 and from the linking algorithms in chapter 7. An important question to be investigated is the extent to which the linking algorithms proposed in chapter 7 for simple sentences must be modified to deal with complex sentences. We will proceed as follows. In section 9.1 we look at linking in the different juncture–nexus types discussed in chapter 8. This includes discussion of a number of issues that have been important in theoretical debates over the past three decades: control constructions (a.k.a. ‘equi-NP-deletion’), matrix-coding constructions (a.k.a. ‘raising to subject’, ‘raising to object’, ‘exceptional case-marking’) and causative constructions. We investigate case marking in complex constructions in section 9.2. The next section focuses on linking in complex NP constructions, primarily relative clause constructions. In section 9.4 we investigate reflexivization in complex constructions, and again the question arises as to the extent to which the principles proposed in section 7.5.2 will have to be modified to deal with these new phenomena. In section 9.5 we propose an account of the restrictions on so-called ‘long-distance dependencies’ involved in WH-question formation, topicalization and relativization. These restrictions, which fall under the principle known as ‘subjacency’ in the generative literature, are significant for linguistic theory, for theories of language acquisition and for related theories of cognitive organization (see section 1.3.1).
Every language has operations that adjust the relationship between semantic roles and grammatical relations in clauses. Such devices are sometimes referred to as alternative voices. For example, the passive operation in English when applied to most transitive verbs places the PATIENT in the subject role and the AGENT in an oblique role. The more normal arrangement for transitive verbs is for the AGENT to bear the subject relation and the PATIENT the object relation:
(1) a. Active
       Orna baked these cookies.           (AGENT = subject, PATIENT = object)
    b. Passive
       These cookies were baked by Orna.   (PATIENT = subject, AGENT = oblique)
In this chapter we will discuss a range of structures that adjust the relationship between grammatical relations and semantic roles in terms of valence. Not all of these would be considered in traditional grammar under the heading of “voice,” but because of their functional similarity and because many languages treat them in structurally comparable ways, it is often convenient to group some or all of these operations together in a single chapter of a grammar or grammar sketch.
Valence can be thought of as a semantic notion, a syntactic notion, or a combination of the two. Semantic valence refers to the number of participants that must be “on stage” (see section 0.2.3) in the scene expressed by the verb. For example, the verb eat in English has a semantic valence of two, since for any given event of eating there must be at least an eater and an eaten thing. In terms of predicate calculus, the concept EAT is a relation between two variables, x and y, where x is a thing that eats and y is a thing that undergoes eating. This semantic relationship would be represented in predicate calculus notation as EAT(x, y) (see below).
Grammatical valence (or syntactic valence) refers to the number of arguments present in any given clause. A syntactic argument of a verb is a nominal element (including possibly zero, if this is a referential device in the language) that bears a grammatical relation to the verb (see chapter 7). So, for example, a given instance of the verb eat in English may have a syntactic valence of one or two.
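The distinction between semantic valence (fixed by the predicate) and grammatical valence (how many arguments a particular clause actually expresses) can be sketched in code. This is an illustrative model only, with invented class names, not part of any standard linguistic formalism:

```python
# A minimal sketch of the semantic vs. grammatical valence distinction.
# Class and field names here are illustrative assumptions, not standard notation.
from dataclasses import dataclass

@dataclass
class Predicate:
    """A semantic predicate with a fixed number of participant slots."""
    name: str
    semantic_valence: int  # participants that must be "on stage"

@dataclass
class Clause:
    """One clause: a predicate plus the arguments actually expressed."""
    predicate: Predicate
    arguments: list  # nominal elements bearing a grammatical relation

    @property
    def syntactic_valence(self) -> int:
        # Grammatical valence is counted per clause, not per predicate.
        return len(self.arguments)

EAT = Predicate("EAT", semantic_valence=2)  # EAT(x, y)

transitive = Clause(EAT, ["she", "the apple"])  # "She ate the apple."
intransitive = Clause(EAT, ["she"])             # "She ate." (eaten thing unexpressed)

print(EAT.semantic_valence)            # 2 in both clauses
print(transitive.syntactic_valence)    # 2
print(intransitive.syntactic_valence)  # 1
```

The point the sketch makes concrete is that the semantic valence of EAT stays constant while the syntactic valence varies from clause to clause.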
In previous chapters we have discussed several means of altering the form of verbs and nouns to shape the semantic force of the concepts they express. In every language there exist as well different ways of combining basic lexical items, such as verbs, to form more complex expressions. In this chapter we will discuss several construction types that involve combinations of verbs.
Most of the multi-verb constructions described in this chapter involve one independent clause and one or more dependent clauses. An independent clause is one that is fully inflected and capable of being integrated into discourse on its own (see section 2.2 on inflectional morphology). A dependent clause is one that depends on some other clause for at least part of its inflectional information. For example, in the following example, clause (b) is dependent on clause (a) because the subject and tense of clause (b) are only understood via the subject and tense of clause (a):
(1) (a) He came in, (b) locking the door behind him.
Clause (b) by itself does not qualify as a fully inflected clause, able to be integrated into discourse on its own. Sometimes fully inflected verbs are called finite verbs, whereas dependent verbs are termed non-finite. However, this distinction must be understood as a continuum, as some verbs are dependent in one respect, but independent in another. Thus we may talk about one verb being more finite or less finite than another.
The present chapter will be organized according to six general types of multiple verb constructions: (1) serial verbs, (2) complement clauses, (3) adverbial clauses, (4) clause chains, (5) relative clauses, and (6) coordination. These six construction types are arranged in such a way that the earlier ones represent the highest degree of grammatical integration between two verbs, whereas the later ones represent the lowest degree of grammatical integration. Another way of describing this arrangement is in terms of a continuum in which one end is a single clause, and the other end is two grammatically distinct clauses. A given language may possess any number of construction types that fall somewhere in between these extremes.
Every language has clauses that express proper inclusion, equation, attribution, location, existence, and possession (defined below). Sometimes this “family” of constructions is collectively referred to as predicate nominals. However, in this book we will use this term in a more specific sense, reserving it for those clauses in which the semantic content of the predication is embodied in a noun. This definition distinguishes predicate nominals from similar constructions such as predicate adjectives, predicate locatives, and others. The following discussion will define this family of clause types using preliminary examples from English. Section 6.1 will describe each type in more detail, providing a typology of the various ways languages are known to form these clause types.
The following is an example of a predicate nominal clause in English:
(1) Frieda is a teacher.
In this construction the predicate is “is a teacher,” and the main semantic content of this predicate is embodied in the noun “teacher.” The verb “is” (a form of be) simply specifies the relationship between “Frieda” and “teacher” and carries the tense/aspect and person/number information required of independent predications in English. Sometimes the noun phrase “a teacher” is called “the predicate nominal” or even “the nominal predicate” of the clause. In this discussion, the term predicate nominal will normally refer to the entire clause.
Predicate adjectives are clauses in which the main semantic content is expressed by an adjective. If the language lacks a grammatical category of adjective, there will be no grammatically distinct predicate adjective construction (see section 3.3.1 on how to identify adjectives as a grammatical category). Semantically, these clause types can be described as attributive clauses:
(2) John is tall.
My car is green.
Existential constructions predicate the existence of some entity, usually in some specified location:
Both text and elicited data are essential to good descriptive linguistics. They each have advantages and disadvantages. The linguistic researcher needs to be aware of these in order to make the best use of all the data available. Even as chopsticks are no good for eating soup and a spoon is awkward for eating spaghetti, so elicited and text data each have their own areas of usefulness. The linguistic researcher will be handicapped in conceptualizing a linguistic system if he/she attempts to use one type of data to accomplish a task best performed by the other type.
In the following paragraphs, I will first define and present some characteristics of text and elicited data. Then I will list the areas of linguistic analysis that each type of data is best suited to. Finally, I will suggest some ways in which text and elicited data might be managed in the course of a linguistic field program. This discussion is mostly directed to fieldworkers who are not working in their native language. However, many of the principles mentioned should also be helpful to mother-tongue linguistic researchers.
A1.1 Definitions
Here I will use the word “text” to mean any sample of language that accomplishes a non-hypothetical communicative task. By contrast, “elicitation” (or “elicited data”) refers to samples of language that accomplish hypothetical communicative tasks.
The social task of elicited language samples is to fulfill a metalinguistic request on the part of a linguist, e.g., “How do you say ‘dog’?” The response would not actually refer to any concept, either referential or non-referential. No particular dog or characteristic of dogs in general would be communicated. The task of the response would be to accommodate the inquirer by providing a reasonable analog to some hypothetical utterance in another language. So elicited utterances, like all intentional human behavior, do fulfill tasks. It is just that the communicative tasks they fulfill are “hypothetical,” in the sense just described.
To this point we have been viewing language from a fairly broad structural perspective. In chapter 2 I suggested a framework for describing the general morphological characteristics of a language without discussing in detail the meanings of the various morphemes. Chapter 3 presented ways of distinguishing the major grammatical categories of the language, including cataloging the morphosyntactic operations that are associated with each category. However, the precise communicative functions of each operation were not discussed. In chapter 4 we considered constituent order typology, again without treating the functions that alternate constituent orders might have.
Many of the categories, structures, and operations mentioned briefly from a “form first” perspective in the previous three chapters will receive more detailed treatment in the following seven. The present chapter describes tasks, or functions, that tend to be associated with noun phrases, and presents further details concerning how morphosyntactic operations are expressed in noun phrases.
Compounding
A compound is a word that is formed from two or more different words. For example, the word windshield is composed of the words wind and shield. Of course, not every sequence of words is a compound, so there must be an explicit way of distinguishing compounds from simple sequences of words. The criteria for calling something a compound fall into two groups: (1) formal criteria, and (2) semantic criteria. Compounds may exhibit any of the following formal properties:
1 a stress pattern characteristic of a single word, as opposed to the pattern for two words, e.g., bláckbird (the species) has a different stress pattern than black bird (any bird that happens to be black); cf. also líghthouse keeper vs. light hóusekeeper;
2 unusual word order, e.g., housekeeper consists of a noun plus a verb where the noun represents the object rather than the subject of the verb, whereas objects normally come after the verb in English;
3 morphophonemic processes characteristic of single words, e.g., the word roommate can be pronounced with a single m, whereas normally if two m's come together accidentally in a sentence both are pronounced, e.g., some mice will be understood as some ice if both m's are not pronounced.
Pragmatics is the practice of utterance interpretation (Levinson 1983). Utterances are actual instances of language in use; therefore they always occur in a context, and their interpretations always affect and are affected by the context. What we will call pragmatic statuses have to do with choices speakers make about how to efficiently adapt their utterances to the context, including the addressee's presumed “mental state.” Like semantic roles, pragmatic statuses are usually, though not always, thought of as characteristics of nominal elements. However, semantic roles are features of the content of the discourse (see section 3.2.0), while pragmatic statuses relate the content to the context. Labels that have been used to describe various pragmatic statuses include: given, new, presupposed, focus, topic, identifiable (or definite), and referential. These terms will be described in the following subsections. But first we will sketch the conceptual background to these pragmatic notions.
People are constantly surrounded by sensory impressions, only a very small portion of which can be attended to at any given moment. Therefore, we have to be selective about which impressions to attend to, and which to ignore. When communicating with other people, we as speakers constantly (1) assess our audience's present mental state, e.g., what they already know, what they are currently attending to, what they are interested in, etc., and (2) construct our message so as to help the audience revise their mental state in the direction we would like it to go. For example, we may highlight items that we want someone to pay attention to, and which we sense he/she is not already paying attention to. Also, we may spend little communicative energy on information which we sense the audience is already thinking about or attending to. The study of how these kinds of highlighting and downplaying tasks affect the structure of linguistic communication is commonly referred to as pragmatics.
It should be pointed out that grammatical relations are one major means of expressing pragmatic information about nominal elements in discourse (see chapter 7). For example, in languages that have a well-grammaticalized subject category, subjects tend to be identifiable, given and already available in memory. Direct objects are either given or new in about equal proportions.
In traditional grammar, grammatical categories are called “parts of speech.” Every language has at least two major grammatical categories – noun and verb. Two other major categories, adjective and adverb, may or may not be instantiated in any given language, though they usually are to some extent. Most languages also have minor grammatical categories such as conjunctions, particles, and adpositions. As with most categorization schemes in descriptive linguistics, grammatical categories tend to be interestingly untidy at their boundaries. Nevertheless, core notions, or prototypes, can usually be identified. Another interesting property of grammatical categorization is that the category membership of any given form varies according to how that form is used in discourse (see Hopper and Thompson 1984 and the discussion in sections 5.2 and 9.1 of this book). Such variation in category membership may or may not be directly reflected in the surface morphosyntax. Therefore, sometimes subtle morphosyntactic tests are needed to determine formal category membership, and other times the category membership of a given form can only be inferred from the discourse context.
Grammatical categories are distinct from formal relational categories such as subject, object, and predicate, or functional categories such as AGENT, topic, or definite NP. They are the building blocks of linguistic structure. They are sometimes called “lexical categories” since many forms can be specified for their grammatical category in the lexicon. However, we will not use the term lexical category here because (1) the term grammatical category is more widely understood, and (2) the category of a word depends as much on how the word is used in discourse as on its conventionalized (lexical) meaning.
It is important to present empirical evidence for each grammatical category posited in a grammatical description. Sections 3.1 and 3.2 list and describe the formal characteristics that tend to distinguish nouns and verbs. For the other categories, however, there are too many possible language-specific properties to offer a compendium of all possibilities here.
Discourse is intentional communication among people. Much of human communication involves language, therefore the study of discourse typically involves the study of language. However, discourse and language are two potentially independent fields of investigation. Because they are independent, each can provide evidence for claims made in the other – if they were identical, or notational variants of the same phenomenon, then generalizations made in one domain based on evidence from the other would be meaningless.
For example, AGENT is a concept that is useful in human communication (discourse); AGENTS exist quite apart from language (see section 3.2.0). Subject (as defined in this book), on the other hand, is a linguistic concept. It does not exist apart from its role as a category in linguistic structures. If AGENT and subject were simply two names for the same concept, generalizations such as “in this sentence the AGENT is the subject,” or “AGENT is the primary candidate for subjecthood” would be tautologous. One could not meaningfully explain anything about AGENT in terms of subject or vice versa.
The term discourse analysis is used in different ways by linguists, anthropologists, sociologists, and philosophers (see Schiffrin 1994 for a survey of approaches to discourse analysis). In this section, I will make an important distinction between linguistic analysis of discourse and discourse interpretation. Much of what has been called discourse analysis in the previous literature would fall under the heading of discourse interpretation in this characterization. For example, if I examine a text and divide it up into “paragraphs” based on my understanding of the propositional information in the text, e.g., when the speaker finishes talking about one thing and begins talking about another, then I am interpreting the text. However, if I look at the same text and divide it up according to the use of certain particles, referential devices, pauses, and intonational patterns, I am engaged in linguistic analysis of the text.
Interpretation certainly has a role in linguistic analysis, but interpretation and analysis are not the same thing. For example, I may interpret the paragraphing in a text based on the propositional content alone.
Grammatical relations (GRs) are often thought of as relations between arguments and predicates in a level of linguistic structure that is independent (or “autonomous”) of semantic and pragmatic influences. For descriptive linguists it is important to recognize that GRs have universal functions in communication, while at the same time defining them in terms of language-specific formal properties. The formal properties that most directly identify GRs are the following:
1 case marking;
2 participant reference marking on verbs;
3 constituent order.
Common terms used to refer to grammatical relations are subject, direct object, indirect object, ergative, and absolutive. The term oblique refers to nominals that lack a GR to some predicate. Explicit definitions and examples of these terms and the ways they are expressed will be given beginning in section 7.1 below. The following discussion will attempt to provide some background and justification for the notion of grammatical relations. This discussion is important to the reader who has serious questions about the “how” and “why” of grammatical relations, but may be skipped by those who want to just get down to the business of describing the system of GRs in a language.
Grammatical expression of semantic roles and pragmatic statuses (see chapters 3 and 10) is understandable in terms of the communicational function of language. However, it is much more difficult to explain GRs in this way. For example, it is intuitively obvious why a language should clearly and easily express the difference between the semantic roles of AGENT and PATIENT – in many communication situations it is highly pertinent to distinguish entities that act from those that are acted upon. If a language did not make this distinction it would be difficult to communicate propositions like “John killed the lion” because there would be no way for the speaker to make it clear who killed whom.
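The three formal properties listed above (case marking, participant reference marking on verbs, and constituent order) can be made concrete with a toy sketch. The mini-language data and the nominative-based heuristic below are invented for illustration; the heuristic is valid only under the assumption of a nominative-accusative case system:

```python
# Illustrative sketch: using the coding properties named in the text as
# evidence for a grammatical relation. All data here is an invented example.
clause = {
    "order": ["dog-NOM", "cat-ACC", "chase-3SG"],    # constituent order (SOV)
    "case": {"dog": "NOM", "cat": "ACC"},            # case marking
    "agreement": {"person": 3, "number": "SG"},      # participant reference on the verb
}

def find_subject(clause: dict) -> str:
    """Toy heuristic: in a nominative-accusative language (assumed here),
    the nominative-marked nominal is the subject."""
    for noun, case in clause["case"].items():
        if case == "NOM":
            return noun
    raise ValueError("no nominative argument found")

print(find_subject(clause))  # dog
```

In a real description one would triangulate across all three properties, since in many languages they do not all converge on the same nominal.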
In this chapter we will discuss a collection of operations likely to be expressed in verbs or verb phrases, but not covered in other chapters. The first two, nominalization and compounding, are typically derivational (see section 2.0). The other four – (1) tense/aspect/mode (TAM), (2) location/direction, (3) participant reference, and (4) evidentiality – are typically inflectional. Many of these operations are likely to be indistinct from each other in any given language. However, because there is a long tradition of describing them separately, it will be convenient to treat them that way in this chapter. It should be kept in mind, however, that in most cases there is significant semantic and morphosyntactic overlap within and among these families of morphosyntactic operations.
Nominalization
Every language has ways of adjusting the grammatical category of a root. For example, a noun can become a verb by a process of verbalization (see section 5.2). Of interest to this section are operations that allow a verb to function as a noun. Such operations are called nominalizations, and can be described with a simple formula:
V → [V]N

or simply

V → N
A noun may be related to a verb in any number of different ways. For example, one noun may refer to the agent of the action described by the verb, while another refers to the result of the action described by the verb. Typically, a language will employ various nominalization operations that differ functionally according to the resulting noun's semantic relationship to the original verb. In the following sections the major types of nominalizations will be described and exemplified.
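The idea that distinct nominalization operations yield nouns with different semantic relations to the verb can be sketched with two toy derivational rules. The affixes below are simplified English-like assumptions (regular -er for agent nouns, regular -ment for result nouns), not a claim about any particular language:

```python
# Toy sketch of nominalization as a derivational rule V -> N.
# The affixes are hypothetical regularizations for illustration only.

def agent_nominalization(verb: str) -> str:
    """Derive an agent noun: 'one who Vs' (e.g. teach -> teacher)."""
    return verb + "er"

def result_nominalization(verb: str) -> str:
    """Derive a result noun: 'that which results from V-ing'
    (assumes a regular -ment affix, e.g. establish -> establishment)."""
    return verb + "ment"

print(agent_nominalization("teach"))       # teacher
print(result_nominalization("establish"))  # establishment
```

Real languages, of course, show allomorphy and lexical gaps that such uniform rules gloss over; the point is only that one verb can feed several functionally distinct nominalizations.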