By Samuel J. Supalla, Associate Professor in the Department of Special Education, Rehabilitation, and School Psychology, University of Arizona, and Cecile McKee, Associate Professor, University of Arizona
A pressing question related to the well-being of deaf children is how they develop a strong language base (e.g. Liben 1978). First or native language proficiency plays a vital role in many aspects of their development, ranging from social development to educational attainment to their learning of a second language. The target linguistic system should be easy to learn and use. A natural signed language is clearly a good choice for deaf children. While spoken English is a natural language, it is less obvious that a signed form of English is also a natural language. At issue is the development of Manually Coded English (MCE), which can be described as a form of language planning aimed at making English visible for deaf children (Ramsey 1989). MCE represents a living experiment in which deaf children are expected to learn signed English as well as hearing children learn spoken English. If MCE is a natural language, learning it should be effortless, with learning patterns consistent with what we know about natural language acquisition in general.
American Sign Language (ASL) is a good example of a sign system that is a natural language and can become a native language for deaf children, especially those of deaf parents who use ASL at home (Newport and Meier 1985; Meier 1991). However appropriate ASL is for deaf children of deaf parents, not all deaf children are exposed to ASL.
By Terry Janzen, Assistant Professor of Linguistics, University of Manitoba, Winnipeg, Canada, and Barbara Shaffer, Assistant Professor of Linguistics, University of New Mexico
Grammaticization is the diachronic process by which:
lexical morphemes in a language, such as nouns and verbs, develop over time into grammatical morphemes; or
morphemes less grammatical in nature, such as auxiliaries, develop into ones more grammatical, such as tense or aspect markers (Bybee et al. 1994).
Thus any given grammatical item, even viewed synchronically, is understood to have an evolutionary history. The development of grammar may be traced along grammaticization pathways, with vestiges of each stage often remaining in the current grammar (Hopper 1991; Bybee et al. 1994), so that even synchronically, lexical and grammatical items that share similar form can be shown to be related. Grammaticization is thought to be a universal process; this is how grammar develops. Bybee et al. claim that this process is regular and leaves predictable evidence in two broad categories: phonology and semantics. Semantic generalization occurs as the more lexical morpheme loses some of its specificity and, usually along with the particular construction in which it occurs, can be applied more broadly. Certain components of the meaning are lost when this generalization takes place. As for phonological change, grammaticizing elements and the constructions they occur in tend to undergo phonological reduction at a faster rate than lexical elements not involved in grammaticization.
The ultimate source of grammaticized forms in languages is understood to be lexical. Most commonly, the source categories are nouns and verbs. Thus, the origins of numerous grammatical elements, at least for spoken languages, are former lexical items.
Sign languages are produced and perceived in the visual modality, while spoken languages are produced and perceived in the auditory modality. Does this difference in modality have any effect on the structures of these two types of languages? Much of the research on the structure of sign languages has mentioned this issue, but it is far from resolved. To some authors, the differences between sign languages and spoken languages are paramount, because the study of “modality effects” is a contribution which sign language research uniquely can make. To others, the similarities between sign languages and spoken languages are most important, for they can tell us how certain properties of linguistic systems transcend modality and are, therefore, truly universal. Of course, both of these goals are worthy, and this book is testimony to the fruits that such endeavors can yield.
In this chapter I address the question of modality effects by first examining the architecture of the language faculty. By laying out my assumptions about how language works in the general sense, predictions about the locus of modality effects can be made. I then take up an issue that is a strong candidate for a modality effect: the use of space for indicating reference in pronouns and verbs. I review some of the issues that have been discussed with respect to this phenomenon, and offer an analysis that is in keeping with the theoretical framework set up at the beginning.
By Annette Hohenberger, Research Assistant, University of Frankfurt, Germany; Daniela Happ, Deaf Research Assistant, University of Frankfurt, Germany; and Helen Leuninger, Professor of Linguistics, University of Frankfurt, Germany
In the present study, we investigate both slips of the hand and slips of the tongue in order to assess modality-dependent and -independent effects in language production. As a broader framework, we adopt the paradigm of generative grammar, as it has developed over the past 40 years (Chomsky 1965, 1995, and related work by other generativists). Generative Grammar focuses on both universal and language-particular aspects of language. The universal characteristics of language are known as Universal Grammar (henceforth, UG). UG defines the format of possible human languages and delimits the range of possible variation between languages. We assume that languages are represented and processed by one and the same language module (Fodor 1983), no matter what modality they use. UG is neutral with regard to the modality in which a particular language is processed (Crain and Lillo-Martin 1999).
By adopting a psycholinguistic perspective, we ask how a speaker's or signer's knowledge of language is put to use during the production of language. So far, models of language production have mainly been developed on the basis of spoken languages (Fromkin 1973; Garrett 1975, 1980; Butterworth 1980; Levelt 1989; Levelt, Roelofs, and Meyer 1999; Dell 1986; Dell and Reich 1981; MacKay 1987; Stemberger 1985). But even the set of spoken languages investigated so far is restricted (with a clear focus on English). Thus, Levelt, Roelofs, and Meyer (1999: 36) challenge researchers to consider a greater variety of (spoken) languages in order to broaden the empirical basis for valid theoretical inductions.
By Gary Morgan, Lecturer in Developmental Psychology, City University, London; Neil Smith, Professor of Linguistics, University College London; Ianthi Tsimpli, Associate Professor, English Department, Aristotle University of Thessaloniki; and Bencie Woll, Chair in Sign Language and Deaf Studies, City University, London
This chapter reports on the findings of an experiment into the learning of British Sign Language (BSL) by Christopher, a linguistic savant, and a control group of talented second language learners. The results from tests of comprehension and production of morphology and syntax, together with observations of his conversational abilities and judgments of grammaticality, indicate that despite his dyspraxia and visuo-spatial impairments, Christopher approaches the task of learning BSL in a way largely comparable to that in which he has learned spoken languages. However, his learning of BSL is not uniformly successful. Although Christopher approaches BSL as linguistic input, rather than purely visuo-spatial information, he fails to learn completely those parts of BSL for which an intact nonlinguistic visuo-spatial domain is required (e.g. the BSL classifier system). The unevenness of his learning supports the view that only some parts of language are modality-free.
Accordingly, this case illuminates cross-modality issues, in particular the relationship between sign language structures and visuo-spatial skills. By exploring features of Christopher's signing and comparing them with those of normal sign learners, new insights can be gained into linguistic structures on the one hand and the cognitive prerequisites for the processing of signed language on the other.
In earlier work (see Smith and Tsimpli 1995 and references therein; also Tsimpli and Smith 1995, 1998; Smith 1996; Smith and Tsimpli 1996, 1997; Morgan, Smith, Tsimpli, and Woll 2002), we have documented the unique language learning abilities of a polyglot savant, Christopher (date of birth: January 6, 1962).
Liddell's proposal that there are gestures in agreement verbs
Forty years of research on signed languages has revealed the unquestionable fact that signers construct their utterances in a structured way from units that are defined within a language system. They do not pantomime or “draw pictures in the air.” But does this mean that every aspect of a signed articulation should have the same status as a linguistic unit?
A proposal by Liddell (1995; 1996; Liddell and Metzger 1998) has brought the issue of the linguistic status of certain parts of American Sign Language (ASL) utterances to the fore. He proposes that agreement verbs are not verbs simultaneously articulated with agreement morphemes, but verbs simultaneously articulated with pointing gestures. Agreement verbs are verbs that move to locations in signing space associated with particular referents in the discourse. A signer may establish a man on the left side at location x and a woman on the right side at location y. Then, to sign ‘He asks her,’ the signer moves the lexical sign ASK from location x to location y. The locations in these constructions have been analyzed as agreement morphemes (Fischer and Gough 1978; Klima and Bellugi 1979; Padden 1988; Liddell and Johnson 1989; Lillo-Martin and Klima 1990; Aarons et al. 1992) that combine with the lexical verb to form a multimorphemic sign xASKy.
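To make the mechanics of this example concrete, the agreement analysis summarized above can be caricatured as a small data structure in which referents are bound to loci and the verb's path and gloss are read off those loci. The following sketch is purely illustrative: the function names, the treatment of loci as simple labels, and the gloss format are simplifications invented here, not part of Liddell's proposal or of the cited agreement analyses.

```python
# Illustrative toy model (invented): referents are associated with loci in
# signing space, and an agreement verb's path runs from the subject's locus
# to the object's locus, yielding a gloss such as xASKy.

loci = {}

def establish(referent, locus):
    """Associate a discourse referent with a locus (e.g. 'x' on the signer's left)."""
    loci[referent] = locus

def agreement_verb(verb, subject, obj):
    """Read the verb's path and gloss off the loci of its arguments."""
    start, end = loci[subject], loci[obj]
    return {"path": (start, end), "gloss": f"{start}{verb.upper()}{end}"}

establish("man", "x")      # the man is set up at location x (left)
establish("woman", "y")    # the woman is set up at location y (right)

print(agreement_verb("ask", "man", "woman"))
# {'path': ('x', 'y'), 'gloss': 'xASKy'}  -- corresponding to 'He asks her'
```

Liddell's proposal, in contrast, treats the endpoints of such a path not as agreement morphemes of this kind but as pointing gestures.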
By Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin; Kearsy Cormier, Doctorate in Linguistics, University of Texas at Austin; and David Quinto-Pozos, Teacher, Department of Linguistics, University of Pittsburgh
The term “gesture” is used to denote various human actions. This is even true among linguists and psychologists, who for the past two decades or more have highlighted the importance of gestures of various sorts and their significant role in language production and reception. Some writers have defined gestures as the movements of the hands and arms that accompany speech. Others refer to the articulatory movements of speech as vocal gestures and those of signed languages as manual gestures. For those who work in signed languages, the term nonmanual gesture usually refers to facial expressions, head and torso movements, and eye gaze, all of which are vital parts of signed messages. In the study of child language acquisition, some authors have referred to an infant's reaches, points, and waves as prelinguistic gesture.
In Part II we introduce two works that highlight the importance of the study of gesture and one that addresses iconicity (a closely related topic). We also briefly summarize some of the various ways in which gesture has been defined and investigated over the last decade. A few pages of introductory text are not enough to review all the issues that have arisen – especially within the last few years – concerning gesture and iconicity and their role in language, but this introduction is intended to give the reader an idea of the breadth and complexity of these topics.
By Christian Rathmann, Doctoral Student in Linguistics, University of Texas at Austin, and Gaurav Mathur, Postdoctoral Fellow, Haskins Laboratories, New Haven, CT
One major question in linguistics is whether the universals among spoken languages are the same as those among signed languages. Two types of universals have been distinguished: formal universals, which impose abstract conditions on all languages, and substantive universals, which fix the choices that a language makes for a particular aspect of grammar (Chomsky 1965; Greenberg 1966; Comrie 1981). It would be intriguing to see if there are modality differences in both types of universals. Fischer (1974) has suggested that formal universals, such as certain syntactic operations, apply in both modalities, while some substantive universals are modality-specific. Similarly, Newport and Supalla (2000:112) have noted that signed and spoken languages may have some different universals due to the different modalities.
In this chapter we focus on verb agreement as it provides a window into some of the universals within and across the two modalities. We start with a working definition of agreement for spoken languages and illustrate the difficulty in applying such a definition to signed languages. We then embark on two goals: to investigate the linguistic status of verb agreement in signed language and to understand the architecture of grammar with respect to verb agreement. We explore possible modality differences and consider their effects on the nature of the morphological processes involved in verb agreement. Finally, we return to the formal and substantive universals that separate and/or group spoken and signed languages.
Someone idly thumbing through an English dictionary might observe two characteristics of repetition in words. First, segments can vary in the number of times they repeat. In no, Nancy, unintended, and unintentional, /n/ occurs one, two, three, and four times respectively. In the minimal triplet odder, dodder, and doddered, /d/ occurs one, two, and three times.
A second characteristic is that repetition within a word can be rhythmic or irregular:
Rhythmic repetition: All the segments of a word can be temporally sliced to form at least two identical subunits, with patterns like aa, abab, and ababab. Examples: tutu (abab), murmur (abcabc).
Irregular repetition: Any other segment repetition, such as abba, aabb, abca, etc. Examples: tint (abca), murmuring (abcabcde).
If asked to comment on these two characteristics, a phonologist might shrug and quote from a phonology textbook:
[An] efficient system would stipulate a small number of basic atoms and some simple method for combining them to produce structured wholes. For example, two iterations of a concatenation operation on an inventory of 10 elements … will distinguish 10³ items … As a first approximation, it can be said that every language organizes its lexicon in this basic fashion. A certain set of speech sounds is stipulated as raw material. Distinct lexical items are constructed by chaining these elements together like beads on a string.
(Kenstowicz 1994:13)
Repetition, the phonologist might say, is the meaningless result of the fact that words are temporal sequences constructed from segments.
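The two characteristics can also be stated operationally. The following minimal sketch (not from the text; letters stand in for segments) checks whether a string of segments can be sliced into at least two identical subunits and, failing that, whether any segment repeats at all.

```python
# Minimal illustration of the two repetition patterns defined above.
# A segment string is "rhythmic" if it consists of two or more copies of a
# single subunit (aa, abab, ababab, ...); any other repetition is "irregular".

def is_rhythmic(segments):
    """True if the string is 2+ concatenated copies of one subunit."""
    n = len(segments)
    for size in range(1, n // 2 + 1):
        if n % size == 0 and segments[:size] * (n // size) == segments:
            return True
    return False

def classify(segments):
    if is_rhythmic(segments):
        return "rhythmic"
    if len(set(segments)) < len(segments):
        return "irregular"       # some segment repeats, but not rhythmically
    return "no repetition"

# Letters as stand-ins for segments (the examples from the text):
print(classify("abab"))      # tutu, murmur  -> rhythmic
print(classify("abca"))      # tint          -> irregular
print(classify("abcabcde"))  # murmuring     -> irregular
```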
By Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin; Kearsy Cormier, Doctorate in Linguistics, University of Texas at Austin; and David Quinto-Pozos, Teacher, Department of Linguistics, University of Pittsburgh
This is a book primarily about signed languages, but it is not a book targeted just at the community of linguists and psycholinguists who specialize in research on signed languages. It is instead a book in which data from signed languages are recruited in pursuit of the goal of answering a fundamental question about the nature of human language: what are the effects and non-effects of modality upon linguistic structure? By modality, I and the other authors represented in this book mean the mode – the means – by which language is produced and perceived. As anyone familiar with recent linguistic research – or even with popular culture – must know, there are at least two language modalities, the auditory–vocal modality of spoken languages and the visual–gestural modality of signed languages. Here I seek to provide a historical perspective on the issue of language and modality, as well as to provide background for those who are not especially familiar with the sign literature. I also suggest some sources of modality effects and their potential consequences for the structure of language.
What's the same?
Systematic research on the signed languages of the Deaf has a short history. In 1933, even as eminent a linguist as Leonard Bloomfield (1933:39) could write with assurance that:
Some communities have a gesture language which upon occasion they use instead of speech. Such gesture languages have been observed among the lower-class Neapolitans, among Trappist monks (who have made a vow of silence), among the Indians of our western plains (where tribes of different language met in commerce and war), and among groups of deaf-mutes. […]
Most spoken languages encode spatial relations with prepositions or locative affixes. Often there is a single grammatical element that denotes the spatial relation between a figure and ground object; for example, the English spatial preposition on indicates support and contact, as in The cup is on the table. The prepositional phrase on the table defines a spatial region in terms of a ground object (the table), and the figure (the cup) is located in that region (Talmy 2000). Spatial relations can also be expressed by compound phrases such as to the left or in back of. Both simple and compound prepositions constitute a closed-class set of grammatical forms for English. In contrast, signed languages convey spatial information using so-called classifier constructions in which spatial relations are expressed by where the hands are placed in the signing space or in relationship to the body (e.g. Supalla 1982; Engberg-Pedersen 1993). For example, to indicate ‘The cup is on the table,’ an American Sign Language (ASL) signer would place a C classifier handshape (referring to the cup) on top of a B classifier handshape (referring to the table). There is no grammatical element specifying the figure–ground relation; rather, there is a schematic and isomorphic mapping between the location of the hands in signing space and the location of the objects described (Emmorey and Herzig in press). This chapter explores some of the ramifications of this spatialized form for how signers talk about spatial environments in conversations.
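As a rough illustration of this contrast (invented here, not drawn from the chapter), the English encoding can be thought of as naming a relation from a closed class, whereas the classifier construction simply places the two handshapes in a schematic signing space and leaves the figure–ground relation to be read off the geometry. The coordinates and threshold below are arbitrary.

```python
# Invented illustration of the two encoding strategies described above.

# English-style encoding: the figure-ground relation is named by a closed-class item.
english = {"figure": "cup", "relation": "on", "ground": "table"}

# ASL-style classifier construction: each classifier handshape is simply placed
# at a location in signing space; nothing names the relation itself.
signing_space = {
    "table": {"classifier": "B", "position": (0.0, 0.0, 0.50)},  # hypothetical coordinates
    "cup":   {"classifier": "C", "position": (0.0, 0.0, 0.55)},  # placed on top of the B hand
}

def read_off_relation(figure, ground, space):
    """Toy 'reading off' of support-and-contact from the geometry alone."""
    fx, fy, fz = space[figure]["position"]
    gx, gy, gz = space[ground]["position"]
    if (fx, fy) == (gx, gy) and 0 <= fz - gz <= 0.1:
        return "support and contact (roughly English 'on')"
    return "some other spatial configuration"

print(read_off_relation("cup", "table", signing_space))
```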
By Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin; Kearsy Cormier, Doctorate in Linguistics, University of Texas at Austin; and David Quinto-Pozos, Teacher, Department of Linguistics, University of Pittsburgh
Within the past 30 years, syntactic phenomena within signed languages have been studied fairly extensively. American Sign Language (ASL) in particular has been analyzed within the framework of relational grammar (Padden 1983), lexicalist frameworks (Cormier 1998, Cormier et al. 1999), discourse representation theory (Lillo-Martin and Klima 1990), and perhaps most widely in generative and minimalist frameworks (Lillo-Martin 1986; Lillo-Martin 1991; Neidle et al. 2000). Many of these analyses of ASL satisfy various syntactic principles and constraints that are generally taken to be universal for spoken languages (Lillo-Martin 1997). Such principles include Ross's (1967) Complex NP Constraint (Fischer 1974), Ross's Coordinate Structure Constraint (Padden 1983), Wh-Island Constraint, Subjacency, and the Empty Category Principle (Lillo-Martin 1991; Romano 1991).
The level of syntax and phrase structure is where sequentiality is perhaps most obvious in signed languages, and this may be one reason why we can fairly straightforwardly apply many of these syntactic principles to signed languages. Indeed, the overall consensus seems to be that the visual–gestural modality of signed languages results in very few differences between the syntactic structure of signed languages and that of spoken languages.
The three chapters in this section support this general assumption, revealing minimal modality effects at the syntactic level. Those differences that do emerge seem to be based on the use of the signing space (as noted in Lillo-Martin's chapter, Chapter 10) or on nonmanual signals (as noted in the chapters by Pfau and by Tang and Sze, Chapters 11 and 12).
In this chapter it is taken as given that phonology is the level of grammatical analysis where primitive structural units without meaning are combined to create an infinite number of meaningful utterances. It is the level of grammar that has a direct link with the articulatory and perceptual phonetic systems, either visual–gestural or auditory–vocal. There has been work on sign language phonology for about 40 years now, and at the beginning of just about every piece on the topic there is some statement like the following:
The goal is, then, to propose a model of ASL [American Sign Language] grammar at a level that is clearly constrained by both the physiology and by the grammatical rules. To the extent that this enterprise is successful, it will enable us to closely compare the structures of spoken and signed languages and begin to address the broader questions of language universals …
(Sandler 1989: vi)
The goal of this chapter is to articulate some of the differences between the phonology of signed and spoken languages that have been brought to light in the last 40 years and to illuminate the role that the physiological bases have in defining abstract units, such as the segment, syllable, and word. There are some who hold the view that sign languages are just like spoken languages except for the substance of the features (Perlmutter 1992).