Do we need another introduction to Indo-European linguistics? Since 1995 four have been published in English (Beekes 1995, Szemerényi 1996, Meier-Brügger 2003, Fortson 2004) and the ground seems to be pretty well covered. This book, however, aims to be an introduction of a different sort. Whereas the works mentioned give up-to-date and (usually) reliable information on current thinking about what is known in Indo-European studies, the aim here is rather to present areas where there currently is, or ought to be, debate and uncertainty. Whereas previous introductions have aimed for the status of handbooks, reliable guides to the terrain presented in detail, this one aspires more to the status of a toolkit, offering up sample problems and suggesting ways of solving them. The reader who wants to know the details of how labio-velar consonants developed in Indo-European languages or the basis for the reconstruction of the locative plural case ending will not find them here; instead they will be able to review in detail arguments about the categories of the Indo-European verb or the syntax of relative clauses. The result is that this book has shorter chapters on areas such as phonology, where there is now more general agreement in the field, and correspondingly longer sections on areas which are passed over more summarily in other introductions. Memory athletes may be disappointed by the reduction in data, but I hope that others will welcome the increase in argumentation.
Thus far, we have not entirely neglected but certainly down-played the role of the lexicon in speech perception. In chapters 5 and 6 we sought to make a case that speech recognizers must be able to build phonological representations of possible word forms, purely on the basis of acoustic phonetic input. Otherwise, it is difficult to account for the robustness and flexibility of our ‘bottom-up’ speech recognition capabilities. But it is also true that the goal of speech recognition is to identify words in the service of understanding whole utterances, and that there are a host of ‘top-down’ lexical, semantic and discourse effects that arise as a consequence of lexical retrieval mechanisms. Such effects express themselves in (a) the different ways that we respond perceptually to words (e.g. kelp) versus non-words (whether pronounceable like klep – a possible word – or phonotactically illegal, like tlep), (b) neighbourhood effects, arising from the fact that particular words vary in the number of phonologically near neighbours that compete for matching to the acoustic signal, and (c) other effects, such as phoneme restoration (see below), which may or may not be lexical in origin, but nevertheless require explanation.
The account given in previous chapters has characterized speech perception as an active process whereby phonological forms are constructed from speech-specific (phonetic) features in the acoustic signal, via the application of specialized perceptual analysers that exploit tacit knowledge of the sound pattern of the language and the sound production constraints of the human vocal tract.
This book is intended as a self-contained introduction to the study of the language–brain relationship for students of cognitive science, linguistics and speech pathology. The essentially interdisciplinary nature of the subject matter posed considerable difficulties for the author and will likely do so also for the reader. So please be warned. Despite my considerable efforts to keep the pathways open between the villages of the cognate disciplines concerned, the jungle is everywhere and its capacity for re-growth is relentless.
As appropriate for an introductory text, the book is accessible to a wide readership. Foundational concepts and issues on the nature of language, language processing and brain language disorders (aphasiology) are presented in the first four chapters. This section of the book should complement many stand-alone introductory courses in linguistics, psychology or neuroanatomy. Subsequent sections deal with successively ‘higher’ levels of language processing and their respective manifestations in brain damage: speech perception (chapters 5–8); word structure and meaning (lexical processing and its disorders; chapters 9–11); syntax and syntactic disorder (agrammatism; chapters 12–14); discourse and the language of thought disorder (chapters 15–16), followed by a brief final chapter, speculating on unsolved problems and possible ways forward. Each major section of the book begins by posing the principal questions at an intuitive level which, it is hoped, is accessible to all. The often quite specialized research methods by which answers to these questions have been sought are then introduced, in a selective review of the literature.
In the previous chapter, we drew some tentative conclusions and made some quite strong claims about speech perception: that it is a ‘bottom-up’, highly modular process; that the objects of speech perception are abstract, hierarchically structured phonological targets; that speech differs in important respects from other kinds of auditory perception; that special, species-specific neural machinery may be required to support speech perception. It is time to consider more closely the experimental evidence to see if these claims can be substantiated, to examine the tools that have been developed for studying speech perception, and to approach the controversies that currently animate the field. We will not attempt a comprehensive review, but simply explore some long-standing themes and introduce the specialized experimental paradigms with which one needs to be familiar to understand current research.
One of the guiding themes of speech perception research has been the question of whether ‘speech is special’: whether specific adaptation of the perceptual system has occurred with the evolution of human language to support the demands of spoken communication. Several key concepts and experimental paradigms have been developed in an attempt to answer this question. Two early paradigms, dichotic listening and categorical perception, provide the foundational concepts for understanding contemporary issues. Specifically, the dichotic listening paradigm raises questions of hemispheric specialization and cortical mechanisms for speech perception that remain central to contemporary neuroimaging studies.
The previous chapter's discussion of lexical semantics sought to address the fundamental problem of how word meanings are modified by context in sentence processing. These considerations are central to the goal of developing a combinatorial semantics of natural language processing – a task that is beyond the grasp of current theory or computation. However, it is important not to lose sight of the fact that words and idioms (phrase-like chunks of the kick-the-bucket variety) are also discrete linguistic entities, and that isolated word recognition, retrieval and production constitute a quasi-modular component of linguistic competence in its own right. Severe word-finding difficulties constitute a criterial symptom for a diagnosis of anomic aphasia or serve as a sign of incipient Alzheimer's disease. Phonemic or semantic paraphasias are characteristic features of fluent speech production in Wernicke's aphasia and may be accompanied by an agnosia (perceptual deficit) for the phonological form or the meanings of isolated words.
Indeed it has been argued that an initial stage of context-independent word recognition is required, in which all of the possible roles that a given word may play in different linguistic contexts are activated (perhaps in proportion to their likelihood of use), prior to the selective inhibitory or excitatory effects of context which rapidly constrain the system to settle on a dominant interpretation. This in fact was the conclusion to which Swinney (1979) was led in his celebrated ‘bug’ study of CMLP reported previously (chapter 10).
As we indicated in the previous chapter, a breakdown at the discourse level of language comprehension would be expected to reveal itself in difficulties of reference retrieval and failure to successfully construct and maintain a mental model that serves the interlocutors engaged in a particular discourse. Discourse construction, insofar as it involves formulating communicative intentions, reference management and taking account of the listener's perspective, places high demands on working memory and attentional resources. Deficits in these higher cognitive abilities are likely to result in violations of the Gricean pragmatic felicity conditions mentioned in the previous chapter. The spoken language which results from poor discourse model construction or management may manifest itself in incoherent or bizarre speech that is likely to be characterized as ‘thought disordered’ in the psychiatric literature (Andreasen, 1982).
Thought disorder is traditionally clinically characterized in terms of either ‘looseness or bizarreness of association’ between ideas, or as an absence of appropriate expressions which enable the listener to construct a coherent model of what the speaker is talking about. The term formal thought disorder is often used specifically to indicate that what is being referred to is the ‘form’ of thought or its overt expression, and not necessarily a pathology of an underlying cognitive process or condition, which might nevertheless be responsible for the production of thought disordered speech.
There has been much debate about the underlying cognitive pathology of thought disordered speech. The symptom is most closely identified with schizophrenia in its acute phase.
Our discussion thus far has been confined to problems of word recognition and the retrieval of phonological forms from the speech signal. But we have yet to address three core issues of language processing at the lexical level: (1) how word meanings are represented in the mental lexicon; (2) how lexical meanings are assigned to words in the context of sentence processing; and (3) the precise nature of the items which make up the mental lexicon, which we have thus far identified as ‘words’, but have not attempted to define with any precision.
We shall tackle the third of these questions first, the nature of items in the mental lexicon. Perhaps the fundamental issue here is: to what extent do language users decompose words into their constituent morphemes, or minimal units of meaning, as discussed in chapter 2? It is almost universally acknowledged, by linguists and psycholinguists alike, that the units of lexical representation are smaller than words, the units conventionally separated by white space in printed text. Few would argue, for example, that cat and cats, although they are clearly different words, constitute separate entries in the mental lexicon. Rather, cats is a morphological construction, made up of the lexeme cat plus the plural inflectional suffix: i.e. cat + s. The assumption here is that in the course of processing words for meaning, listeners ‘strip’ inflectional affixes off word forms to access lexical meanings (Taft and Forster, 1976). But how far does this affix stripping extend?
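The affix-stripping idea can be sketched as a simple lookup procedure. This is a toy illustration only: the lexicon and suffix list below are invented for the example, and the psycholinguistic model proposed by Taft and Forster is considerably more nuanced than a literal dictionary lookup.

```python
# Toy illustration of 'affix stripping': a word form is first looked up
# whole; failing that, an inflectional suffix is removed and the remaining
# stem is looked up. LEXICON and INFLECTIONAL_SUFFIXES are hypothetical
# examples, not a model of the actual mental lexicon.

LEXICON = {"cat", "dog", "walk", "defend"}
INFLECTIONAL_SUFFIXES = ("s", "ed", "ing")

def strip_affixes(word):
    """Return the lexeme accessed for `word`, or None if no entry is found."""
    if word in LEXICON:                 # direct match: no decomposition needed
        return word
    for suffix in INFLECTIONAL_SUFFIXES:
        if word.endswith(suffix):
            stem = word[: -len(suffix)]
            if stem in LEXICON:         # stem matches after stripping
                return stem
    return None                         # e.g. a non-word like 'klep'

print(strip_affixes("cats"))     # -> cat
print(strip_affixes("walking"))  # -> walk
print(strip_affixes("klep"))     # -> None
```

Even this crude sketch raises the question posed in the text: the procedure works for transparent inflection (cat + s), but it says nothing about where stripping should stop for derivational forms.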
In the book of Genesis, just after Adam arrives in the Garden of Eden, he displays a command of vocabulary in naming all the animals. A little later he comments on his partner Eve: “This one at last is bone of my bones and flesh of my flesh. This one shall be called Woman, for from man was she taken” (Tanakh, 5). Adam improves substantially after his holophrastic period of animal naming to develop almost immediately a grammar using complete sentences with subordinate clauses, past tense and passive. A literal interpretation of Genesis – whatever literal might mean given the opacity of translations from the original Hebrew and the vagaries of individual exegesis – meets with as much success in describing the development of language as Shelley's account of Frankenstein's reanimation.
How the first humans came to have language is a question that relates both to ontogenetic development (the growth of language in the individual) and to phylogenetic development (the growth of language in the species). It is doubtful that we will ever learn how the first language developed, whether all at once through a mutated gene, through the gradual development of a symbolic system of communication, or through increases in brain size.
Alice Kaplan (1993, 47) describes the initial phase of her time in a French Swiss boarding school when she hears her roommates speaking German, a language she doesn't know. “Then I started discriminating the vowels from the consonants. The same sounds repeated themselves again and again – those were words – and then I could hear the difference between the verbs and the nouns. I heard the articles that went with the nouns, and then I heard where the nouns and the verbs went in sentences.” Kaplan, in this early encounter with German, starts where all language learners must, with the segmentation of the speech stream into phonemes, morphemes, syntactic categories and combinatorial syntax. Her autobiographical account takes her from high school in Switzerland to a position as professor of French literature at Yale University, an odyssey that covers the progressive refinement of her French language skills.
While Kaplan's mastery of French gives her competence comparable to that of a native speaker, the path to that competence seems to differ from the pattern of L1A. The apparent lack of a generalized process and schedule for second language acquisition is at first glance quite different from the tightly choreographed timeline of first language acquisition, to the point that some linguists claim that the two enterprises are not at all similar (Bley-Vroman 1990; Clahsen and Muysken 1996; Meisel 1997a).
In the previous chapter we inquired into the structure of words and the extent to which they can be decomposed into smaller constituents, morphemes. Morphological decomposition was seen to be justified, up to a point, on evidence from cross-modal semantic priming studies. The evidence suggested that morphological decomposition may be justified insofar as the morphological components of a word are semantically transparent, i.e. to the extent that the meaning of the whole word can be clearly related to the meanings of its component morphemes (e.g. indefensible = <not>(<defend>(<able>))). However, we did not provide an explicit account of ‘semantic transparency’, other than to appeal to language users' intuitions about the meanings of words. A theory of lexical semantics should provide an explicit account of word meaning: of how similarities and differences in word meaning arise, and of how various word-meaning relations, such as synonymy (violin – fiddle), antonymy (long – short), hyponymy (horse – animal) etc., are established.
We defined morphology as the syntax of the word. This chapter concerns the semantics of words or word meanings. A useful theory of lexical semantics needs to account not only for the meaning of individual words but for how word meanings change in context with other words. Consider the meaning of good in the phrase good friend (<loyal, reliable>). Now consider the meaning of the same word in the phrase good lover or good meal.
In the previous chapter we outlined two opposing theories of the role that syntactic processing plays in sentence comprehension. According to one view – the modular theory, inspired by early psycholinguistic attempts to apply Chomsky's generative grammar – a specialized syntactic parser assigns grammatical structure to an input sentence, yielding an intermediate representation which strongly constrains the assignment of meaning, but which needs to be further operated upon by interpretive (semantic and pragmatic) processes to yield the full meaning of the utterance. According to the opposing view, dubbed the interactive model, sentence meanings are assigned incrementally to word sequences as soon as they are identified, making maximal use of whatever constraints can be applied from the speakers' tacit knowledge of the grammar of their language, pragmatic knowledge and expectations, or even collocational restrictions on word usage (such as habitual phrases or idioms). Sometimes these cues will conflict, in which case constraints may compete to produce local ambiguities which are usually resolved by further input.
In principle, it should be possible to decide between these opposing models (or some intermediate theory between the two) if we had some means of observing changes in state of the language processor as it steps through the input sentence in real time. We may never fully achieve this privileged perspective, but over the past two or three decades a variety of ‘on-line’ techniques, based initially upon behavioural reaction time measurements and latterly upon functional neural imaging techniques, have been devised, which arguably enable us to observe local fluctuations in ‘processing load’, as sentences are judged or comprehended in real time.
This book is about language processing in the human brain and, more specifically, what happens to spoken language when certain areas of the brain are damaged. Language processing is what takes place whenever we understand or produce speech; a mundane task, but one of extraordinary complexity, whose mysteries have baffled some of the greatest minds across the centuries.
Neurolinguistics is the technical term for this field, introduced into academic usage by Harry Whitaker (1971), who founded the leading journal that bears this title. As Whitaker noted at the time, it is a key assumption of neurolinguistics that ‘a proper and adequate understanding of language depends upon correlating information from a variety of fields concerned with the structure and function of both language and brain, minimally neurology and linguistics’. Today, some thirty years later, it seems necessary to add ‘cognition’ or cognitive science to the list of minimally necessary disciplines. A well-articulated cognitive science is needed to provide the hoped-for integration of two otherwise very different fields of study: language and neurobiology.
Considerable progress and a vast body of research have accumulated since then. Yet leading advocates of the cognitive science perspective on language as a biologically grounded human ability (such as Chomsky, Pinker and Deacon, to mention just three) disagree on some fundamental questions. To what extent are our language learning capabilities ‘hard-wired’ into the human brain and unique to the species? How is ‘innate linguistic competence’ actually deployed in language learning?