Throughout Chapter 5, we approached the description of processes involved in word formation as if the unit called the “word” was always a regular and easily identifiable form, even when it is a form such as ambimoustrous that we may never have seen before. This new word is based on an established form, ambidextrous (“able to use either hand equally well”), with the middle element, dext(e)r (“right hand”), replaced by mous(e). Clearly this single word has more than one element contributing to its meaning. Yet we don’t normally think of a “word” as having internal elements. We tend to think of words as those individual forms marked in black with bigger spaces separating them in written English. In this chapter, we’ll investigate ways of taking a closer look inside words.
This chapter discusses different theoretical perspectives on modeling language understanding and learning. The first section discusses the role of rules in learning and understanding a language. In the second section, we introduce Fodor's argument that natural language learning requires an innate representational system, which he calls the language of thought. The third section looks at a very different approach involving connectionist neural networks. Connectionists argue that the trajectory of language learning can be simulated in networks that lack explicitly encoded linguistic rules – e.g. in neural networks that can be trained to learn the past tense forms of English verbs. The last section looks at the Bayesian approach to language learning.
Some children grow up in a supportive social environment where more than one language is used and are able to acquire a second language in circumstances similar to those of first language acquisition. However, most of us are not exposed to a second language until much later and, like David Sedaris, our ability to use a second language, even after years of study, rarely matches ability in our first language.
This origin story from the Iwaidja people of Australia, illustrated in the painting above, offers an explanation not only of where language came from, but also of why there are so many different languages. Among English speakers, there have been multiple attempts to provide a comparable explanation, but little evidence to support any of them. Instead of a belief in a single mythical earth mother, we have a variety of possible beliefs, all fairly speculative.
This chapter addresses influential computational models of the mind. In the first section, we look at the physical symbol system hypothesis proposed by Herbert Simon and Allen Newell. The hypothesis suggests that thinking is a process of manipulating symbol structures according to well-defined rules. Next, we introduce Jerry Fodor's language of thought hypothesis, which proposes that thinking has a language-like grammatical structure. We explain why the language of thought hypothesis is a concrete implementation of the physical symbol system hypothesis in the human cognitive system. The last section explores an argument against the physical symbol system hypothesis developed by John Searle. His Chinese room argument is intended to refute the claim that manipulating symbols is sufficient for intelligence.
We can define writing as the symbolic representation of language through the use of graphic signs. Unlike speech, it is a system that is not simply acquired, but has to be learned through sustained conscious effort. Not all languages have a written form and, even among people whose language has a well-established writing system, there are large numbers of individuals who cannot use the system.
In Chapter 7, we moved from the general categories of traditional grammar to more specific methods of describing the structure of phrases and sentences. When we concentrate on the structure and ordering of components within a sentence, we are studying the syntax of a language. The word “syntax” comes originally from Greek and literally means “a putting together” or “arrangement.” In earlier approaches, there was an attempt to produce an accurate description of the sequence or ordering (“arrangement”) of elements in the linear structure of the sentence. In more recent attempts to analyze structure, there has been a greater focus on the underlying rule system that is the basis of that linear structure.
Previous chapters have all developed in different ways the core idea that cognition is information processing. This chapter looks at a very different approach, using the mathematical and conceptual tools of dynamical systems theory to model cognitive skills and abilities. The first section explains how dynamical systems theory can describe cognitive skills and abilities without using the framework of representation and information processing. The second section examines how dynamical systems theory explains two examples of child development, with particular attention to the time-sensitive nature of the dynamical systems at work in these examples.
Building on the discussion of neuroanatomy in Chapter 3, this chapter explores how the brain is wired. The first section looks at brain maps, based on anatomical connectivity research, that have been developed to clarify the relationship between structure and function in the brain. The second section introduces neurophysiological techniques, including EEG, MEG, PET, and fMRI, which allow cognitive scientists to map brain functions and connectivity. We then compare the temporal and spatial resolution of these techniques to see their different strengths and weaknesses in cognitive neuroscience studies. In the following two sections, we look at two cases that combine multiple techniques to explore the mechanism of visual attention in the brain. Finally, the last section discusses some reasons for caution when interpreting neural imaging data.
This chapter introduces the implementation of artificial agents in robotics. The first section looks at the early development of robotics in GOFAI (Good Old-Fashioned AI). SHAKEY, a representative example designed to operate and perform simple tasks in the real world, illustrates the physical symbol system hypothesis. The second section introduces alternative ideas from situated cognition theorists. These ideas, inspired by studies of simple cognitive systems such as those of insects, aim to build robots with simple architectures that can nevertheless solve complex problems. The third section reviews how these theoretical ideas have been translated into particular robotic architectures, focusing on subsumption architectures and some examples of behavior-based robotics.
In the preceding chapters we have reviewed in some detail the various features of language that people use to produce and understand linguistic messages. Where is this ability to use language located? The obvious answer is “in the brain.” However, it can’t be just anywhere in the brain. For example, it can’t be where damage was done to the right hemisphere of the patient’s brain in Alice Flaherty’s description. The woman could no longer recognize her own leg, but she could still talk about it. The ability to talk was unimpaired and hence clearly located somewhere else in her brain.