Consistent with the classic juxtaposition of reason and emotion, moods and emotions have long been assumed to interfere with problem solving. Recent advances in psychology's understanding of the interplay of feeling and thinking suggest a more complex story: Positive as well as negative moods and emotions can facilitate as well as inhibit problem solving, depending on the nature of the task. Moreover, the same feeling may have differential effects at different stages of the problem-solving process. In addition, nonaffective feelings, such as bodily sensations and cognitive experiences (e.g., fluency of recall or perception), may also influence problem solving, often paralleling the effects observed for affective feelings. This chapter summarizes key lessons learned about the interplay of feeling and thinking and addresses their implications for problem solving. To set the stage, we begin with a summary of key elements of the problem-solving process.
ELEMENTS OF PROBLEM SOLVING
In the most general sense, “a problem arises when we have a goal – a state of affairs that we want to achieve – and it is not immediately apparent how the goal can be attained” (Holyoak, 1995, p. 269). Consistent with the spatial metaphors of ordinary language use, where we “search for a way to reach the goal,” “get lost” in a problem, meet “roadblocks” or have to “backtrack,” problem solving is typically conceptualized as search through a metaphorical space (Duncker, 1945).
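The search metaphor can be made concrete with a small sketch. The following Python fragment is not from the chapter; the names solve, is_goal, and successors are illustrative assumptions. It treats a problem as an initial state, a goal test, and operators that generate successor states, and then searches the resulting problem space breadth-first.

```python
from collections import deque

def solve(initial_state, is_goal, successors):
    """Breadth-first search over a problem space: returns a list of states
    from the initial state to a goal state, or None if the goal is unreachable."""
    frontier = deque([[initial_state]])   # paths awaiting expansion
    visited = {initial_state}             # states already reached
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):                # goal recognized: the problem is solved
            return path
        for nxt in successors(state):     # apply each operator to the current state
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                           # every path led to a dead end

# Toy problem: reach 10 from 1, where the only "moves" are doubling or adding 1.
if __name__ == "__main__":
    print(solve(1, lambda s: s == 10, lambda s: [s * 2, s + 1]))
    # -> [1, 2, 4, 5, 10]
```

Under this framing, "getting lost" corresponds to expanding unpromising branches of the space, and "backtracking" corresponds to returning to an earlier state and trying a different operator.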
What are the problems that you are currently trying to solve in your life? Most of us have problems that have been posed to us (e.g., assignments from our supervisors). But we also recognize problems on our own (e.g., you might have noticed the need for additional parking space in the city where you work). After identifying the existence of a problem, we must define its scope and goals. The problem of parking space is often seen as a need for more parking lots or parking garages. However, in order to solve this problem creatively, it may be useful to turn it around and redefine it as a problem of too many vehicles requiring a space in which to sit during the workday. Redefining the problem in this way may prompt a different kind of solution: You decide to organize a carpool among people who use downtown parking lots and institute a daytime local taxi service using these privately owned vehicles. Thus, you solve the problem not as you originally posed it but as you later reconceived it.
Problem solving does not usually begin with a clear statement of the problem; rather, most problems must be identified in the environment; then they must be defined and represented mentally. The focus of this chapter is on these early stages of problem solving: problem recognition, problem definition, and problem representation.
THE PROBLEM-SOLVING CYCLE
Psychologists have described the problem-solving process in terms of a cycle (Bransford & Stein, 1993; Hayes, 1989; Sternberg, 1986).
The combination of moment-to-moment awareness and instant retrieval of archived information constitutes what is called the working memory, perhaps the most significant achievement of human mental evolution.
(Goldman-Rakic, 1992, p. 111)
Working memory plays an essential role in complex cognition. Everyday cognitive tasks – such as reading a newspaper article, calculating the appropriate amount to tip in a restaurant, mentally rearranging furniture in one's living room to create space for a new sofa, and comparing and contrasting various attributes of different apartments to decide which to rent – often involve multiple steps with intermediate results that need to be kept in mind temporarily to accomplish the task at hand successfully.
(Shah & Miyake, 1999, p. 1)
More than 25 years ago, Baddeley and Hitch (1974) lamented, “Despite more than a decade of intensive research on the topic of short-term memory (STM), we still know virtually nothing about its role in normal information processing” (p. 47). The primary concern for Baddeley and Hitch was the presumed centrality of limited-capacity short-term memory in contemporary models of memory, including Atkinson and Shiffrin's (1968) “modal model.” For example, Baddeley and Hitch described a brain-damaged patient (K.F.) who exhibited grossly deficient performance on tests of short-term memory but normal performance on long-term learning tasks. Logically, this pattern could not occur if, as the modal model assumes, information must pass through short-term memory in order to reach long-term memory.
Imagine you are elected mayor of a town and are given absolute power over all town resources. You may hire workers for the local factory, raise taxes, have schools built, and close down local businesses. The one goal you are to strive for is to make certain that the town prospers.
A situation like this, simulated on a computer, was used in the early 1980s by Dietrich Dörner and his colleagues (e.g., Dörner & Kreuzig, 1983; Dörner, Kreuzig, Reither, & Stäudel, 1983) in Bamberg, Germany, to study individual differences in the human ability to solve complex problems. Dörner was interested in understanding why some of his research participants were much more successful in building prosperous towns than were others. One of his rather striking and hotly debated conclusions was that individual differences in the ability to govern the simulated town were not at all related to the individuals' IQs. Rather, an individual's ability to turn the town into a prosperous community seemed to be related to his or her extroversion and self-confidence.
In this chapter we are concerned with the question of what determines individual differences in complex problem-solving competence. The answer to this question may be traced from many different viewpoints: cognitive, social, biological, and evolutionary, to name just a few. Here, we focus on the contribution of cognitive psychology to providing an answer to the question.
Imagine that on some unlucky weekday morning your coffee machine breaks down. For many of us, notably the authors of this chapter, this would be a serious problem. One way of solving this urgent problem is by physically fumbling with the broken machine, trying to fix it by using problem-solving heuristics or sheer trial and error. Alternatively, text could come to the rescue. You could read the manual that came with the machine. You could look in a book on household repairs. You could also consult the Internet. When you pursue any of these latter options, you must be able to comprehend the text and apply what you have learned. This chapter explores the factors that predict your success in solving problems such as that of the broken coffee machine after reading a text.
Whenever texts are successfully used to solve a problem, the solver must accurately represent both the problem and the messages presented in the texts. A text representation is a cognitive representation that has some reference to elements, features, or structural patterns in the explicit text. Many factors contribute to the construction of text representations. To put it simply, two classes of factors are properties of the text (such as its organization into topics, subtopics, and sentences) and properties of the reader (such as domain-specific knowledge and general reading skill). One of the central assumptions of this chapter is that text representations constrain problem solving.
By
Janet E. Davidson, Associate Professor of Psychology, Lewis & Clark College,
Robert J. Sternberg, IBM Professor of Psychology and Education, Yale University; Director, Yale Center for the Psychology of Abilities, Competencies and Expertise (PACE Center)
Almost everything in life is a problem. Even when we go on vacations to escape our problems, we quickly discover that vacations merely bring problems that differ in kind or magnitude from the ones of daily living. In addition, we often find that the solution to one problem becomes the basis of the next one. For example, closing on a house solves the problem of buying a house, but usually means the initiation of a whole new set of problems pertaining to home ownership.
Because problems are a central part of human life, it is important to understand the nature of problem solving and the sources that can make it difficult. When people have problems, how do they identify, define, and solve them? When and why do they succeed at problem solving and when and why do they fail? How can problem-solving performance be improved?
Our goal for this book is to organize in one volume what psychologists know about problem solving and the factors that contribute to its success or failure. To accomplish this goal, we gave each of our contributors the following problem: “Use your area of expertise to determine what makes problem solving difficult.” By examining why problem solving is often difficult for people, we hope to discover how to make it easier and more productive. However, the book's focus is not a discouraging one that emphasizes only failures in problem solving. Instead, it provides a balanced view of why problems are and are not solved successfully.
By
Anne-Marie P. Guerra Currie, STI Healthcare, Inc., Austin,
Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin,
Keith Walters, Associate Professor of Linguistics, Anthropology, and Middle Eastern Studies, University of Texas at Austin
Crosslinguistic and crossmodality research has proven to be crucial in understanding the nature of language. In this chapter we seek to contribute to crosslinguistic sign language research and discuss how this research intersects with comparisons across spoken languages. Our point of departure is a series of three pair-wise comparisons between elicited samples of the vocabularies of Mexican Sign Language (la Lengua de Señas Mexicana or LSM) and French Sign Language (la Langue des Signes Française or LSF), Spanish Sign Language (la Lengua de Signos Española or LSE), and Japanese Sign Language (Nihon Syuwa or NS). We examine the extent to which these sample vocabularies resemble each other. Writing about “sound–meaning resemblances” across spoken languages, Greenberg (1957:37) posits that such resemblances are due to four types of causes. Two are historical: genetic relationship and borrowing. The other two are connected to nonhistorical factors: chance and shared symbolism, which we here use to mean that a pair of words happens to share the same motivation, whether iconic or indexic. These four causes are likely to apply to sign languages as well, although – as we point out below – a genetic linguistic relationship may not be the most appropriate account of the development of three of the sign languages discussed in this chapter: LSF, LSM, and LSE.
The history of deaf education through the medium of signs in Mexico sheds light on why the three specific pair-wise comparisons that form the basis of this study are informative.
Signed language research in recent decades has revealed that signed and spoken languages share many properties of natural language, such as duality of patterning and linguistic arbitrariness. However, the fundamental differences between the oral–aural and visual–gestural modes of communication raise the question of how modality affects linguistic structure. Various researchers have argued that, despite some superficial differences, signed languages also display formal structuring at various levels of grammar and a similar language acquisition timetable, suggesting that the principles and parameters of Universal Grammar (UG) apply across modalities (Brentari 1998; Crain and Lillo-Martin 1999; Lillo-Martin 1999). Fromkin (1973) suggested that signed and spoken languages engage the same kinds of cognitive systems and reflect the same kinds of mental operations, while arguing that these similarities do not make the differences arising from the two modalities uninteresting. Meier (this volume) compares the intrinsic characteristics of the two modalities and suggests some plausible linguistic outcomes. He also comments that the opportunity to study other signed languages in addition to American Sign Language (ASL) offers a more solid basis for examining this issue systematically.
This chapter suggests that a potential source of modality effect may lie in the use of space in the linguistic and discourse organization of nominal expressions in signed language.
By
Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin,
Kearsy Cormier, Doctorate in Linguistics, University of Texas at Austin,
David Quinto-Pozos, Teacher, Department of Linguistics, University of Pittsburgh
The hands of a signer move within a three-dimensional space. Some signs contact places on the body that are near the top of the so-called signing space. Thus, the American Sign Language (ASL) signs FATHER, BLACK, SUMMER, INDIA, and APHASIA all contact the center of the signer's forehead. Other signs contact body regions low in the signing space: RUSSIA, NAVY, and DIAPER target locations at or near the signer's waist. Still other signs move from location to location within space: the dominant hand of SISTER moves from the signer's cheek to contact with the signer's nondominant hand; that nondominant hand is located in the “neutral space” in front of the signer's torso. In the sign WEEK, the dominant hand (with its extended index finger) moves across the flat palm of the nondominant hand. As these examples indicate, articulating the signs of ASL requires that the hands be placed in space and be moved through space.
Is this, however, different from the articulation of speech? The oral articulators also move in space: the mouth opens and closes, the tongue tip and tongue body move within the oral cavity, and the velum is raised and lowered. Yet the very small articulatory space of speech is largely hidden within our cheeks, meaning that the actions of the oral articulators occur largely (but not entirely) out of sight. In contrast, the actions of the arms and hands are there for everyone to see.
A Deaf-Blind person has only one channel through which conventional language can be communicated, and that channel is touch. Thus, if a Deaf-Blind person uses signed language for communication, he must place his hands on top of the signer's hands and follow that signer's hands as they form various handshapes and move through the signing space. A sign language such as American Sign Language (ASL) that is generally perceived through vision must, in this case, be perceived through touch.
Given that contact between the signer's hands and the receiver's hands is necessary for the Deaf-Blind person to perceive a signed language, we may wonder what becomes of the nonmanual signals of visual–gestural language (e.g. eyebrow shifts, head orientation, eye gaze), which cannot be perceived through touch. These elements play a significant role in the grammar of signed languages, often allowing for the occurrence of various word orders and syntactic structures. One of the central questions motivating this study was how the absence of such nonmanual elements might influence the form that tactile-gestural language takes.
Thus, this study began as an effort to describe the signed language production of Deaf-Blind individuals, with a focus on areas where nonmanual signals would normally be used in visual–gestural language. A review of the narrative data from this study, however, quickly made it evident that the Deaf-Blind subjects did not utilize nonmanual signals in their signed language production.
By
Richard P. Meier, Professor of Linguistics and Psychology, University of Texas at Austin,
Kearsy Cormier, Doctorate in Linguistics, University of Texas at Austin,
David Quinto-Pozos, Teacher, Department of Linguistics, University of Pittsburgh
At first glance, a general linguistic audience may be surprised to find a phonology section in a book that focuses on sign language research. The very word “phonology” connotes a field of study having to do with sound (phon). Sign languages, however, are obviously not made up of sounds. Instead, the phonetic building blocks of sign languages are derived from movements and postures of the hands and arms. Although early sign researchers acknowledged these obvious differences between signed and spoken languages by referring to the systematic articulatory patterns found within sign language as “cherology” (Stokoe 1960; Stokoe, Casterline, and Croneberg 1965), later researchers adopted the more widely used term “phonology” to emphasize the underlying similarity. Although different on the surface, both sign and speech are composed of minimal units that create meaningful distinctions (i.e. phonemes, in spoken languages), and these units are subject to language-specific patterns (for discussions of phonological units and patterns in American Sign Language [ASL], see Stokoe 1960; Stokoe, Casterline, and Croneberg 1965; Klima and Bellugi 1979; Liddell 1984; Liddell and Johnson 1986, 1989; Wilbur 1987; Brentari 1990; Corina and Sandler 1993; Perlmutter 1993; Sandler 1993; Corina 1996; Brentari 1998).
The research reported in this set of chapters contributes to our understanding of sign phonology, and specifically to the issue of whether and how the way in which a language is produced and perceived may influence its underlying phonological structure.
Language in the visual–gestural modality presents a unique opportunity to explore fundamental structures of human language. One of the larger, more complex questions that arises when examining signed languages is the following: how, and to what degree, does the modality of a language affect the structure of that language? In this context, the term “modality” refers to the physical systems underlying the expression of a language; spoken languages are expressed in the aural-oral modality, while signed languages are expressed in the visual–gestural modality.
One apparent difference between signed languages and spoken languages relates to the linguistic expression of reference. Because they are expressed in the visual–gestural modality, signed languages are uniquely equipped to convey spatial and referential relationships in a more overt manner than is possible in spoken languages. Given this apparent difference, it is not unreasonable to ask whether systems of pronominal reference in signed languages are structured according to the same principles as those governing pronominal reference in spoken languages.
Following this line of inquiry, this typological study explores the grammatical distinctions that are encoded in pronominal reference systems across spoken and signed languages. Using data from a variety of languages representing both modalities, two main questions are addressed. First, are the categories encoded within pronoun systems (e.g. person, number, gender, etc.) the same across languages in the two modalities? Second, within these categories, is the range of distinctions marked governed by similar principles?
As is well known, negation in natural languages comes in many different forms. Crosslinguistically, we observe differences concerning the morphological character of the Neg (negation) element as well as concerning its structural position within a sentence. For instance, while many languages make use of an independent Neg particle (e.g. English and German), in others, the Neg element is affixal in nature and attaches to the verb (e.g. Turkish and French). Moreover, a Neg particle may appear in sentence-initial position, preverbally, postverbally, or in sentence-final position (for comprehensive typological surveys of negation, see Dahl 1979, 1993; Payne 1985).
In this chapter I am concerned with morphosyntactic and phonological properties of sentential negation in some spoken languages as well as in German Sign Language (Deutsche Gebärdensprache or DGS) and American Sign Language (ASL). Sentential negation in DGS (as well as in other sign languages) is particularly interesting because it involves a manual and a nonmanual element, namely the manual Neg sign NICHT ‘not’ and a headshake that is associated with the predicate. Despite this peculiarity, I show that on the morphosyntactic side of the Neg construction, we do not need to refer to any modality-specific structures and principles. Rather, the same structures and principles that allow for the derivation of negated sentences in spoken languages are also capable of accounting for the sign language data.
On the phonological side, however, we do of course observe modality-specific differences; those are due to the different articulators used.
By
David P. Corina, Associate Professor of Psychology University of Washington in Seattle, WA,
Ursula C. Hildebrandt, Psychology doctoral student University of Washington in Seattle, WA
Linguistic categories (e.g. segment, syllable, etc.) have long enabled cogent descriptions of the systematic patterns apparent in spoken languages. Beginning with the seminal work of William Stokoe (1960; 1965), research on the structure of American Sign Language (ASL) has demonstrated that linguistic categories are useful in capturing extant patterns found in a signed language. For example, recognition of a syllable unit permits accounts of morphophonological processes and places constraints on sign forms (Brentari 1990; Perlmutter 1993; Sandler 1993; Corina 1996). Acknowledgment of Movement and Location segments permits descriptions of infixation processes (Liddell and Johnson 1985; Sandler 1986). Feature hierarchies provide accounts of assimilations that are observed in the language and also help to explain those that do not occur (Corina and Sandler 1993). These investigations of linguistic structure have led to a better understanding of both the similarities and differences between signed and spoken language.
Psycholinguists have long sought to understand whether the linguistic categories that are useful for describing patterns in languages are evident in the perception and production of a language. To the extent that behavioral reflexes of these theoretical constructs can be quantified, they are deemed to have ‘psychological reality’. Psycholinguistic research has been successful in establishing empirical relationships between a subject's behavior and linguistic categories using reaction time and electrophysiological measures.
This chapter describes efforts to use psycholinguistic paradigms to explore the psychological reality of form-based representations in ASL.