The chapter presents and discusses empirical data on the neuropsychology of gesture production, focusing on the specific contributions of the right and left hemispheres to the generation of gestures. Because the neuroscientific method chosen has a substantial impact on study results, and different methodologies can even yield apparently opposing findings on gesture production, the different neuropsychological methods, their paradigms, and their limitations are presented in detail. Studies of spontaneous gesture production evidence a substantial contribution of the right hemisphere to gesture production, while studies of gesture production on command show a relevant role of the left hemisphere. Gestures generated in association with right-hemispheric functions such as spatial cognition, nonverbal emotional expression, and global and metaphorical thinking appear to be generated in the right hemisphere, while gestures linked to tool-use praxis are generated in the left hemisphere. The findings further provide a neuropsychological basis for understanding both the complementarity and the dissociation between gestural and verbal messages.
This chapter concerns the use of manual gestures in human–computer interaction (HCI) and user experience research (UX research). Our goal is to empower gesture researchers to conduct meaningful research in these fields. We therefore give special focus to the similarities and differences between HCI research, UX research, and gesture studies when it comes to theoretical framework, relevant research questions, empirical methods, and use cases, i.e. the contexts in which gesture control can be used. As part of this, we touch on the role of various gesture-detecting technologies in conducting this kind of research. The chapter ends with our suggestions for the opportunities gesture researchers have to extend this body of knowledge and add value to the implementation and instantiation of systems with gesture control.
Emblematic gestures (or emblems) have several denominations in the literature (for instance autonomous, quotable, semiotic, folkloric or symbolic gestures). Emblems are culture-bound gestures; they differ interculturally and intraculturally, both among different cultural and linguistic areas and among individuals and social groups within the same culture. These gestures are easily translated into verbal language; they are quotable, equivalent to utterances, and in many cases they have names. Typical emblems are used – alongside or without words – for greetings, insults or mockery, to indicate places or people (deictics), to refer to the state of a person (to be drunk, to be asleep …), to give interpersonal orders or to represent actions (to eat, to drink, etc.). Many emblems show a clear perlocutionary component (to offer, to threaten, to promise or to swear …). The tradition in the study of emblems has always emphasized their autonomy from speech (they are interpretable with a high level of context independence). Moreover, emblematic capacity can be regarded as associated with illocutionary force, which is one of the most characteristic features of these units.
Iconic aspects of postures and hand movements have long been a central issue in gesture research. A speaker’s body may become a dynamic, viewpointed ‘icon’ (Peirce 1960) of someone or something else, or the hands may create iconic signs. Recent research on iconicity in spoken and signed languages has (re)established its constitutive role in language (e.g. Jakobson 1990) and, more broadly, in multimodal interaction, which naturally includes iconic manual gestures and full-body enactments. Peircean semiotics is combined with cognitive linguistic accounts to demonstrate the role of iconicity in embodied conceptual and linguistic structures and to account for modality-specific manifestations of iconicity in gesture. We provide an overview of gestural modes of representation and techniques of depiction and exemplify the ways in which iconicity interacts with other semiotic principles, such as indexicality, viewpoint, and metonymy. The chapter also highlights empirical research into gestural iconicity as it relates to language acquisition, development, and processing; language and cognition; and the fields of computation and robotics.
A growth point captures the moment of speaking, taking a first-person view. It is thought in language, imbued with mental/social energy, and unpacked into a sentence. It is not a translation of gesture into speech. It is a process of processes. One is the psychological predicate (a notion from Vygotsky), a differentiation of context for what is newsworthy; this is the growth point’s core meaning – the context reshaped into a field of equivalents to make the differentiation meaningful. The core meaning has dual semiosis – opposite semiotic modes – a global-synthetic gesture and analytic-segmented speech, synchronized and coexpressive of the core. The gesture phases foster the synchronization. Cohesive threads to other growth points (a “catchment”) enrich it. A dialectic provides the growth point’s unpacking – the gesture becoming the thesis, the coexpressive speech the antithesis. Jointly, they create the dialectic synthesis. The dialectic synthesis and the unpacking are the same summoned construction-plus-gesture. The growth point, its processes fulfilled, inhabits the speaker’s being, taking up a position in the world of meaning it has created (a conception from Merleau-Ponty).
The chapter provides an overview of Kendon’s research biography, describing the origins of the theoretical notions and categories for analysis that he developed, e.g. gesture unit, gesture phrase, preparation, stroke, hold, kinesic action, and the ways in which gestures can perform referential functions (through forms of pointing and depiction) and pragmatic functions (including operational, performative, modal, and parsing functions). The data he considered included not only speakers’ gestures, but also signed languages of different types, ranging from those used by the Deaf (primary sign languages) to those used for ritualistic or professional reasons (alternate sign languages). Discussion of the latter notes their structural relation to the spoken languages of their users. The locations and communities in which Kendon studied visible action as utterance include Great Britain; Naples, Italy; Papua New Guinea; the United States; and Aboriginal communities in Australia. The chapter finishes with issues related to the study of language origins. Emphasis is placed throughout on the limitations of the term ‘gesture’ and the author’s preference for other terms, such as ‘utterance dedicated visible action’.
This chapter addresses four questions about the relation between gesture and cognition. First, how does the human cognitive system give rise to gestures? A growing body of literature suggests that gestures are based in people’s perceptual and physical experience of the world. Second, do gestures influence how people take in information from the world? Research suggests that producing gestures modifies producers’ experience of the world in specific ways. Third, does externalizing information in gestures affect cognitive processing? There is evidence that expressing spatial and motoric information in gestures has consequences for thinking, including for memory and problem solving. Fourth, how do gestures influence other people’s cognitive processing? Research indicates that gestures can highlight certain forms of information for others’ thinking, thus engaging social mechanisms that influence cognitive processing. Gestures are closely tied to action, and they reveal how producers schematize information in the objects, tasks, events, and situations that they gesture about. In brief, gestures play an integral role in cognition, both for gesture producers and for gesture recipients, because they are actions of the body that bridge the mind and the world.
Gesture is a powerful tool for learning. Gestures reflect a learner’s knowledge and also have the power to change that knowledge. But how early does this ability develop and how might it change over time? Here we discuss the effects of gesture on learning, taking a developmental perspective. We compare how young learners benefit from gesture prior to developing full language skills, as well as how gesture and language work together to support instruction in older children. For both developmental stages, we explore three ways in which gesture can influence learning: (1) by indexing or reflecting a learner’s knowledge, (2) by changing that knowledge through the gestures that learners themselves produce, and (3) by changing that knowledge through the gestures that learners see. Taken together, the evidence suggests that gesture plays a powerful role in learning and education throughout development.
Gestures of the face have a relatively limited presence in scholarly gesture discourses. The use of facial movements as intentional communication has historically been neglected in facial behavior research: the face has been studied primarily for its expressions of emotion, traditionally theorized as involuntary signs of internal affective states. Such emotion expressions are differentiated from facial movements that serve conversational functions in face-to-face dialogue. The facial gestures presented in this chapter illustrate the flexibility and diversity of meanings conveyed by facial communicative actions. Facial gestures can refer to affective events not present in the immediate here and now, communicate understanding of another individual’s affective experience, and convey information about a target referent. Other facial gestures have counterparts in hand gestures with similar pragmatic and semantic functions. The study of the facial gestural components of linguistic communicative events is important to the construction of a comprehensive model of language.
The chapters in the handbook cover five main topics: (1) gesture types in terms of forms and functions, focusing on manual gestures and their use as emblems, recurrent gestures, pointing gestures, and iconic representational gestures, with attention also given to facial gestures; (2) the different methods by which gestures have been annotated and analyzed, and different theoretical and methodological approaches, including semiotic analysis; (3) the relation of gesture to language use, covering language evolution as well as first and second language acquisition; (4) gestures in relation to cognition, including an overview of McNeill’s growth point theory; and (5) gestures in interaction, considering variation in gesture use and intersubjectivity. Across the chapters, the meaning of the term ‘gesture’ is itself debated, as is the relation of gesture to language (as multimodal communication or in terms of different semiotic systems). Gesture use is studied based on data from speakers of various languages and cultures, although a bias toward European cultures remains to be addressed. The handbook also provides overviews of the work of some scholars that was previously not widely available in English.
The chapter considers gesture studies in relation to corpus linguistic work. The focus is on the Multimedia Russian Corpus (MURCO), part of the Russian National Corpus. The chapter includes a brief biography of the creator of this corpus, Elena Grishina. The compilation of the corpus from a set of classic Russian feature films and recorded lectures is described, as are the methods used to annotate it in detail. The gesture coding is not limited to manual/hand gestures, but also includes head gestures and the use of eye gaze. The chapter considers the findings from the corpus as reported in Grishina’s posthumously published volume on Russian gestures from a linguistic point of view. The categories include pointing gestures, representational gestures, auxiliary (discourse-structuring) gestures, and several cross-cutting categories, including gestures in relation to pragmatics and to grammatical categories such as verbal aspect. Additional consideration is given to other video corpora in English (and other languages) that are being used for gesture research, namely the UCLA NewsScape library managed by the Red Hen Lab, and the Television Archive.
Proposals that gesture played a pivotal role in the evolution of language have been highly influential. However, there are many differences between gestural origin theories, including different definitions of ‘gesture’ itself. We use a cognitive semiotic approach to categorize and review these theories. A semiotic system is a combination of signs or signals of a particular type, defined by characteristic properties, together with the interrelations between these signs/signals. Signal systems like spontaneous facial expressions and non-linguistic vocalizations are under less voluntary control than sign systems. The basic distinction concerns whether gesture played an exclusive role in the early stages of language evolution (monosemiotic theories) or whether other semiotic systems were involved as well (polysemiotic theories). The latter may be equipollent, where language and gesture are considered equally prominent from the onset, or pantomimic, where gesture played the main but not exclusive role in the break from predominantly signal-based to sign-based communication. We conclude that pantomimic theories are the most promising kind.
Geneviève Calbris’ semiotic study of French gestures began in the 1970s and shows how gestural signs interface between the concrete and the abstract. Created by analogical links originating in physical experience of the world via processes of mimesis and metonymy, they are activated by contexts of use and constitute diverse semantic constructions: Gesture is able to evoke several notions alternatively (polysemy) or simultaneously (polysign). As expressions of perceptual schemas extracted from physical experience, they prefigure concepts. A Saussurean perspective brings to light relations between physical features of gestures (signifiers) and the notions (signifieds) they are apt to evoke; it reveals signifiers that are common to different gestures (paradigmatic axis of substitution) and how signifiers interweave in gestural sequencing (syntagmatic axis of combination). Gesture expresses, animates, explains, synthesizes information, and anticipates speech. We highlight its utterance functions, its simultaneous multireferentiality, the gestural anticipation of verbal information, and the interplay of tension-relaxation between conversation partners that this can create.
Face-to-face dialogue and the cospeech gestures that occur within it are social as well as cognitive. Cospeech gestures are microsocial. Some of these gestures provide information that directly advances the topic of a dialogue. Others inform the addressee about the state of the dialogue at that moment. Efron distinguished between objective and logical-discursive gestures. McNeill distinguished between propositional gestures and beats. Bavelas et al. distinguished between topical and interactive functions of gestures. Gerwing documented changes in form that marked gestures that were part of common ground. Seyfeddinipur identified discursive and meta-discursive gestures. Kendon described a variety of social, pragmatic gestures. Holler and Wilkin showed that mimicking a gesture conveyed understanding of that gesture. Galati and Brennan noted metanarrative functions of gestures. Kok et al. separated semantic and metacommunicative gestures. We focus on Clark’s distinction between track 1 and track 2 functions in dialogue. Track 1 conveys the basic communicative acts (i.e. topical content), whereas track 2 conveys the metacommunicative acts that ensure successful communication.
This chapter presents an overview of the field of second/foreign language acquisition (SLA) and gesture, which examines gestures as a window onto language acquisition, and gestures as a medium of acquisition. The chapter surveys what is known about effects of specific languages in contact (crosslinguistic influence), general learner behaviors, teachers’ and learners’ gesture practices in and outside of language classrooms, and effects of seeing and producing gestures on language learning. The chapter closes with a research agenda for SLA and gesture studies, outlining some open questions, challenges, and future research topics.