Consumers typically overstate their intentions to purchase products compared to their actual purchase rates, a pattern called “hypothetical bias”. In laboratory choice experiments, we measure participants’ visual attention using mousetracking or eye-tracking while they make hypothetical as well as real purchase decisions. We find that participants spend more time looking at both the price and the product image prior to making a real “buy” decision than prior to making a real “don’t buy” decision. We demonstrate that including such information about visual attention improves prediction of real buy decisions. This improvement is evident, although small in magnitude, with mousetracking data, but is not evident with eye-tracking data.
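As a purely illustrative sketch (not the authors’ model), the kind of comparison reported above can be operationalized by adding dwell-time features to a simple classifier of real purchase decisions; the data file and column names below are hypothetical.

```python
# Hypothetical illustration only: adding visual-attention (dwell-time) features to a
# simple classifier of real purchase decisions. Data file and column names are assumed.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

trials = pd.read_csv("choice_trials.csv")  # one row per real purchase decision (assumed)

baseline = ["price", "said_buy_hypothetical"]             # choice-only predictors (assumed)
attention = ["dwell_price_ms", "dwell_product_image_ms"]  # dwell times from mouse- or eye-tracking

y = trials["real_buy"]  # 1 = real "buy", 0 = real "don't buy"

# Cross-validated accuracy with and without the visual-attention features.
acc_base = cross_val_score(LogisticRegression(max_iter=1000), trials[baseline], y, cv=5).mean()
acc_attn = cross_val_score(LogisticRegression(max_iter=1000), trials[baseline + attention], y, cv=5).mean()
print(f"baseline: {acc_base:.3f}  with attention features: {acc_attn:.3f}")
```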
We run an eye-tracking experiment to investigate whether players change their gaze patterns and choices after they experience alternative models of choice in one-shot games. In phases 1 and 3, participants play 2 × 2 matrix games with a human counterpart; in phase 2, they apply specific decision rules while playing with a computer with known behavior. We classify participants into types based on their gaze patterns in phase 1 and explore attentional shifts in phase 3, after players were exposed to the alternative decision rules. Results show that less sophisticated players, who focus mainly on their own payoffs, change their gaze patterns towards the evaluation of others’ incentives in phase 3. This attentional shift predicts an increase in equilibrium responses in relevant classes of games. Conversely, cooperative players do not change their visual analysis. Our results shed new light on theories of bounded rationality and on theories of social preferences.
The proactive gain control hypothesis suggests that the global language context regulates lexical access to the bilinguals’ languages during reading. Specifically, with increasing exposure to non-target language cues, bilinguals adjust lexical activation to allow non-target language access from the earliest word recognition stages. Using the invisible boundary paradigm, we examined the flow of lexical activation in 50 proficient Russian-English bilinguals reading in their native Russian while the language context shifted from a monolingual to a bilingual environment. We gradually introduced non-target language cues (the language of the experimenter and of the fillers) while also manipulating the type of word previews (identical, code-switches, unrelated code-switches, pseudowords). The results revealed facilitatory reading effects of code-switches, but only in the later lexical processing stages, and these effects were independent of the global language context manipulation. The results are discussed from the perspective of limitations imposed by script differences on bilingual language control flexibility.
This article presents a systematic review of the use of eye-tracking technology to assess the mental workload of unmanned aircraft system (UAS) operators. With the increasing use of unmanned aircraft in military and civilian operations, understanding the mental workload of these operators has become essential for ensuring mission effectiveness and safety. The review covered 26 studies that explored the application of eye-tracking to capture nuances of visual attention and assess cognitive load in real time. Traditional methods such as self-assessment questionnaires, although useful, showed limitations in terms of accuracy and objectivity, highlighting the need for advanced approaches like eye-tracking. By analysing gaze patterns in simulated environments that reproduce real challenges, it was possible to identify moments of higher mental workload, areas of concentration and sources of distraction. The review also discussed strategies for managing mental workload, including adaptive design of human-machine interfaces. The analysis of the studies revealed a growing relevance and acceptance of eye-tracking as a diagnostic and analytical tool, offering guidelines for the development of interfaces and training that dynamically respond to the cognitive needs of operators. It was concluded that eye-tracking technology can significantly contribute to the optimisation of UAS operations, enhancing both the safety and efficiency of military and civilian missions.
The scarce literature on the processing of internally headed relative clauses (IHRCs) seems to challenge the universality of the subject advantage (e.g., Lau & Tanaka [2021, Glossa: A Journal of General Linguistics, 6(1), 34], for spoken languages; Hauser et al. [2021, Glossa: A Journal of General Linguistics, 6(1), 72], for sign languages). In this study, we investigate the comprehension of subject and object IHRCs in deaf native and non-native signers of Italian Sign Language (LIS) and in hearing LIS/Italian CODAs (children of deaf adults). We use the eye-tracking Visual-only World Paradigm (Hauser & Pozniak [2019, Poster presented at the AMLAP 2019 conference]), recording online and offline responses. Results show that a subject advantage is detected in the online and offline responses of CODAs and in the offline responses of deaf native signers. Results also reveal a higher rate of accuracy in CODAs' responses. We discuss the difference in performance between the two populations in the light of bilingualism-related cognitive advantages and the lack of proper educational training in Italian and LIS for the deaf population in Italy.
Two major hypotheses about children’s ability to comprehend metaphors have dominated the literature on the subject: the literal stage hypothesis vs. the early birds hypothesis (Falkum, 2022). We aim to contribute to this debate by testing children’s ability to comprehend novel metaphors (‘X is a Y’) in Spanish with a child-friendly picture selection task, while also tracking their gaze. Further, given recent findings on the development of metonymy comprehension suggesting a U-shaped developmental curve for this phenomenon (Köder & Falkum, 2020), we aimed to determine the shape of the developmental trajectory of novel metaphor comprehension, and to explore how both types of data (picture selection and gaze behavior) relate to each other. Our results suggest a linear developmental trajectory, with 6-year-olds significantly succeeding in picture selection and consistently looking at the metaphorical target even after question onset.
We assess the feasibility of conducting web-based eye-tracking experiments with children using two methods of webcam-based eye-tracking: automatic gaze estimation with the WebGazer.js algorithm and hand annotation of gaze direction from recorded webcam videos. Experiment 1 directly compares the two methods in a visual-world language task with five- to six-year-old children. Experiment 2 more precisely investigates WebGazer.js’ spatiotemporal resolution with four- to twelve-year-old children in a visual-fixation task. We find that it is possible to conduct web-based eye-tracking experiments with children in both supervised (Experiment 1) and unsupervised (Experiment 2) settings – however, the webcam eye-tracking methods differ in their sensitivity and accuracy. Webcam video annotation is well-suited to detecting fine-grained looking effects relevant to child language researchers. In contrast, WebGazer.js gaze estimates appear noisier and less temporally precise. We discuss the advantages and disadvantages of each method and provide recommendations for researchers conducting child eye-tracking studies online.
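As a minimal illustrative sketch of how such sensitivity comparisons are commonly summarized (under stated assumptions, not the authors’ pipeline), one can compute the proportion of gaze samples falling in a target area of interest per time bin for each method; the data file, column names and AOI boundaries below are hypothetical.

```python
# Illustrative sketch under stated assumptions (not the authors' pipeline): proportion of
# gaze samples on a target area of interest (AOI) per 100 ms bin, computed separately for
# each gaze-estimation method (e.g., "webgazer" vs. "annotation"). File, column names and
# AOI boundaries are hypothetical.
import pandas as pd

samples = pd.read_csv("gaze_samples.csv")  # one row per gaze sample (assumed)

def in_target_aoi(x, y, aoi=(0.0, 0.0, 0.5, 1.0)):
    """True if a normalized gaze coordinate falls inside the target AOI
    (left, top, right, bottom as screen proportions; boundaries assumed)."""
    left, top, right, bottom = aoi
    return (left <= x <= right) and (top <= y <= bottom)

samples["on_target"] = [in_target_aoi(x, y) for x, y in zip(samples["gaze_x"], samples["gaze_y"])]
samples["bin_ms"] = (samples["time_from_onset_ms"] // 100) * 100  # 100 ms time bins

# Mean proportion of target looks per time bin and method, averaged over trials.
prop_looks = samples.groupby(["method", "bin_ms"])["on_target"].mean()
print(prop_looks.head())
```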
Bilinguals activate both of their languages as they process written words, regardless of modality (spoken or signed); these effects have primarily been documented in single-word reading paradigms. We used eye-tracking to determine whether deaf bilingual readers (n = 23) activate American Sign Language (ASL) translations as they read English sentences. Sentences contained a target word and one of two possible prime words: a related prime, which shared phonological parameters (location, handshape or movement) with the target when translated into ASL, or an unrelated prime. The results revealed that first fixation durations and gaze durations (early processing measures) were shorter when target words were preceded by ASL-related primes, but prime condition did not impact later processing measures (e.g., regressions). Further, less-skilled readers showed a larger ASL co-activation effect. Together, the results indicate that ASL co-activation impacts early lexical access and can facilitate reading, particularly for less-skilled deaf readers.
We investigated the retention of surface linguistic information during reading using eye-tracking. Departing from a research tradition that examines differences between meaning retention and verbatim memory, we focused on how different linguistic factors affect the retention of surface linguistic information. We examined three grammatical alternations in German that differ in whether they involve changes in morpho-syntax and/or information structure while leaving propositional meaning unaffected: voice (active vs. passive), adverb positioning, and different realizations of conditional clauses. Single sentences were presented and later repeated, either identical or modified according to the grammatical alternation (with a controlled interval between presentations). Results for native (N = 60) and non-native (N = 58) German participants show longer fixation durations for modified versus unmodified sentences when information-structural changes are involved (voice, adverb position). In contrast, mere surface grammatical changes without a functional component (conditional clauses) did not lead to different reading behavior. Sensitivity to the manipulation was not influenced by language (L1, L2) or repetition interval. The study provides novel evidence that linguistic factors affect verbatim retention and highlights the importance of eye-tracking as a sensitive measure of implicit memory.
Social hierarchical information impacts language comprehension. Nevertheless, the specific process underlying the integration of linguistic and extralinguistic sources of social hierarchical information has not been identified. For example, the Chinese social hierarchical verb 赡养, /shan4yang3/, ‘support: provide for the needs and comfort of one’s elders’, only allows its Agent to have a lower social status than the Patient. Using eye-tracking, we examined the precise time course of the integration of these semantic selectional restrictions of Chinese social hierarchical verbs and extralinguistic social hierarchical information during natural reading. A 2 (Verb Type: hierarchical vs. non-hierarchical) × 2 (Social Hierarchy Sequence: match vs. mismatch) design was constructed to investigate the effect of the interaction on early and late eye-tracking measures. Thirty-two participants (15 males; age range: 18–24 years) read sentences and judged the plausibility of each sentence. The results showed that violations of semantic selectional restrictions of Chinese social hierarchical verbs induced shorter first fixation duration but longer regression path duration and longer total reading time on sentence-final nouns (NP2). These differences were absent under non-hierarchical conditions. The results suggest that a mismatch between linguistic and extralinguistic social hierarchical information is immediately detected and processed.
The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study explores this issue in narrative contexts with and without perspective shift, an important and common linguistic factor in narratives. A sentence-picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift led participants to allocate their first fixations evenly across the different elements in the scene, following the new perspective; (2) the internal–external perspective shift increased the participants’ total fixation count when they read the sentence with the perspective shift; (3) the scene detail depicted in the picture did not influence narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of the situation model and increase readers’ cognitive load during reading. Moreover, scene detail does not appear to be constructed by readers in natural narrative reading.
This study used the visual world paradigm to investigate novel word learning in adults from different language backgrounds and the effects of phonology, homophony, and rest on the outcome. We created Mandarin novel words that varied in type of phonological contrast and homophone status. During the experiment, native (n = 34) and non-native speakers (English; n = 30) learned pairs of novel words and were tested twice, with a 15-minute break in between, which was spent either resting or gaming. In the post-break test of novel word recognition, an interaction appeared between language background, phonology, and homophony: non-native speakers performed less accurately than native speakers only on non-homophones learned in pairs with tone contrasts. Eye movement data indicated that non-native speakers’ processing of tones may be more effortful than their processing of segments while learning homophones, as demonstrated by the time course. Interestingly, no significant effects of rest were observed across language groups; yet after gaming, native speakers achieved higher accuracy than non-native speakers. Overall, this study suggests that Mandarin novel word learning can be affected by participants’ language backgrounds and by the phonological and homophonous features of words. However, the role of short periods of rest in novel word learning requires further investigation.
In this chapter, we discuss the way people read, remember and understand discourse, depending on the type of relations that link discourse segments together. We also illustrate the role of connectives and other discourse signals as elements guiding readers’ interpretation. Throughout the chapter, we review empirical evidence from experiments that involve various methodologies such as offline comprehension tasks, self-paced reading, eye-tracking and event-related potentials. One of the major findings is that not all relations are processed and remembered in the same way. It seems that causal relations play a special role in creating coherence in discourse, as they are processed more quickly and remembered better. Conversely, because they are highly expected, causal relations benefit less from the presence of connectives compared to discontinuous relations like concession and confirmation. Finally, research shows that in their native language, speakers are able to take advantage of all sorts of connectives for discourse processing, even those restricted to the written mode, and those that are ambiguous.
This chapter offers a thorough guide to the techniques and instruments used to understand how the brain develops in humans. It covers key learning goals, such as examining how behaviors change as people grow, how studies of typical and atypical development inform each other, and what we can and can’t learn about brain structure using non-invasive brain scans. It also explains the two main ways we measure brain function. Starting with some background history on methodological tools, this chapter sets the stage for deeper insights into brain development and its impact on our abilities. It highlights the dynamic nature of the field, influenced by both animal studies and rapidly evolving and improving analytical tools and methods. With a focus on methods for studying children, we explore more advanced techniques used in different age groups. Furthermore, this chapter stresses the importance of a scientific mindset and adaptability when new evidence comes to light. It serves as a vital reference for understanding the tools and approaches in developmental cognitive neuroscience.
According to Talmy, in verb-framed languages (e.g., French), the core schema of an event (Path) is lexicalized, leaving the co-event (Manner) in the periphery of the sentence or optional; in satellite-framed languages (e.g., English), the core schema is jointly expressed with the co-event in construals that lexicalize Manner and express Path peripherally. Some studies suggest that such differences are only surface differences that cannot influence the cognitive processing of events, while others maintain that they can constrain both verbal and non-verbal processing. This study investigates whether such typological differences, together with other factors, influence visual processing and decision-making. English and French participants were tested in three eye-tracking tasks involving varied Manner–Path configurations and language to different degrees. Participants had to process a target motion event and choose the variant that looked most like the target (non-verbal categorization), then describe the events (production), and perform a similarity judgment after hearing a target sentence (verbal categorization). The results show massive cross-linguistic differences in production and additional partial language effects in visualization and similarity judgment patterns – highly dependent on the salience and nature of events and the degree of language involvement. The findings support a non-modular approach to language–thought relations and a fine-grained vision of the classic lexicalization/conflation theory.
Combining adjective meaning with the modified noun is particularly challenging for children under three years. Previous research suggests that in processing noun-adjective phrases children may over-rely on noun information, delaying or omitting adjective interpretation. However, the question of whether this difficulty is modulated by semantic differences among (subsective) adjectives is underinvestigated.
A visual-world experiment explores how Italian-learning children (N=38, 2;4–5;3) process noun-adjective phrases and whether their processing strategies adapt based on the adjective class. Our investigation shows that children proficiently integrate noun and adjective semantics. Nevertheless, aligning with previous research, a notable asymmetry is evident in the interpretation of nouns and adjectives, with the latter being integrated more slowly. Remarkably, by testing toddlers across a wide age range, we observe a developmental trajectory in processing, supporting a continuity approach to children’s development. Moreover, we reveal that children exhibit sensitivity to the distinct interpretations associated with each subsective adjective.
While the Talmian dichotomy between satellite-framed and verb-framed languages has been amply studied for motion events, it has been less discussed for locative events, even though Talmy considers these to be included in motion events. This paper discusses such locative events, starting from the significant cross-linguistic variation among Dutch, French, and English. Dutch habitually encodes location via cardinal posture verbs (CPVs; ‘SIT’, ‘LIE’, ‘STAND’) expressing the orientation of the Figure, French prefers orientation-neutral existence verbs like être ‘be’, and English – unlike for motion events – straddles the middle, with a marked preference for be but the possibility to occasionally rely on CPVs. Through the analysis of recognition performances and gazing behaviours in a non-verbal recognition task, this study confirms a (subtle) cognitive impact of different linguistic preferences on the mental representation of locative events. More specifically, the results confirm the continuum suggested by Lemmens (2005, Parcours linguistiques. Domaine anglais (pp. 223–244). Publications de l’Université St Etienne.) for the domain of location, with French on one extreme and Dutch on the other, and English in between, behaving like French in some contexts but like Dutch in others.
This chapter describes how people read and interpret ironical language. Tracking people’s rapid eye movements as they read can be an informative measure of the underlying cognitive and linguistic processes operating during online written language comprehension. Attardo introduces some of the technologies employed in measuring eye movements during reading and suggests why these assessments can provide critical insights into how irony interpretation rapidly unfolds word-by-word as one reads. He reviews various experimental studies on irony and sarcasm understanding that provide explicit empirical tests of different theories of irony (e.g., multistate models, graded salience, parallel-constraint models, predictive processing models). He also explores what the study of eye tracking reveals about the influence of contextual factors and individual differences in irony interpretation, as well as the phenomenon known as “gaze aversion”, when listeners momentarily look away from speakers’ faces when hearing ironic language. Attardo closes his chapter with an important discussion of the sometimes contentious relations between psycholinguistic experiments and philosophical arguments on the ways people use and interpret irony in discourse.
Using the visual world paradigm with printed words, this study investigated the flexibility and representational nature of phonological prediction in real-time speech processing. Native speakers of Mandarin Chinese listened to spoken sentences containing highly predictable target words and viewed a visual array with a critical word and a distractor word on the screen. The critical word was manipulated in four ways: a highly predictable target word, a homophone competitor, a tonal competitor, or an unrelated word. Participants showed a preference for fixating on the homophone competitors before hearing the highly predictable target word. The predicted phonological information waned shortly afterwards but was re-activated later, around the acoustic onset of the target word. Importantly, this homophone bias was observed only when participants were completing a ‘pronunciation judgement’ task, but not when they were completing a ‘word judgement’ task. No effect was found for the tonal competitors. The task modulation effect, combined with the temporal pattern of phonological pre-activation, indicates that phonological prediction can be flexibly generated by top-down mechanisms. The lack of a tonal competitor effect suggests that phonological features such as lexical tone are not independently predicted for anticipatory speech processing.
The present study asked whether oral vocabulary training can facilitate reading in a second language (L2). Fifty L2 speakers of English received oral training over three days on complex novel words, with predictable and unpredictable spellings, composed of novel stems and existing suffixes (i.e., vishing, vishes, vished). After training, participants read the novel word stems for the first time (i.e., trained and untrained), embedded in sentences, while their eye movements were monitored. The eye-tracking data revealed shorter looking times for trained than untrained stems, and for stems with predictable than unpredictable spellings. In contrast to findings with monolingual speakers of English, the interaction between training and spelling predictability was not significant, suggesting that L2 speakers did not generate orthographic skeletons that were robust enough to affect their eye-movement behaviour when seeing the trained novel words for the first time in print.