While online learning allows learners to access materials flexibly and at their own pace, many struggle to self-regulate without supervision. Real-time interventions such as pop-up quizzes, screen flashes, and text warnings aim to improve attentional focus but risk distracting learners and segmenting the learning process. Despite eye-tracking technology being widely used for real-time intervention design, its potential for delayed and personalized interventions remains underexplored. To address this gap, we propose and test an eye-tracking-based video reconstruction and replay (EVRR) method, which offers targeted review at the end of online classes without disrupting the learning process. EVRR shows significant positive effects on learning outcomes compared to self-paced review, especially for learners who are unfamiliar with the concepts.
Legal research is a repeat offender – in the best sense of the term – when it comes to making use of empirical and experimental methods borrowed from other disciplines. We anticipate that the field’s response to developments in eye-tracking research will be no different. Our aim is to aid legal researchers in the uptake of eye-tracking as a method to address questions about the cognitive processes involved in matters of law abidance, legal intervention, and the generation of new legal rules. We discuss the methodological challenges of empirically studying thinking and reasoning as the mechanisms underlying behavior and introduce eye-tracking as our method of choice for obtaining high-resolution traces of visual attention. We delineate the advantages and challenges of this methodological approach, and use a toy example to illustrate which concepts legal researchers can hope to measure. We conclude by outlining several research avenues in legal research that we predict would benefit from adding eye-tracking to the methodological toolbox.
Bilingual adults use semantic context to manage cross-language activation while reading. An open question is how lexical, contextual and individual differences simultaneously constrain this process. We used eye-tracking to investigate how 83 French–English bilinguals read L2-English sentences containing interlingual homographs (chat) and control words (pact). Between subjects, sentences biased target-language or non-target-language meanings (English = conversation; French = feline). Both conditions contained unbiased control sentences. We examined the impact of word-level factors (cross-language frequency) and participant-level factors (L2 age of acquisition (AoA) and reading entropy). There were three key results: (1) L2 readers showed global homograph interference in late-stage reading (total reading times) when English sentence contexts biased non-target French homograph meanings; (2) interference increased as homographs’ non-target language frequency increased and L2 AoA decreased; (3) increased reading entropy globally facilitated early-stage reading (gaze durations) in the non-target language bias condition. Thus, cross-language activation during L2 reading is constrained by multiple factors.
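For readers unfamiliar with the reading-entropy measure used above: it is commonly operationalized as the Shannon entropy of a participant’s proportional reading exposure across languages. A minimal sketch, in which the two-language setup and the proportions are illustrative assumptions rather than values from the study:

```python
import math

def reading_entropy(proportions):
    """Shannon entropy (bits) of a reader's proportional language use."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# Illustrative profiles: a balanced French-English reader vs. a reader
# who does most reading in a single language (invented proportions).
balanced = reading_entropy([0.5, 0.5])  # maximal entropy for two languages
skewed = reading_entropy([0.9, 0.1])    # lower entropy
```

Higher values indicate more balanced engagement with both languages, the individual-difference dimension that the study links to facilitated early-stage reading in the non-target language bias condition.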
We investigate the role of visual attention in risky choice in a rich experimental dataset that includes eye-tracking data. We first show that attention is not reducible to individual and contextual variables, which explain only 20% of attentional variation. We then decompose attentional variation into individual average attention and trial-wise deviations of attention to capture different cognitive processes. Individual average attention varies by individual, and can proxy for individual preferences or goals (as in models of “rational inattention” or goal-directed attention). Trial-wise deviations of attention vary within subjects and depend on contextual factors (as in models of “salience” or stimulus-driven attention). We find that both types of attention predict behavior: average individual attention patterns are correlated with individual levels of loss aversion and capture part of this individual heterogeneity. Adding trial-wise deviations of attention further improves model fit. Our results show that decomposing attention into individual average attention and trial-wise deviations can capture separable cognitive components of decision making and provides a useful tool for economists and researchers in related fields who study decision making and attention.
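The decomposition described above can be sketched in a few lines. This is a schematic illustration with made-up dwell-time shares, not the authors’ dataset or estimation pipeline:

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data: each row is one choice trial, with the
# share of dwell time spent on (say) the loss outcome. Values are random
# placeholders for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(["s1", "s2", "s3"], 4),
    "loss_dwell_share": rng.uniform(0.2, 0.8, 12),
})

# Individual average attention: the subject's mean attention across trials.
df["avg_attention"] = df.groupby("subject")["loss_dwell_share"].transform("mean")

# Trial-wise deviation: how this trial departs from the subject's average.
df["trial_deviation"] = df["loss_dwell_share"] - df["avg_attention"]

# By construction, deviations sum to zero within each subject, so the two
# components partition attentional variation into a between-subject part
# (proxying preferences or goals) and a within-subject, context-driven part.
```

Both components can then enter a choice model as separate regressors, which is the sense in which adding trial-wise deviations further improves model fit.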
In recent decades, many eye-tracking studies have demonstrated that both languages of bilingual speakers are activated while processing phonological input in only one. To date, there have been no eye-tracking co-activation studies assessing word recognition among trilinguals. The present research investigates co-activation in all three languages of 48 Russian (Heritage Language)/Hebrew (Societal Language)/English (Third Language) speakers using a trilingual visual world paradigm experiment. The results paint a picture of a highly interactive multilingual lexicon, in line with findings from prior studies on bilingualism. Although accuracy was not affected by competition conditions, reaction times and eye-fixation proportions showed slow-down and distraction in the presence of cross-linguistic competitors, albeit to different extents across the three experiments, evidencing effects of language dominance and acquisition order. This study makes considerable contributions to our understanding of the dynamics of trilingual language processing and discusses findings in the context of existing bilingual processing models.
We investigate the implications of Salience Theory for the classical preference reversal phenomenon, where monetary valuations contradict risky choices. One proposed factor behind reversals is that monetary valuations of lotteries are inflated when elicited in isolation, and that they should be reduced if an alternative lottery is present and draws attention. We conducted two preregistered experiments, an online choice study and an eye-tracking study, in which we investigated salience and attention in preference reversals, manipulating salience through the presence or absence of an alternative lottery during evaluations. We find that the alternative lottery draws attention, and that fixations on that lottery influence the evaluation of the target lottery as predicted by Salience Theory. The effect, however, is of a modest magnitude and fails to translate into an effect on preference reversal rates in either experiment. We also use transitions (eye movements) across outcomes of different lotteries to study attention to the states of the world underlying Salience Theory, but we find no evidence that larger salience results in more transitions.
We present an interactive eye-tracking study that explores the strategic use of gaze. We analyze gaze behavior in an experiment with four simple games. Each game is either a competitive (hide & seek) game, in which players want to be unpredictable, or a game of common interest, in which players want to be predictable. Gaze is either transmitted in real time to another subject, or it is not transmitted and is therefore non-strategic. We find that subjects are able to interpret non-strategic gaze, obtaining substantially higher payoffs than subjects who do not see gaze. When gaze is transmitted in real time, it becomes more informative in the common interest games, and players predominantly succeed in coordinating on efficient outcomes. In contrast, gaze becomes less informative in the competitive game.
Previous experimental research suggests that individuals apply rules of thumb to a simplified mental model of the “real” decision problem. We claim that this simplification is obtained either by neglecting the other players’ incentives and beliefs or by taking them into consideration only for a subset of game outcomes. We analyze subjects’ eye movements while playing a series of two-person, 3 × 3 one-shot games in normal form. Games within each class differ by a set of descriptive features (i.e., features that can be changed without altering the game’s equilibrium properties). The data show that subjects on average perform partial or non-strategic analysis of the payoff matrix, often ignoring the opponent’s payoffs and rarely performing the steps necessary to detect dominance. Our analysis of eye movements supports the hypothesis that subjects use simple decision rules such as “choose the strategy with the highest average payoff” or “choose the strategy leading to an attractive and symmetric outcome” without (optimally) incorporating knowledge of the opponent’s behavior. Lookup patterns proved to be feature- and game-invariant, heterogeneous across subjects, but stable within subjects. Using a cluster analysis, we find correlations between eye movements and choices; however, applying the Cognitive Hierarchy model to our data, we show that only some of the subjects present both information search patterns and choices compatible with a specific cognitive level. We also find a series of correlations between strategic behavior and individual characteristics such as risk attitude, short-term memory capacity, and mathematical and logical abilities.
Collocations, defined as sequences of frequently co-occurring words, show a processing advantage over novel word combinations in both L1 and L2 speakers. This collocation advantage is mainly observed for canonical configurations (e.g., provide information), but collocations can also occur in variation configurations (e.g., provide some of the information). Variation collocations still show a processing advantage in L1 speakers, but generally not in L2 speakers. The present eye-tracking-while-reading experiment investigated word order variation by passivising collocations (e.g., information was provided) in L1 and advanced L2 speakers of English. Altering word order did not eliminate the collocation advantage in either L1 or L2 speakers. The collocation effect was independent of contextual predictability and modulated by L2 proficiency. Results support the view that collocations are stored and retrieved via semantic representation rather than as holistic form chunks and that collocation processing does not qualitatively differ between L1 and advanced L2 speakers.
This paper reports an expansion of the English as a second language (L2) component of the Multilingual Eye Movement Corpus (MECO L2), an international database of eye movements during text reading. While the previous Wave 1 of the MECO project (Kuperman et al., 2023) contained L2 English reading data from readers with 12 different first-language (L1) backgrounds, the newly collected dataset adds eye-tracking data on English text reading from 13 distinct L1 backgrounds (N = 660), as well as participants’ scores on component skills of English proficiency and information about their demographics and language background and use. The paper reports reliability estimates, descriptive statistics, and correlational analyses as a means of validating the expansion dataset. Consistent with prior literature and MECO Wave 1, trends in the MECO Wave 2 data include a weak correlation between reading comprehension and oculomotor measures of reading fluency and a greater L1–L2 contrast in reading fluency than in reading comprehension. Jointly with Wave 1, the MECO project includes English reading data from more than 1,200 readers representing a diversity of native writing systems (logographic, abjad, abugida, and alphabetic) and 19 distinct L1 backgrounds. We provide multiple pointers to new avenues along which L2 reading researchers can mine this rich, publicly available dataset.
Scholars often face a choice when designing conjoint experiments: to allow for or to exclude “odd” combinations of attribute levels in the randomized conjoint profiles shown to respondents (such as a profile of a Democratic candidate who does not support abortion rights or an individual who is a medical doctor but does not have a graduate degree). While previous work has studied the statistical and theoretical implications of this decision, there has been little effort to analyze how it impacts the behavior of survey respondents. Utilizing eye-tracking, this study considers how respondents’ attention, information search behavior, and choice patterns respond to odd combinations of attributes included in conjoint profiles. We find that the impact of odd attribute-level combinations is minimal: they do not impact attention, search, or choice behavior substantially or consistently. Our conclusion is that scholars should prioritize other considerations—statistical, theoretical, and substantive—when designing conjoint experiments.
Consumers typically overstate their intentions to purchase products relative to their actual rates of purchase, a pattern called “hypothetical bias”. In laboratory choice experiments, we measure participants’ visual attention using mouse-tracking or eye-tracking while they make hypothetical as well as real purchase decisions. We find that participants spent more time looking at both the price and the product image prior to making a real “buy” decision than prior to a real “don’t buy” decision. We demonstrate that including such information about visual attention improves prediction of real buy decisions. This improvement is evident, although small in magnitude, using mouse-tracking data, but is not evident using eye-tracking data.
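To make the prediction claim concrete, here is a toy sketch of how an attention signal can improve a purchase classifier. All numbers are invented for illustration, and the simple median-split rule stands in for whatever statistical model the authors actually fit:

```python
import numpy as np

# Illustrative trials: dwell time (seconds) on the product area, and the
# real purchase decision (1 = buy, 0 = don't buy). Invented values.
dwell = np.array([2.1, 3.4, 0.8, 2.9, 1.0, 3.1, 0.9, 2.7])
bought = np.array([1, 1, 0, 1, 0, 1, 0, 1])

# Baseline without attention data: always predict the majority class.
baseline_acc = max(bought.mean(), 1 - bought.mean())

# Attention-augmented rule: predict "buy" when dwell time exceeds the
# median, mirroring the finding that real "buy" decisions follow longer
# looks at the price and product image.
pred = (dwell > np.median(dwell)).astype(int)
attn_acc = (pred == bought).mean()
```

In this fabricated example the attention-based rule beats the baseline; in the study, the analogous improvement was small with mouse-tracking data and absent with eye-tracking data.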
We run an eye-tracking experiment to investigate whether players change their gaze patterns and choices after they experience alternative models of choice in one-shot games. In phases 1 and 3, participants play 2 × 2 matrix games with a human counterpart; in phase 2, they apply specific decision rules while playing against a computer with known behavior. We classify participants into types based on their gaze patterns in phase 1 and explore attentional shifts in phase 3, after players have been exposed to the alternative decision rules. The results show that less sophisticated players, who focus mainly on their own payoffs, shift their gaze towards the evaluation of others’ incentives in phase 3. This attentional shift predicts an increase in equilibrium responses in relevant classes of games. Conversely, cooperative players do not change their visual analysis. Our results shed new light on theories of bounded rationality and on theories of social preferences.
The proactive gain control hypothesis suggests that the global language context regulates lexical access to a bilingual’s languages during reading. Specifically, with increasing exposure to non-target-language cues, bilinguals adjust lexical activation to allow non-target-language access from the earliest word recognition stages. Using the invisible boundary paradigm, we examined the flow of lexical activation in 50 proficient Russian–English bilinguals reading in their native Russian while the language context shifted from a monolingual to a bilingual environment. We gradually introduced non-target-language cues (the language of the experimenter and of filler items) while also manipulating the type of word preview (identical, code-switch, unrelated code-switch, pseudoword). The results revealed facilitatory reading effects of code-switches, but only in later lexical processing stages, and these effects were independent of the global language context manipulation. The results are discussed from the perspective of the limitations that script differences impose on the flexibility of bilingual language control.
This article presents a systematic review on the use of eye-tracking technology to assess the mental workload of unmanned aircraft system (UAS) operators. With the increasing use of unmanned aircraft in military and civilian operations, understanding the mental workload of these operators has become essential for ensuring mission effectiveness and safety. The review covered 26 studies that explored the application of eye-tracking to capture nuances of visual attention and assess cognitive load in real time. Traditional methods such as self-assessment questionnaires, although useful, showed limitations in terms of accuracy and objectivity, highlighting the need for advanced approaches like eye-tracking. By analysing gaze patterns in simulated environments that reproduce real challenges, it was possible to identify moments of higher mental workload, areas of concentration and sources of distraction. The review also discussed strategies for managing mental workload, including adaptive design of human-machine interfaces. The analysis of the studies revealed a growing relevance and acceptance of eye-tracking as a diagnostic and analytical tool, offering guidelines for the development of interfaces and training that dynamically respond to the cognitive needs of operators. It was concluded that eye-tracking technology can significantly contribute to the optimisation of UAS operations, enhancing both the safety and efficiency of military and civilian missions.
The scarce literature on the processing of internally headed relative clauses (IHRCs) seems to challenge the universality of the subject advantage (e.g., Lau & Tanaka [2021, Glossa: A Journal of General Linguistics, 6(1), 34], for spoken languages; Hauser et al. [2021, Glossa: A Journal of General Linguistics, 6(1), 72], for sign languages). In this study, we investigate the comprehension of subject and object IHRCs in Italian Sign Language (LIS) deaf native and non-native signers, and hearing LIS/Italian CODAs (children of deaf adults). We use the eye-tracking Visual-only World Paradigm (Hauser & Pozniak [2019, Poster presented at the AMLAP 2019 conference]) recording online and offline responses. Results show that a subject advantage is detected in the online and offline responses of CODAs and in the offline responses of deaf native signers. Results also reveal a higher rate of accuracy in CODAs' responses. We discuss the difference in performance between the two populations in the light of bilingualism-related cognitive advantages, and lack of proper educational training in Italian and LIS for the deaf population in Italy.
Identifying the absence of situation awareness (SA) in air traffic controllers is critical, since it directly affects their hazard perception. This study introduces and validates a multimodal methodology employing electroencephalography (EEG) and eye-tracking to investigate SA variation in specific air traffic control contexts. Data from 28 participants executing an experiment involving three different SA-probe tests illustrated the conceptual relationship between EEG and eye-tracking indicators and SA variations, using behavioural data as a proxy. The results indicated that both EEG and eye-tracking metrics correlated positively with the SA levels required: spectral power in the β (13–30 Hz) and γ (30–50 Hz) bands, along with fixation/saccade-based indicators and pupil dilation, increased in response to higher SA levels. This research has substantial implications for investigating SA through a human-centric approach via psychophysiological indicators: it reveals the intrinsic interactions between the human capability envelope and SA, and contributes to the development of a real-time monitoring system of SA variations for air transportation safety research.
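As a concrete illustration of the EEG side of such a pipeline, the sketch below estimates power in the β and γ bands from a synthetic one-channel signal via the periodogram. The sampling rate, signal composition, and band-power definition are assumptions for illustration, not the study’s actual processing chain:

```python
import numpy as np

fs = 250                      # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4 s of data
rng = np.random.default_rng(1)

# Synthetic "EEG": a 20 Hz (beta-band) component, a weaker 40 Hz
# (gamma-band) component, and white noise.
eeg = (np.sin(2 * np.pi * 20 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t)
       + 0.1 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2   # unnormalized periodogram

def band_power(lo, hi):
    """Summed periodogram power within [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

beta = band_power(13, 30)    # beta band (13-30 Hz)
gamma = band_power(30, 50)   # gamma band (30-50 Hz)
```

Band-power estimates computed this way, per channel and per time window, are the kind of EEG features that can be correlated with SA probes and eye-tracking indicators.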
Two major positions on children’s ability to comprehend metaphors have dominated the literature: the literal stage hypothesis vs. the early birds hypothesis (Falkum, 2022). We aim to contribute to this debate by testing children’s ability to comprehend novel metaphors (‘X is a Y’) in Spanish with a child-friendly picture selection task while also tracking their gaze. Further, given recent findings on the development of metonymy comprehension suggesting a U-shaped developmental curve for that phenomenon (Köder & Falkum, 2020), we aimed to determine the shape of the developmental trajectory of novel metaphor comprehension, and to explore how the two types of data (picture selection and gaze behavior) relate to each other. Our results suggest a linear developmental trajectory, with 6-year-olds succeeding significantly in picture selection and consistently looking at the metaphorical target even after question onset.
We assess the feasibility of conducting web-based eye-tracking experiments with children using two methods of webcam-based eye-tracking: automatic gaze estimation with the WebGazer.js algorithm and hand annotation of gaze direction from recorded webcam videos. Experiment 1 directly compares the two methods in a visual-world language task with five- to six-year-old children. Experiment 2 investigates WebGazer.js’ spatiotemporal resolution more precisely with four- to twelve-year-old children in a visual-fixation task. We find that it is possible to conduct web-based eye-tracking experiments with children in both supervised (Experiment 1) and unsupervised (Experiment 2) settings; however, the webcam eye-tracking methods differ in their sensitivity and accuracy. Webcam video annotation is well suited to detecting the fine-grained looking effects relevant to child language researchers. In contrast, WebGazer.js gaze estimates appear noisier and less temporally precise. We discuss the advantages and disadvantages of each method and provide recommendations for researchers conducting child eye-tracking studies online.
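The sensitivity-and-accuracy comparison above rests on standard eye-tracking quality metrics: spatial accuracy (mean offset from a known fixation target) and precision (dispersion of the samples themselves). A minimal sketch with hypothetical pixel coordinates; the target location and the WebGazer.js-style estimates are invented for illustration:

```python
import math

# Known on-screen fixation target and hypothetical gaze estimates
# (x, y in pixels) recorded while a child fixates the target.
target = (640, 360)
estimates = [(605, 390), (660, 340), (630, 372), (655, 355)]

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Accuracy: mean Euclidean offset of the estimates from the target.
accuracy_px = sum(dist(e, target) for e in estimates) / len(estimates)

# Precision: RMS dispersion of the estimates around their own centroid
# (poor precision means noisy estimates even if centered on the target).
cx = sum(e[0] for e in estimates) / len(estimates)
cy = sum(e[1] for e in estimates) / len(estimates)
precision_px = math.sqrt(
    sum(dist(e, (cx, cy)) ** 2 for e in estimates) / len(estimates)
)
```

Reporting both metrics, here in pixels but often converted to degrees of visual angle, is what makes a claim like “noisier and less temporally precise” quantifiable across methods.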
Bilinguals activate both of their languages as they process written words, regardless of modality (spoken or signed); these effects have primarily been documented in single-word reading paradigms. We used eye-tracking to determine whether deaf bilingual readers (n = 23) activate American Sign Language (ASL) translations as they read English sentences. Sentences contained a target word and one of two possible prime words: a related prime, which shared phonological parameters (location, handshape or movement) with the target when translated into ASL, or an unrelated prime. The results revealed that first fixation durations and gaze durations (early processing measures) were shorter when target words were preceded by ASL-related primes, but prime condition did not impact later processing measures (e.g., regressions). Further, less-skilled readers showed a larger ASL co-activation effect. Together, the results indicate that ASL co-activation impacts early lexical access and can facilitate reading, particularly for less-skilled deaf readers.