The English past tense contains pockets of variation, where regular and irregular forms compete (e.g. learned/learnt, weaved/wove). Individuals vary considerably in the degree to which they prefer irregular forms. This article examines the degree to which individuals may converge on their regularization patterns and preferences. We report on a novel experimental methodology, using a cooperative game involving nonce verbs. Analysis of participants' postgame responses indicates that their behavior shifted in response to an automated co-player's preferences, on two dimensions. First, players regularize more after playing with peers with high regularization rates, and less after playing with peers with low regularization rates. Second, players' overall patterns of regularization are also affected by the particular distribution of (ir)regular forms produced by the peer.
We model the effects of the exposure on participants' morphological preferences, using both a rule-based model and an instance-based analogical model (Nosofsky 1988, Albright & Hayes 2003). Both models contribute separately and significantly to explaining participants' pre-exposure regularization processes. However, only the instance-based model captures the shift in preferences that arises after exposure to the peer. We argue that the results suggest an account of morphological convergence in which new word forms are stored in memory, and on-line generalizations are formed over these instances.
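To make the instance-based idea concrete, here is a minimal Python sketch in the spirit of the cited models (Nosofsky 1988, Albright & Hayes 2003): a nonce verb's probability of receiving an irregular form is its summed similarity to stored irregular instances relative to all stored instances, and storing a peer's productions shifts that probability. The feature encoding, similarity parameter, and toy lexicon are illustrative assumptions, not the article's actual implementation.

```python
import math

# Hypothetical stored instances: (phonological feature vector, past-tense class);
# "irreg" marks vowel-change forms, "reg" marks -ed forms.
lexicon = [
    ((1, 0, 1, 0), "irreg"),
    ((1, 0, 0, 1), "irreg"),
    ((0, 1, 1, 0), "reg"),
    ((0, 1, 0, 1), "reg"),
    ((1, 1, 0, 0), "reg"),
]

def similarity(x, y, sensitivity=1.0):
    """Similarity decays exponentially with (city-block) feature distance."""
    distance = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-sensitivity * distance)

def p_irregular(nonce, memory, sensitivity=1.0):
    """Summed similarity to stored irregulars, relative to all stored instances."""
    sims = [(similarity(nonce, feats, sensitivity), cls) for feats, cls in memory]
    irregular = sum(s for s, cls in sims if cls == "irreg")
    return irregular / sum(s for s, _ in sims)

nonce = (1, 0, 1, 1)
print(p_irregular(nonce, lexicon))                               # before exposure
# Storing a peer's (irregular-heavy) productions shifts later generalization:
print(p_irregular(nonce, lexicon + [((1, 0, 1, 0), "irreg")]))   # after exposure
```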
Sequences such as [mb, kp, ts] pattern as complex segments in some languages but as clusters of simple consonants in others. What evidence is used to learn their language-specific status? We present an implemented computational model that starts with simple consonants and builds more complex representations by tracking statistical distributions of consonant sequences. This strategy succeeds in a wide range of cases, both in languages that supply clear phonotactic arguments for complex segments and in languages where the evidence is less clear. We then turn to the typological parallels between complex segments and consonant clusters: both tend to be limited in size and composition. We suggest that our approach allows the parallels to be reconciled. Finally, we compare our model with alternatives: learning complex segments from phonotactics and from phonetics.
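As a rough illustration of the distribution-tracking strategy (not the article's actual model), the sketch below merges consonant sequences whose observed frequency in a toy corpus far exceeds what independent combination of the two consonants would predict. The toy corpus, the observed/expected statistic, and the threshold are assumptions for exposition only.

```python
from collections import Counter

corpus = ["mba", "mbo", "amba", "kpa", "kpo", "ata", "aka", "apa"]

def segment_counts(words):
    units, bigrams = Counter(), Counter()
    for w in words:
        segs = list(w)
        units.update(segs)
        bigrams.update(zip(segs, segs[1:]))
    return units, bigrams

def merge_candidates(words, threshold=2.0, vowels=set("aeiou")):
    """Consonant bigrams whose observed/expected ratio exceeds the threshold."""
    units, bigrams = segment_counts(words)
    n_units, n_bigrams = sum(units.values()), sum(bigrams.values())
    candidates = {}
    for (c1, c2), observed in bigrams.items():
        if c1 in vowels or c2 in vowels:
            continue
        expected = (units[c1] / n_units) * (units[c2] / n_units) * n_bigrams
        if expected > 0 and observed / expected > threshold:
            candidates[c1 + c2] = round(observed / expected, 2)
    return candidates

# On this toy corpus, [mb] and [kp] pass the criterion; vowel-adjacent sequences do not.
print(merge_candidates(corpus))
```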
To successfully learn language—and more specifically how to use verbs correctly—children must solve the linking problem: they must learn the mapping between the thematic roles specified by a verb's lexical semantics and the syntactic argument positions specified by a verb's syntactic frame. We use an empirically grounded and integrated quantitative framework involving corpus analysis, experimental meta-analysis, and computational modeling to implement minimally distinct versions of mapping approaches that (i) either are specified a priori or develop during language acquisition, and (ii) rely on either an absolute or a relative thematic role system. Using successful verb class learning as an evaluation metric, we embed each approach within a concrete model of the acquisition process and see which learning assumptions are able to match children's verb-learning behavior at three, four, and five years old. Our current results support a trajectory where children (i) may not have prior expectations about linking patterns between ages three and five, and (ii) begin with a relative thematic system, progressing toward optionality between a relative and an absolute system. We discuss implications of our results for both theories of syntactic representation and theories of how those representations are acquired. We also discuss the broader contribution of this study as a concrete modeling framework that can be updated with new linking theories, corpora, and experimental results.
Language learners are often faced with a scenario where the data allow multiple generalizations, even though only one is actually correct. One promising solution to this problem is that children are equipped with helpful learning strategies that guide the types of generalizations made from the data. Two successful approaches in recent work for identifying these strategies have involved (i) expanding the set of informative data to include INDIRECT POSITIVE EVIDENCE, and (ii) using observable behavior as a target state for learning. We apply both of these ideas to the case study of English anaphoric one, using computationally modeled learners that assume one’s antecedent is the same syntactic category as one and form their generalizations based on realistic data. We demonstrate that a learner that is biased to include indirect positive evidence coming from other pronouns in English can generate eighteen-month-old looking-preference behavior. Interestingly, we find that the knowledge state responsible for this target behavior is a context-dependent representation for anaphoric one, rather than the adult representation, but this immature representation can suffice in many communicative contexts involving anaphoric one. More generally, these results suggest that children may be leveraging broader sets of data to make the syntactic generalizations leading to their observed behavior, rather than selectively restricting their input. We additionally discuss the components of the learning strategies capable of producing the observed behavior, including their possible origin and whether they may be useful for making other linguistic generalizations.
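The following toy Python sketch illustrates only the general shape of such a probabilistic learner: competing hypotheses about one's antecedent category are updated by Bayes' rule as data points arrive, including indirect positive evidence from other pronouns. The hypothesis labels, priors, and likelihoods are hypothetical placeholders, not the values or data used in the article.

```python
# Two toy hypotheses about the antecedent category of anaphoric "one".
priors = {"antecedent_is_N'": 0.5, "antecedent_is_N0": 0.5}

# Hypothetical likelihoods: how probable each toy data type is under each hypothesis.
likelihoods = {
    "one_with_modifier":      {"antecedent_is_N'": 0.7, "antecedent_is_N0": 0.3},
    "other_pronoun_evidence": {"antecedent_is_N'": 0.6, "antecedent_is_N0": 0.4},
}

def update(posterior, data_point):
    """One step of Bayes' rule: p(h | d) is proportional to p(d | h) * p(h)."""
    unnormalized = {h: likelihoods[data_point][h] * p for h, p in posterior.items()}
    z = sum(unnormalized.values())
    return {h: v / z for h, v in unnormalized.items()}

posterior = dict(priors)
for d in ["one_with_modifier", "other_pronoun_evidence", "other_pronoun_evidence"]:
    posterior = update(posterior, d)
print(posterior)   # indirect evidence from other pronouns shifts the posterior
```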
Computational probabilistic modeling is increasingly popular in linguistics, but its relationship with linguistic theory is ambivalent. We argue here for the potential benefit of theory-driven statistical modeling, based on a case study situated at the semantics-pragmatics interface. Using data from a novel experiment, we employ Bayesian model comparison to evaluate the predictive adequacy of four probabilistic pragmatic models of utterance and interpretation choice that differ in the extent to which, and the manner in which, grammatically generated candidate readings are taken into account. The data provide strong evidence for the idea that the full range of potential readings made available by recently popular grammatical approaches to scalar-implicature computation might be needed, and that classical Gricean reasoning may help manage the manifold ambiguity that these grammatical approaches introduce. The case study thereby shows a way of bridging linguistic theory and empirical data with the help of probabilistic pragmatic modeling as a linking function.
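For readers unfamiliar with probabilistic pragmatic models, the sketch below gives a generic rational-speech-act-style example in which the listener reasons over grammatically supplied candidate readings of "some". It is not one of the four models compared in the article; the toy lexicon, reading priors, and parameter values are illustrative assumptions.

```python
states = [1, 2, 3]                      # how many of three objects have the property
utterances = ["some", "all"]

# Candidate readings made available by the (toy) grammar: plain vs. exhaustified "some".
meanings = {
    ("some", "plain"): lambda s: s >= 1,
    ("some", "exh"):   lambda s: 1 <= s < 3,
    ("all", "plain"):  lambda s: s == 3,
}
reading_prior = {"some": {"plain": 0.5, "exh": 0.5}, "all": {"plain": 1.0}}

def normalize(xs):
    z = sum(xs)
    return [x / z for x in xs] if z > 0 else xs

def literal_listener(utt, reading):
    """Uniform over the states where the chosen reading is true."""
    return normalize([1.0 if meanings[(utt, reading)](s) else 0.0 for s in states])

def speaker(state, alpha=4.0):
    """Soft-maximizes informativity, marginalizing over the grammar's readings."""
    scores = []
    for u in utterances:
        p_true = sum(p * literal_listener(u, r)[states.index(state)]
                     for r, p in reading_prior[u].items())
        scores.append(p_true ** alpha)
    return normalize(scores)

def pragmatic_listener(utt):
    """Bayesian inversion of the speaker (uniform prior over states)."""
    return normalize([speaker(s)[utterances.index(utt)] for s in states])

print(dict(zip(states, pragmatic_listener("some"))))  # probability shifts away from the "all" state
```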
Ambridge, Pine, and Lieven could have provided a stronger argument for their conclusion, which postulated that innate universal-grammar-specified knowledge does not simplify the language learning task, had they not paid so much attention to the Chomskyan paradigm. I argue that poverty-of-the-stimulus arguments do not take into account that children are opportunistic learners employing multiple strategies, that they do not accomplish individual tasks sequentially but acquire (partial) knowledge about multiple domains simultaneously, and that they do not acquire perfect knowledge of language. Furthermore, work in formal linguistics suggests that the Chomskyan paradigm is internally incoherent and that the formalism of the Chomskyan framework lacks mathematical precision, making it difficult to evaluate its predictions. Given that linguistics ought to provide crucial input for language acquisition research, more attention needs to be paid to non-Chomskyan work in linguistics.
I completely agree with Ambridge, Pine, and Lieven (AP&L) that anyone proposing a learning-strategy component needs to demonstrate precisely how that component helps solve the language acquisition task. To this end, I discuss how computational modeling is a tool well suited to doing exactly this, one that has the added benefit of highlighting hidden assumptions underlying learning strategies. I also suggest general criteria relating to utility and usability that we can use to evaluate potential learning strategies. As a response to AP&L's request for Universal Grammar (UG) components that actually do work, I additionally provide a review of one potential UG component that is part of a successful learning strategy for syntactic islands, and that satisfies the evaluation criteria I propose.
Comprehending and producing words is a natural process for human speakers. In linguistic theory, investigating this process formally and computationally is often done by focusing on forms only. By moving beyond the world of forms, we show in this study that the discriminative lexicon (DL) model—operating with word comprehension as a mapping of form onto meaning, and word production as a mapping of meaning onto form—generates accurate predictions about what meanings listeners understand and what forms speakers produce. Furthermore, we show that measures derived from the computational model are predictive for human reaction times. Although mathematically very simple, the linear mappings between form and meaning posited by our model are powerful enough to capture the complexity and productivity of a Semitic language with a complex hybrid morphological system.
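The core of the DL architecture, linear mappings between form vectors and meaning vectors estimated by least squares, can be sketched in a few lines of Python. The tiny form and meaning matrices below are illustrative assumptions rather than the article's data; only the shape of the computation is meant to match the description above.

```python
import numpy as np

# Rows = words; columns = form cues (e.g. sublexical n-grams) or semantic dimensions.
C = np.array([[1, 0, 1, 0],      # form matrix: which cues occur in each word
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=float)
S = np.array([[0.9, 0.1, 0.0],   # semantic matrix: meaning vector for each word
              [0.2, 0.8, 0.1],
              [0.0, 0.3, 0.9]])

# Comprehension mapping F (form to meaning): C @ F ~ S.
# Production mapping G (meaning to form): S @ G ~ C. Both by least squares.
F, *_ = np.linalg.lstsq(C, S, rcond=None)
G, *_ = np.linalg.lstsq(S, C, rcond=None)

S_hat = C @ F    # predicted meanings (comprehension)
C_hat = S @ G    # predicted forms (production)

# Model-derived measures such as the fit between a predicted vector and its target
# are the kind of quantity that can then be related to human reaction times.
for i in range(len(C)):
    r = np.corrcoef(S_hat[i], S[i])[0, 1]
    print(f"word {i}: comprehension fit (correlation) = {r:.2f}")
```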
In this commentary, we focus on the linking problem Ambridge, Pine, and Lieven identify. Instead of taking a stance on the issue of universal grammar itself, we adopt an epistemological and methodological perspective on language acquisition research. We argue that the problem, linking the input to preexisting representations, constitutes just a small part of a larger methodological problem, namely how to link the input a learner receives, through a set of learning mechanisms and possibly innate representations, to the behavior the learner is producing, whether in spontaneous production or in laboratory experiments. Currently, none of the existing proposals provide an evaluated, or even a testable, account of the full process, that is, an input-output model of linguistic ontogeny. Although the focus on phenomena in isolation, probably an effect of the prevalence of the experimental method, allows the researcher some degree of control, it also distracts from the understanding of how the different mechanisms behave and interact. Our proposed solution is a more Holistic approach to the problem in which the learning mechanisms and their interaction are made fully Explicit, and in which the predicted behavior of the learner is (more) Globally evaluated. Computational modeling provides exactly the tools appropriate for this task, thereby furthering more precise and testable theories of the learning mechanisms involved.
This article reports on an experiment with miniature artificial languages that provides support for a synthesis of ideas from usage-based phonology (Bybee 1985, 2001, Nesset 2008) and harmonic grammar (Legendre et al. 1990, Smolensky & Legendre 2006). All miniature artificial languages presented to subjects feature velar palatalization (k → tʃ) before a plural suffix, -i. I show that (i) examples of -i simply attaching to a [tʃ]-final stem help palatalization (supporting t → tʃi over t → ti and p → tʃi over p → pi), a finding that provides specific support for product-oriented schemas like ‘plurals should end in [tʃi]’; (ii) learners tend to perseverate on the form they know, leveling stem changes, which provides support for paradigm-uniformity constraints in favor of retaining gestures composing the known form, for example, ‘keep labiality’; and (iii) the same plural schema helps untrained singular-plural mappings more than it helps trained ones. This result is accounted for by proposing that schemas and paradigm-uniformity constraints clamor for candidate plural forms that obey them. Given that competition is between candidate outputs, the same schema provides more help to candidates that violate strong paradigm-uniformity constraints and are therefore weak relative to competitor candidates. A computational model of schema extraction is proposed.
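A minimal MaxEnt-style harmonic grammar sketch of this kind of competition is given below: a product-oriented schema constraint and a paradigm-uniformity constraint assign weighted penalties to candidate plurals, and candidate probabilities follow from their harmonies. The constraint weights, candidates, and violation counts are illustrative assumptions, not the fitted model reported in the article.

```python
import math

weights = {"PLURAL_ENDS_IN_tSi": 2.0, "KEEP_SINGULAR_GESTURES": 1.5}

# Candidate plurals for a singular like [pat], with violation counts per constraint.
candidates = {
    "pati":  {"PLURAL_ENDS_IN_tSi": 1, "KEEP_SINGULAR_GESTURES": 0},  # faithful, no schema match
    "patSi": {"PLURAL_ENDS_IN_tSi": 0, "KEEP_SINGULAR_GESTURES": 1},  # palatalized, matches schema
}

def maxent_probs(candidates, weights):
    """p(candidate) is proportional to exp(-sum_k w_k * violations_k)."""
    harmonies = {c: -sum(weights[k] * v for k, v in viol.items())
                 for c, viol in candidates.items()}
    z = sum(math.exp(h) for h in harmonies.values())
    return {c: math.exp(h) / z for c, h in harmonies.items()}

print(maxent_probs(candidates, weights))
# Strengthening the schema weight (e.g. after more [tʃi]-final training plurals)
# shifts probability toward the palatalized candidate, and the help matters most
# where a strong paradigm-uniformity penalty would otherwise make it weak.
```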
A bilingual's two languages can interact in their mind, but the mechanism of this interaction is still open to debate. In this article we employ a variant of GRADIENT SYMBOLIC COMPUTATION (GSC; Smolensky et al. 2014) to model the code-switched utterances of unbalanced Dutch-English bilinguals. We aimed to evaluate GSC as an appropriate architecture to model bilingual code-switching grammars, and to explore the extent of variability within and across individual bilingual speakers. The results indicate that the structure of individual grammars can vary widely from the structure of the grammar that emerges when the population is studied as a whole. We interpret these results as evidence that individual variation characterizes not only language processing (e.g. Fricke et al. 2019, Kidd et al. 2018), but also the structure of bilingual grammar itself.
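As a deliberately simplified illustration (not GSC proper, and not the article's implementation), the sketch below scores candidate structures for a code-switched clause under a weighted blend of two toy grammars, with a gradient activation parameter controlling how much each grammar contributes. All constraints, weights, and candidates are placeholders introduced only to show the general idea of blended, speaker-specific grammars.

```python
import math

dutch_weights   = {"VERB_FINAL_IN_EMBEDDED": 3.0, "SVO": 0.5}
english_weights = {"VERB_FINAL_IN_EMBEDDED": 0.5, "SVO": 3.0}

candidates = {
    "...dat hij the book reads": {"VERB_FINAL_IN_EMBEDDED": 0, "SVO": 1},
    "...dat hij reads the book": {"VERB_FINAL_IN_EMBEDDED": 1, "SVO": 0},
}

def blended_probs(candidates, activation_dutch=0.7):
    """Blend the two grammars' weights, then choose among candidates MaxEnt-style."""
    w = {k: activation_dutch * dutch_weights[k]
            + (1 - activation_dutch) * english_weights[k]
         for k in dutch_weights}
    harmonies = {c: -sum(w[k] * v for k, v in viol.items())
                 for c, viol in candidates.items()}
    z = sum(math.exp(h) for h in harmonies.values())
    return {c: math.exp(h) / z for c, h in harmonies.items()}

# Different speakers can be modeled with different activation settings, which is
# one way individual variation in code-switching preferences could be captured.
print(blended_probs(candidates, activation_dutch=0.7))
print(blended_probs(candidates, activation_dutch=0.3))
```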
This study investigated how alcohol intoxication and negative mood affect decision-making in the Iowa Gambling Task (IGT) in a high-risk sample of adults who regularly drink alcohol. Using a 2×2 between-subjects design (N=160), we experimentally manipulated alcohol intoxication (target BrAC=.06% vs. .00%) and mood (negative vs. neutral) and employed computational modeling to identify underlying mechanisms. Results showed that alcohol intoxication impaired IGT performance, with intoxicated participants selecting fewer cards from advantageous decks (estimate=−8.12, 95% CI=[−12.83, −3.23]). Evidence for an effect of negative mood was moderate but inconclusive (estimate=−4.82, 95% CI=[−9.66, 0.02]). Computational modeling revealed that both alcohol (estimate=.13, 95% CI=[.05, .21]) and negative mood (estimate=.12, 95% CI=[.04, .20]) increased reward learning rates without affecting punishment learning rates. No interaction effects were observed. These findings suggest that impaired decision-making during alcohol intoxication and negative mood states stems from heightened sensitivity to immediate rewards rather than diminished sensitivity to punishments, and that these effects are additive rather than interactive, providing novel insights into the computational mechanisms underlying alcohol-related decision-making deficits.
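The sketch below shows the general kind of reinforcement-learning model in which such learning-rate effects are expressed: deck values are updated with separate learning rates for rewarding and punishing prediction errors, and choices follow a softmax rule. The payoff schedule and parameter values are illustrative assumptions; the study's actual model specification may differ.

```python
import math
import random

def softmax(values, temperature=1.0):
    exps = [math.exp(v / temperature) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

def payoff(deck):
    """Toy payoff schedule: decks 0-1 disadvantageous, decks 2-3 advantageous."""
    if deck < 2:
        return 100 + (-250 if random.random() < 0.5 else 0)
    return 50 + (-50 if random.random() < 0.5 else 0)

def simulate_igt(n_trials=100, lr_reward=0.3, lr_punish=0.1, temperature=20.0):
    q = [0.0] * 4                        # learned value of each deck
    advantageous = 0
    for _ in range(n_trials):
        deck = random.choices(range(4), weights=softmax(q, temperature))[0]
        delta = payoff(deck) - q[deck]   # prediction error
        # Separate learning rates for rewarding vs. punishing prediction errors.
        q[deck] += (lr_reward if delta >= 0 else lr_punish) * delta
        advantageous += deck >= 2
    return advantageous

random.seed(1)
# A higher reward learning rate can pull choices toward the large immediate wins
# of the disadvantageous decks, lowering the count of advantageous choices.
print(simulate_igt(lr_reward=0.1, lr_punish=0.1))
print(simulate_igt(lr_reward=0.6, lr_punish=0.1))
```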
Linguistic illusions are cases where we systematically misunderstand, misinterpret, or fail to notice anomalies in the linguistic input, despite our competencies. Revealing fresh insights into how the mind represents and processes language, this book provides a comprehensive overview of research on this phenomenon, with a focus on agreement attraction, the most widely studied linguistic illusion. Integrating experimental, computational, and formal methods, it shows how the systematic study of linguistic illusions offers new insights into the cognitive architecture of language and language processing mechanisms. It synthesizes past findings and proposals, offers new experimental and computational data, and identifies directions for future research, helping readers navigate the rapidly growing body of research and conflicting findings. With clear explanations and cross-disciplinary appeal, it is an invaluable guide for both seasoned researchers and newcomers seeking to deepen their understanding of language processing, making it a vital resource for advancing the field.
This textbook introduces the fundamentals of MATLAB for behavioral sciences in a concise and accessible way. Written for those with or without computer programming experience, it works progressively from fundamentals to applied topics, culminating in in-depth projects. Part I covers programming basics, ensuring a firm foundation of knowledge moving forward. Difficult topics, such as data structures and program flow, are then explained with examples from the behavioral sciences. Part II introduces projects for students to apply their learning directly to real-world problems in computational modelling, data analysis, and experiment design, with an exploration of Psychtoolbox. Accompanied by online code and datasets, extension materials, and additional projects, with test banks, lecture slides, and a manual for instructors, this textbook represents a complete toolbox for both students and instructors.
Chapter 6 revisits the grammatical asymmetry, a key effect in agreement attraction research. The grammatical asymmetry refers to the phenomenon where attraction effects occur in ungrammatical sentences but not in grammatical ones. This chapter evaluates existing evidence, particularly in response to challenges raised by Hammerly et al. (2019), who claimed that the empirical evidence for the asymmetry is not particularly strong and that the effect could be a product of response bias rather than an inherent property of agreement attraction. Through a detailed review of over ninety experiments, the chapter finds strong support for a grammatical asymmetry, as predicted by the retrieval-based account. Additionally, it explores how altering the ratio of ungrammatical to grammatical fillers in experiments can influence retrieval mechanisms and artificially produce a symmetrical attraction profile, yielding the response bias effect observed by Hammerly et al. These findings suggest that a symmetrical profile could emerge from increased uncertainty in memory retrieval rather than faulty linguistic representations, offering a nuanced interpretation of existing behavioral findings.
Chapter 12 presents the first application of MATLAB to behavioral sciences: modeling behavioral phenomena using MATLAB. Students learn basic computational modeling principles before applying their programming knowledge from Chapters 1 to 11 to model two types of behavior. First, classical conditioning is modeled using the Rescorla-Wagner model, which is used to make predictions about how an organism will react to multiple stimuli when presented together, such as in the classic case of Pavlov’s dog who was trained to salivate to the sound of a bell. Next, foraging behavior in animals is modeled, wherein agents forage for food on patches of resources, learning from experience when to exploit their current patch or explore in search of more food.
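The chapter's own code is in MATLAB; purely as an illustration of the principle, the Python sketch below applies the Rescorla-Wagner update to a bell-then-compound training sequence, showing how the model handles multiple stimuli presented together. Parameter values are illustrative.

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """trials: list of (stimuli_present, reward_delivered) pairs."""
    V = {}                                   # associative strength per stimulus
    for stimuli, reward in trials:
        prediction = sum(V.get(s, 0.0) for s in stimuli)
        error = (lam if reward else 0.0) - prediction
        # Every stimulus present on the trial is updated by the shared prediction error.
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * beta * error
    return V

# Conditioning to a bell alone, then to a bell+light compound (a blocking-style design):
trials = [(("bell",), True)] * 10 + [(("bell", "light"), True)] * 10
print(rescorla_wagner(trials))   # the light gains little strength: the bell already predicts food
```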
How do we understand any sentence, from the most ordinary to the most creative? The traditional assumption is that we rely on formal rules combining words (compositionality). However, psycho- and neuro-linguistic studies point to a linguistic representation model that aligns with the assumptions of Construction Grammar: there is no sharp boundary between stored sequences and productive patterns. Evidence suggests that interpretation alternates compositional (incremental) and noncompositional (global) strategies. Accordingly, systematic processes of language productivity are explainable by analogical inferences rather than compositional operations: novel expressions are understood 'on the fly' by analogy with familiar ones. This Element discusses compositionality, alternative mechanisms in language processing, and explains why Construction Grammar is the most suitable approach for formalizing language comprehension.
Research in behavioral decision-making has produced many models of decision under risk. To improve our understanding of choice under risk, it is essential to perform rigorous model comparisons over large sets of decision settings to find which models are most useful. Recently, such large-scale comparisons have produced conflicting conclusions: A variant of cumulative prospect theory (CPT) was the best model in a study by He, Analytis, and Bhatia (2022), whereas variants of the model BEAST were the best in two choice prediction competitions. This study delves into these contradictions to identify and explore the underlying reasons. We replicate and extend the analysis by He et al., this time incorporating BEAST, which was previously excluded because it cannot be analytically estimated. Our results show that while CPT excels in systematically hand-crafted tasks, BEAST—originally designed for broader decision-making contexts—matches or even surpasses CPT’s performance when choice tasks are randomly selected and predictions are made for new, unknown decision makers. This success of BEAST, a model very different from classical decision models in that it does not assume, for example, subjective transformations of outcomes and probabilities, calls into question previous conclusions concerning the underlying psychological mechanisms of choice under risk. Our results challenge the field to expand beyond established evaluation techniques and highlight the importance of an inclusive approach toward nonanalytic models, like BEAST, to achieve more objective insights into decision-making behavior.
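For orientation, the sketch below implements the textbook cumulative-prospect-theory building blocks (the value and probability-weighting functions of Tversky & Kahneman 1992) for a simple two-outcome gamble. The He et al. (2022) variant and the parameter values used in the comparisons may differ; the numbers here are illustrative.

```python
def value(x, alpha=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S-shaped probability weighting."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def cpt_two_outcome_gamble(high, p_high, low):
    """CPT value of (high with probability p_high, else low), with high >= low >= 0."""
    return weight(p_high) * value(high) + (1 - weight(p_high)) * value(low)

# Certainty effect: a sure 30 can be valued above a risky 45 despite its lower expected value.
print(cpt_two_outcome_gamble(45, 0.8, 0))   # risky option
print(value(30))                            # sure option
```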
Computational models allow researchers to formulate explicit theories of language acquisition, and to test these theories against natural language corpora. This chapter puts the problem of bilingual phonetic and phonological acquisition in a computational perspective. The main goal of the chapter is to show how computational modeling can be used to address crucial questions regarding bilingual phonetic and phonological acquisition, which would be difficult to address with other experimental methods. The chapter first provides a general introduction to computational modeling, using a simplified model of phonotactic learning as an example to illustrate the main methodological issues. The chapter then gives an overview of recent studies that have begun to address the computational modeling of bilingual phonetic and phonological acquisition, focusing on phonetic and phonological cues for bilingual input separation, bilingual phonology in computational models of speech comprehension, and computational models of L2 speech perception. The chapter concludes by discussing several key challenges in the development of computational models of bilingual phonetic and phonological acquisition.
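A simplified phonotactic learner of the kind that can serve as such an introductory example is sketched below: a smoothed bigram model over segments, trained on a toy lexicon, that scores novel words by how well their segment transitions match the training data. The toy data and the smoothing choice are illustrative assumptions, not the chapter's actual example.

```python
from collections import Counter
import math

def train_bigram_model(words):
    counts, context_counts = Counter(), Counter()
    for w in words:
        segs = ["#"] + list(w) + ["#"]          # word boundaries as segments
        for a, b in zip(segs, segs[1:]):
            counts[(a, b)] += 1
            context_counts[a] += 1
    return counts, context_counts

def log_score(word, counts, context_counts, smoothing=0.1, vocab_size=30):
    """Add-lambda smoothed log probability of a word's segment transitions."""
    segs = ["#"] + list(word) + ["#"]
    total = 0.0
    for a, b in zip(segs, segs[1:]):
        p = (counts[(a, b)] + smoothing) / (context_counts[a] + smoothing * vocab_size)
        total += math.log(p)
    return total

lexicon = ["pat", "tap", "pita", "tapa", "atap"]
counts, contexts = train_bigram_model(lexicon)
print(log_score("pata", counts, contexts))   # conforms to the training phonotactics
print(log_score("ptak", counts, contexts))   # ill-formed relative to the training data
```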
In this chapter, we thoroughly describe the L2LP model, its five ingredients to explain speech development from first contact with a language or dialect (initial state) to proficiency comparable to a native speaker of the language or dialect (ultimate attainment), and its empirical, computational, and statistical method. We present recent studies comparing different types of bilinguals (simultaneous and sequential) and explaining their differential levels of ultimate attainment in different learning scenarios. We also show that although the model has the word “perception” in its name, it was designed to also explain phonological development in general, including lexical development, speech production, and orthographic effects. The chapter demonstrates that the L2LP model can be regarded as a comprehensive theoretical, computational, and probabilistic model or framework for explaining how we learn the phonetics and phonology of multiple languages (sequentially or simultaneously) with variable levels of language input throughout the life span.