This paper explores the complex dynamics of using AI, particularly generative artificial intelligence (GenAI), in post-entry language assessment (PELA) at the tertiary level. Empirical data from trials with Diagnostic English Language Needs Assessment (DELNA), the University of Auckland’s PELA, are presented.
The first study examines the capability of GenAI to generate reading text and assessment items that might be suitable for use in DELNA. A trial of this GenAI-generated academic reading assessment on a group of target participants (n = 132) further evaluates its suitability. The second study investigates the use of a fine-tuned GPT-4o model for rating DELNA writing tasks, assessing whether automated writing evaluation (AWE) provides feedback of comparable quality to human raters. Findings indicate that while GenAI shows promise in generating content for reading assessments, expert evaluations reveal a need for refinement in question complexity and targeting specific subskills. In AWE, the fine-tuned GPT-4o model aligns closely with human raters in overall scoring but requires improvement in delivering detailed and actionable feedback.
A Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis highlights AI’s potential to enhance PELA by increasing efficiency, adaptability, and personalization. AI could extend PELA’s scope to areas such as oral skills and dynamic assessment. However, challenges such as academic integrity and data privacy remain critical concerns. The paper proposes a collaborative model integrating human expertise and AI in PELA, emphasizing the irreplaceable value of human judgment. We also emphasize the need to establish clear guidelines for a human-centered AI approach within PELA to maintain ethical standards and uphold assessment integrity.
This chapter proposes a novel approach to the development and assessment of English disaster and environmental risk information for people from diverse linguistic and cultural backgrounds who require readable, more accessible translations, regardless of education level. To illustrate the development of machine learning classifiers for the predictive assessment of the likelihood that an English text will be translated into an accessible language, we use Japanese as a case study, given the linguistic and cultural contrast between English and Japanese.
This chapter describes how to characterize data and the distribution of data. We will also describe how the shape of the normal distribution enables hypothesis testing. In the section on regression, we look at how two variables, or ways of measuring data, are related to each other. We will use simple linear regression as an introduction to multiple regression, the technique used in the development of a number of traditional readability measures. A more sophisticated form of regression, logistic regression, is also discussed; it will be applied in the case studies of Chapters 4 to 6.
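The simple linear regression the chapter introduces can be sketched in a few lines. This is a minimal, self-contained illustration of ordinary least squares, the fitting method behind many traditional readability formulas; the data points (average sentence length against a difficulty score) are hypothetical and not drawn from the chapter.

```python
def simple_linear_regression(xs, ys):
    """Return (slope, intercept) minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept passes through the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: average sentence length (words) vs. a difficulty score.
xs = [8, 12, 15, 20, 25]
ys = [2.1, 3.0, 3.8, 5.1, 6.0]
slope, intercept = simple_linear_regression(xs, ys)
print(slope, intercept)
```

Multiple regression extends the same idea to several predictors at once, which is how readability measures combine features such as sentence length and word length into a single formula.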
This chapter examines the applicability and limitations of quantitative readability tools in predicting the likely readership of online environmental health educational resources. This assumes that environmental health educational resources from national health authorities or well-established health promotion organisations have been developed purposefully for and are well received by target audiences. These purposely developed, public-oriented educational materials exhibit linguistic and textual features that contribute substantially to the effective communication of health messages. The chapter will develop binary classifiers to predict the likely readership of environmental health resources given their textual and linguistic features, that is, the readability of environmental health information for specific audiences.
This chapter introduces a novel translation quality assessment mechanism, which proposes to increase the translatability of English source texts through predictive assessment using machine learning classifiers, before sending the texts for actual translation. The benefits of the pre-editing approach are manifold. First, it can significantly increase the cost-effectiveness of the translation workflow, especially for minority languages, which often lack adequate translator staffing and service support for migrant communities in real-life scenarios. Second, translatability enhancement guided by machine learning helps reduce the risks of mistranslations and miscommunications among minorities.
True crime podcasts are one of the most popular products in the landscape of media production: whether professionally produced or the fruit of amateur work, they rank highly in different charts, with a variety of topics and approaches. This article aims to initiate research into the kind of language these podcasts (might) have in common, with a particular interest in the features that might be found in so-called ‘amateur’ podcasts, which tend to have a more flexible and colloquial style and register. In particular, the research has focused on a sample podcast and on two representative episodes, which have been transcribed and analysed in order to obtain an initial corpus of typical discourse markers. The focus has been specifically on pragmatic markers such as right, you know and other interjections typical of spoken interaction, which identify the register as spoken and colloquial. By using two corpus tools, the study has been able to highlight the frequency of these markers and their typical use in collocation.
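The frequency counts that corpus tools produce for such markers can be approximated in a few lines of stdlib Python. The transcript snippet and the marker list below are invented for illustration; they are not taken from the study's corpus or its tools.

```python
import re
from collections import Counter

# Hypothetical transcript snippet; the marker list is illustrative only.
transcript = (
    "so, you know, she calls the police, right? and, like, nobody answers. "
    "you know, it was strange, right, it was just strange, you know?"
)

MARKERS = ["you know", "right", "like", "so", "well"]

def marker_frequencies(text, markers):
    """Count each pragmatic marker as a whole word or phrase, case-insensitively."""
    text = text.lower()
    counts = Counter()
    for m in markers:
        counts[m] = len(re.findall(r"\b" + re.escape(m) + r"\b", text))
    return counts

freqs = marker_frequencies(transcript, MARKERS)
print(freqs["you know"], freqs["right"])
```

Real corpus tools additionally handle tokenisation, part-of-speech ambiguity (e.g. *right* as adjective vs. marker) and collocation statistics, which a raw string match cannot.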
We investigated how previous languages and learner individual differences impact L3 word knowledge. The participants were 93 L1-Polish learners of L2-English and L3-Italian. We tested participants’ knowledge of 120 L3-Italian words: 40 L2–L3 cognates, 40 L1–L2–L3 cognates, and 40 non-cognates, controlled for many item-related variables. The knowledge and online processing of the L3 words were measured by a test inspired by the Vocabulary Knowledge Scale and a lexical decision task (LDT), respectively. The results revealed that L1–L2–L3 cognates were known better than L2–L3 cognates, but L2–L3 cognates did not differ from non-cognates. A processing advantage was observed only for low-frequency triple cognates. Moreover, cognitive aptitudes predicted the speed of responding to the keywords in the LDT. However, they did not predict participants’ performance on the vocabulary test, where L3 proficiency effects prevailed. Our results suggest that L1–L2–L3 similarity is more conducive to learning than single-sourced L2–L3 similarity.
Bilingual adults use semantic context to manage cross-language activation while reading. An open question is how lexical, contextual and individual differences simultaneously constrain this process. We used eye-tracking to investigate how 83 French–English bilinguals read L2-English sentences containing interlingual homographs (chat) and control words (pact). Between subjects, sentences biased target language or non-target language meanings (English = conversation; French = feline). Both conditions contained unbiased control sentences. We examined the impact of word-level factors (cross-language frequency) and participant-level factors (L2 age of acquisition (AoA) and reading entropy). There were three key results: (1) L2 readers showed global homograph interference in late-stage reading (total reading times) when English sentence contexts biased non-target French homograph meanings; (2) interference increased as homographs’ non-target language frequency increased and L2 AoA decreased; (3) increased reading entropy globally facilitated early-stage reading (gaze durations) in the non-target language bias condition. Thus, cross-language activation during L2 reading is constrained by multiple factors.
This Element explores literary translation into a non-native language (L2 translation), investigating how it has been regarded by translation studies, particularly in the anglophone context. L1 directionality (into the translator's L1) remains the norm in the literary translation world, reflecting a systemic bias against the multilingual subject and towards the monolingual. In a post-monolingual paradigm, the notion of a mother tongue has become increasingly problematic. What are the implications of this for directionality in translation? Studies on L2 translation still focus on and privilege the native speaker. Applying the notion of exophony (i.e., writing in a foreign language) to translation (in what is termed exophonic translation), this Element draws on insights from sociolinguistics, applied linguistics, translation history, and translator studies to lay the groundwork in advocating for an exophonic, multilingual turn in translation studies. To what extent can this change the way L2 translation is approached and studied?
The prevalence of digital technologies, augmented by the emergence of generative AI, expands opportunities for language learning and use, empowers new modes of learning, and blurs the boundaries of in-class and out-of-class language learning. The language education community is challenged to reconceptualize the paradigm of language learning and utilize the affordances of technologies to synergize in-class and out-of-class language learning. To achieve this, in-depth understanding of in-class learning and out-of-class digital experiences in relation to one another is needed to inform curriculum and pedagogy conceptualization and implementation. With this aim in mind, we put forth a research agenda around six research themes. We hope that this Thinking Allowed piece can stimulate and guide systematic research efforts towards unleashing the potential of technologies to synergize in-class and out-of-class language learning and create holistic and empowering learning experiences for language learners.
Theories about translation and about translation equivalence that have held sway over time are discussed, and corpus exploration is introduced and practised. Methods for investigating the cognitive processes involved in translating include reports by translators themselves about their cognitive activity, but also methods that allow researchers to track translators’ behaviour – in particular their eye movements and gaze and their use of the keyboard when typing their translations. Methods for tracking brain activity during translating are introduced and explained, and the influence of emotion, a relatively recent interest in the discipline, is highlighted. Influential figures in the establishment of translation studies as an independent discipline are introduced.