Starting from the evolution of the protection of human rights on the internet, the first part of this chapter analyses the proposals for new digital human rights and the methodology of their creation in different forums such as the Council of Europe and European Union as well as related processes in the United Nations Human Rights Council. The second part focuses on the challenges related to the rapid developments in artificial intelligence, such as ChatGPT, for the protection of human rights and regulatory efforts by the Council of Europe, in particular its Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law adopted in 2024 and the Artificial Intelligence Act of the European Union dating from the same year. Both instruments are analysed for their potential to protect human and fundamental rights in particular through new digital human rights. The contribution finds possible complementarity between the two regulatory approaches. Giving several examples, it concludes that there is an ongoing process of the concretisation of new digital human rights, which are mainly but not exclusively based on existing human rights.
The use of large language models (LLMs) has exploded since November 2022, but there is sparse evidence regarding LLM use in health, medical, and research contexts. We aimed to summarise the current uses of and attitudes towards LLMs across our campus’ clinical, research, and teaching sites. We administered a survey about LLM uses and attitudes. We conducted summary quantitative analysis and inductive qualitative analysis of free text responses. In August–September 2023, we circulated the survey amongst all staff and students across our three campus sites (approximately n = 7500), comprising a paediatric academic hospital, research institute, and paediatric university department. We received 281 anonymous survey responses. We asked about participants’ knowledge of LLMs, their current use of LLMs in professional or learning contexts, and perspectives on possible future uses, opportunities, and risks of LLM use. Over 90% of respondents have heard of LLM tools and about two-thirds have used them in their work on our campus. Respondents reported using LLMs for various uses, including generating or editing text and exploring ideas. Many, but not necessarily all, respondents seem aware of the limitations and potential risks of LLMs, including privacy and security risks. Various respondents expressed enthusiasm about the opportunities of LLM use, including increased efficiency. Our findings show LLM tools are already widely used on our campus. Guidelines and governance are needed to keep up with practice. Insights from this survey were used to develop recommendations for the use of LLMs on our campus.
With research showing the benefits of feedback, teachers have come under increasing pressure to provide more, including more personalised, and more detailed responses to students. This often places heavy demands on teachers and with ever-larger class sizes and heavier workloads, teacher fatigue and burn-out are common. Automation has the potential to change all this and new digital resources have already proven to be valuable in supporting L2 writing. In this paper I look at the contribution of Automated Writing Evaluation (AWE) programmes and Generative Artificial Intelligence (GenAI) to feedback. The ability to provide instant local and global feedback across multiple drafts targeted to student needs and in greater quantities promises to increase learner motivation and autonomy while relieving teachers of hours of marking. But haven’t we heard this all before? Are these empty claims which raise our expectations of removing some of the drudgery of mundane grammar correction? Most importantly, what is the role of teachers in all this, and can AI really improve writers and not just texts?
Education aims to improve our innate abilities, teach new skills and habits, and nurture intellectual virtues. Poorly designed or misused generative AI disrupts these educational goals. I propose strategies to design generative AI that aligns with education’s aims. The paper proposes a design for a generative AI tutor that teaches students to question well. I argue that such an AI can also help students learn to lead noble inquiries, achieve deeper understanding, and experience a sense of curiosity and fascination. Students who learn to question effectively through such an AI tutor may also develop crucial intellectual virtues.
The last decade has seen an exponential increase in the development and adoption of language technologies, from personal assistants such as Siri and Alexa, through automatic translation, to chatbots like ChatGPT. Yet questions remain about what we stand to lose or gain when we rely on them in our everyday lives. As a non-native English speaker living in an English-speaking country, Vered Shwartz has experienced both amusing and frustrating moments using language technologies: from relying on inaccurate automatic translation, to failing to activate personal assistants with her foreign accent. English is the world's foremost go-to language for communication, and mastering it past the point of literal translation requires acquiring not only vocabulary and grammar rules, but also figurative language, cultural references, and nonverbal communication. Will language technologies aid us in the quest to master foreign languages and better understand one another, or will they make language learning obsolete?
This study explores the role of ChatGPT in the completeness of collaborative computer-aided design (CAD) tasks requiring varying types of engineering knowledge. In the experiment involving 22 pairs of mechanical engineering students, three different collaborative CAD tasks were undertaken with and without ChatGPT support. The findings indicate that ChatGPT support hinders completeness in collaborative CAD-specific tasks reliant on CAD knowledge but demonstrates limited potential in assisting open-ended tasks requiring domain-specific engineering expertise. While ChatGPT mitigates task-specific challenges by providing general engineering knowledge, it fails to improve overall task completeness. The results underscore the complementary role of AI and human knowledge.
At what time does the afternoon start, at 1 p.m. or 3 p.m.? Language understanding requires the ability to correctly match statements to their real-world meaning. This mapping process is a function of the context, which includes various factors such as location and time as well as the speaker’s and listeners’ backgrounds. For example, an utterance like, “It is hot today,” would mean different things were it expressed in Death Valley versus Alaska. Depending on their backgrounds and experiences, people interpret time expressions, color descriptions, geographic expressions, qualities, relative expressions, and more in different ways. This ability to map language to real-world meaning is also required of the language technology tools we use. For example, translating a recipe that contains instructions to “preheat the oven to 180 degrees” requires a translation system to understand the implicit scale (e.g. Celsius versus Fahrenheit) based on the source language and the user’s location. To date, no automatic translation systems can do this, and there is little “grounding” in any widely used language technology tool.
Non-compositional phrases such as “by and large” are phrases whose meaning cannot be unlocked by simply translating the combination of words they constitute. In particular, figurative expressions – such as idioms, similes and metaphors – are ubiquitous in English. Among other reasons, figurative expressions are acquired late in the language learning journey because they often capture cultural conventions and social norms associated with the people speaking the language. Figurative expressions are especially prevalent in creative writing, acting as the spice that adds flavor to the writing. Artificial intelligence (AI) writing assistants such as ChatGPT are now capable of editing raw drafts into well-written pieces, to the advantage of native and non-native speakers alike. These AI tools, which have gained their writing skills from exposure to vast amounts of online text, are extremely adept at generating text similar to the texts they have been exposed to. Unfortunately, they have demonstrated shortcomings in creative writing that requires deviating from the norm.
While what is said can be difficult to understand, what is not said may pose an even bigger challenge. Language is efficient, so often what goes without saying is simply not being said. It is left for the reader or listener to interpret underspecified language and resolve ambiguities, a task that we do seamlessly using our personal experience, knowledge about the world, and commonsense reasoning abilities. In many cases, commonsense knowledge helps EFL learners compensate for low language proficiency. However, what is considered “commonsense” is not always universal. Some commonsense knowledge, especially pertaining to social norms, differs between cultures. Can language technologies help bridge this cultural gap? It depends. Chatbots like ChatGPT seem to have broad knowledge about every possible topic in the world. However, ChatGPT learned about the world from reading all the English text on the web, which is primarily coming from the US, and thus it has a North American lens. In addition, despite being “book smart,” it still lacks basic commonsense reasoning abilities that are employed by us to understand social interactions and navigate the world around us.
In contrast to the rest of the book, this chapter discusses not what to say in English and how to speak it, but rather what is not socially acceptable to speak about in North American culture: from offensive language and profanity to sensitive topics such as sex and politics. These taboo subjects differ by culture, and EFL speakers who come from cultures that are more direct might find themselves saying something inappropriate – just as chatbots can sometimes generate offensive content. The developers of chatbots like ChatGPT have programmed filters to prevent them from generating offensive text. Those filters are based on the norms of the developers themselves, most of whom are based in North America, and this can make a chatbot’s refusal to answer some questions seem excessively careful through the lens of other cultures.
Although the internet has removed geographical boundaries, transforming the world into a global village, English is still the most dominant language online. New forms of online communication such as emoji and memes have become an integral part of internet language. While it’s tempting to think of such visual communication formats as removing the cultural barriers – after all, emoji appear like a universal alphabet – their interpretation may rely on cultural references.
The emergence of ChatGPT as a leading artificial intelligence language model developed by OpenAI has sparked substantial interest in the field of applied linguistics, due to its extraordinary capabilities in natural language processing. Research on its use in service of language learning and teaching is on the horizon and is anticipated to grow rapidly. In this review article, we aim to capture this nascent field, drawing on a literature corpus of 71 papers of a variety of genres – empirical studies, reviews, position papers, and commentaries. Our narrative review takes stock of current research on ChatGPT’s application in foreign language learning and teaching, uncovers both conceptual and methodological gaps, and identifies directions for future research.
The proliferation of Artificial Intelligence (AI) is significantly transforming conventional legal practice. The integration of AI into legal services is still in its infancy and faces challenges such as privacy concerns, bias, and the risk of fabricated responses. This research evaluates the performance of the following AI tools: (1) ChatGPT-4, (2) Copilot, (3) DeepSeek, (4) Lexis+ AI, and (5) Llama 3. Based on their comparison, the research demonstrates that Lexis+ AI outperforms the other AI solutions. All of these tools still produce hallucinations, despite claims that utilizing the Retrieval-Augmented Generation (RAG) model has resolved this issue. The RAG system is not the driving force behind the results; it is one component of the AI architecture that influences but does not solely account for the problems associated with the AI tools. This research explores RAG architecture and its inherent complexities, offering viable solutions for improving the performance of AI-powered solutions.
This empirical study explores three aspects of engagement (affective, behavioral, and cognitive) in language learning within an English as a Foreign Language context in Japan, examining their relationship with AI utilization. Previous research has demonstrated that motivation positively influences AI usage. This study expands on that by connecting motivation with engagement, where AI usage serves as an intermediary construct. A total of 174 students participated in the study. Throughout the semester, they were required to use Generative AI (GenAI) to receive feedback on their writing. To prevent overreliance or plagiarism, carefully crafted prompts were selected. Students were tasked with collaboratively constructing essays during the semester using GenAI. At the end of the semester, students completed a survey measuring their motivation and engagement. Structural Equation Modeling was employed to reaffirm the previous finding that motivation influences AI usage. The results showed that AI usage impacts all three aspects of engagement. Based on these findings, the study suggests the pedagogical feasibility of implementing GenAI in writing classes with proper teacher guidance. Rather than being a threat, the use of this technological tool complements the role of human teachers and supports learning engagement.
Generative artificial intelligence (AI) systems, notably ChatGPT, have emerged in legal practice, facilitating the completion of tasks, ranging from electronic communications to the drafting of documents. The generative capabilities of these systems underscore the duty of lawyers to competently represent their clients by keeping abreast of technological developments that can enhance the efficiency and effectiveness of their work. At the same time, the processing of clients’ information through generative AI systems threatens to compromise their confidentiality if disclosed to third parties, including the systems’ providers. The present paper aims to determine the impact of the use of generative AI systems by lawyers on the duties of competence and confidentiality. The findings derive from the application of doctrinal and empirical research on the legal practice and its digitalisation in Luxembourg. The paper finally reflects on the integration of generative AI systems in legal practice to raise the quality of legal services for clients.
After its launch on 30 November 2022, ChatGPT (or Chat Generative Pre-Trained Transformer) quickly became the fastest-growing app in history, gaining one hundred million users in just two months. Developed by the US-based artificial-intelligence firm OpenAI, ChatGPT is a free, text-based AI system designed to interact with the user in a conversational way. Capable of answering complex questions with sophistication and of conversing in a breezy and impressively human style, ChatGPT can also generate outputs in a seemingly endless variety of formats, from professional memos to Bob Dylan lyrics, HTML code to screenplays and five-alarm chilli recipes to five-paragraph essays. Its remarkable capability relative to earlier chatbots gave rise to both astonishment and concern in the tech sector. On 22 March 2023 a group of more than one thousand scientists and entrepreneurs published an open letter calling for a six-month moratorium on further human-competitive AI development – a moratorium that was not observed.
Since the publication of “What is the Current and Future Status of Digital Mental Health Interventions?” the exponential growth and widespread adoption of ChatGPT have underscored the importance of reassessing its utility in digital mental health interventions. This review critically examined the potential of ChatGPT, particularly focusing on its application within clinical psychology settings as the technology has continued evolving through 2023 and 2024. Alongside this, our literature review spanned US Medical Licensing Examination (USMLE) validations, assessments of the capacity to interpret human emotions, and analyses concerning the identification of depression and its determinants at treatment initiation. Our review evaluated the capabilities of GPT-3.5 and GPT-4.0 separately in clinical psychology settings, highlighting the potential of conversational AI to overcome traditional barriers such as stigma and accessibility in mental health treatment. Each model displayed different levels of proficiency, indicating a promising yet cautious pathway for integrating AI into mental health practices.
This study explored the effects of interacting with ChatGPT 4.0 on L2 learners’ motivation to write English argumentative essays. Conducted at a public university in a non-English-speaking country, the study had an experimental and mixed-methods design. It utilized both quantitative and qualitative data analyses to inform the development of effective AI-enhanced tailored interventions for teaching L2 essay writing. Overall, the results revealed that interacting with ChatGPT 4.0 had a positive lasting effect on learners’ motivation to write argumentative essays in English. However, a decline in their motivation at the delayed post-intervention stage suggested the need to maintain a balance between utilizing ChatGPT as a writing support tool and enhancing their independent writing capabilities. Learners attributed the increase in their motivation to several factors, including their perceived improvement in essay writing skills, the supportive learning environment created by ChatGPT as a tutor, positive interactions with it, and the development of meta-cognitive awareness by addressing their specific writing issues. The study highlights the potential of AI-based tools in enhancing L2 learners’ motivation in English classrooms.
The advent of generative artificial intelligence (AI) models holds potential for aiding teachers in the generation of pedagogical materials. However, numerous knowledge gaps concerning the behavior of these models obfuscate the generation of research-informed guidance for their effective usage. Here, we assess trends in prompt specificity, variability, and weaknesses in foreign language teacher lesson plans generated by zero-shot prompting in ChatGPT. Iterating a series of prompts that increased in complexity, we found that output lesson plans were generally high quality, though additional context and specificity to a prompt did not guarantee a concomitant increase in quality. Additionally, we observed extreme cases of variability in outputs generated by the same prompt. In many cases, this variability reflected a conflict between outdated (e.g. reciting scripted dialogues) and more current research-based pedagogical practices (e.g. a focus on communication). These results suggest that the training of generative AI models on classic texts concerning pedagogical practices may bias generated content toward teaching practices that have been long refuted by research. Collectively, our results offer immediate translational implications for practicing and training foreign language teachers on the use of AI tools. More broadly, these findings highlight trends in generative AI output that have implications for the development of pedagogical materials across a diversity of content areas.
Recent advances in large language models (LLMs), such as GPT-4, have spurred interest in their potential applications across various fields, including actuarial work. This paper introduces the use of LLMs in actuarial and insurance-related tasks, both as direct contributors to actuarial modelling and as workflow assistants. It provides an overview of LLM concepts and their potential applications in actuarial science and insurance, examining specific areas where LLMs can be beneficial, including a detailed assessment of the claims process. Additionally, a decision framework for determining the suitability of LLMs for specific tasks is presented. Case studies with accompanying code showcase the potential of LLMs to enhance actuarial work. Overall, the results suggest that LLMs can be valuable tools for actuarial tasks involving natural language processing or structuring unstructured data and as workflow and coding assistants. However, their use in actuarial work also presents challenges, particularly regarding professionalism and ethics, for which high-level guidance is provided.
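One actuarial use named in this abstract – structuring unstructured data with an LLM – can be sketched as a small pipeline: build an extraction prompt from a free-text claim note, call a model, and validate the structured result before it enters downstream work. The `call_llm` function below is a hypothetical stand-in that returns a canned response; the field names and the example note are likewise assumptions for illustration, not the paper's case studies.

```python
# Hedged sketch: using an LLM to turn a free-text claim note into a
# structured record. `call_llm` is a placeholder for a real model API.
import json

def build_prompt(note: str) -> str:
    """Ask the model to extract fixed fields and reply in JSON."""
    return (
        "Extract the claim date, cause of loss, and claimed amount from the "
        "note below. Reply only with JSON keys: date, cause, amount.\n\n" + note
    )

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned response for this demo.
    return '{"date": "2023-07-14", "cause": "water damage", "amount": 12500}'

def structure_claim(note: str) -> dict:
    """Prompt the model, parse its JSON reply, and validate the schema."""
    record = json.loads(call_llm(build_prompt(note)))
    if set(record) != {"date", "cause", "amount"}:
        raise ValueError("unexpected fields in model output")
    return record

record = structure_claim(
    "Burst pipe on 14 July 2023; policyholder claims 12,500 for water damage."
)
```

The validation step reflects the abstract's caution about professionalism: model output should be checked against an expected schema before feeding an actuarial workflow.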