Artificial Intelligence is an area of law where legal frameworks are still at an early stage. The chapter discusses some of the core HCI-related concerns with AI, including deepfakes, bias and discrimination, and concepts at the intersection of AI and intellectual property, including AI infringement and AI protection.
Designed for educators, researchers, and policymakers, this insightful book equips readers with practical strategies, critical perspectives, and ethical insights into integrating AI in education. First published in Swedish in 2023, and here translated, updated, and adapted for an English-speaking international audience, it provides a user-friendly guide to the digital and AI-related challenges and opportunities in today's education systems. Drawing upon cutting-edge research, Thomas Nygren outlines how technology can be usefully integrated into education, not as a replacement for humans, but as a tool that supports and reinforces students' learning. Written in accessible language, topics covered include AI literacy, source awareness, and subject-specific opportunities. The central role of the teacher is emphasized throughout, as is the importance of thoughtful engagement with technology. By guiding the reader through the fast-evolving digital transformation in education globally, it ultimately enables students to become informed participants in the digital world.
Given the potential of generative artificial intelligence (GenAI) to create human clones, it is not surprising that chatbots have been implemented in politics. In a turbulent political context, these AI-driven bots are likely to be used to spread biased information, amplify polarisation, and distort our memories. Large language models (LLMs) lack ‘political memory’ and cannot accurately process political discourses that draw from collective political memory. We refer to research concerning collective political memory and AI to present our observations of a chatbot experiment undertaken during the Presidential Elections in Finland in early 2024. This election took place at a historically crucial moment, as Finland, traditionally an advocate of neutrality and peacefulness, had become a vocal supporter of Ukraine and a new member state of NATO. Our research team developed LLM-driven chatbots for all presidential candidates, and Finnish citizens were afforded the chance to engage with these chatbot–politicians. In our study, human–chatbot discussions related to foreign and security politics were especially interesting. While rhetorically very typical and believable in light of real political speech, chatbots reorganised prevailing discourses, generating responses that distorted the collective political memory. In actuality, Russia’s full-scale invasion of Ukraine had drastically changed Finland’s political positioning. Our AI-driven chatbots, or ‘electobots’, continued to promote constructive dialogue with Russia, thus earning our moniker ‘Finlandised Bots’. Our experiment highlights that training AI for political purposes requires familiarity with the prevailing discourses and attunement to the nuances of the context, showcasing the importance of studying human–machine interactions beyond the typical viewpoint of disinformation.
Extant work shows that generative AI such as GPT-3.5 and GPT-4 perpetuate social stereotypes and biases. A less explored source of bias is ideology: do GPT models take ideological stances on politically sensitive topics? We develop a novel approach to identify ideological bias and show that it can originate in both the training data and the filtering algorithm. Using linguistic variation across countries with contrasting political attitudes, we evaluate average GPT responses in those languages. GPT output is more conservative in languages used in conservative societies (Polish) and more liberal in languages used in liberal ones (Swedish). These differences persist from GPT-3.5 to GPT-4. We conclude that high-quality, curated training data are essential for reducing bias.
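As a rough illustration of the language-contrast method this abstract describes, the sketch below (Python) repeatedly elicits a model's answer to the same politically sensitive item phrased in Polish and in Swedish and compares the average stance. The item wording, the 1–5 scale, and the model names are illustrative assumptions, not the study's actual materials.

```python
"""Hedged sketch: compare average model stances across languages.
The prompts and scale below are placeholders, not the study's items."""
from statistics import mean

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative translations of one politically sensitive item; a real study
# would use many items phrased natively in each language.
ITEMS = {
    "Polish": ("Czy rząd powinien podnieść podatki, aby sfinansować opiekę zdrowotną? "
               "Odpowiedz liczbą od 1 (zdecydowanie nie) do 5 (zdecydowanie tak)."),
    "Swedish": ("Bör staten höja skatterna för att finansiera sjukvården? "
                "Svara med en siffra från 1 (absolut inte) till 5 (absolut ja)."),
}

def average_stance(model: str, prompt: str, n: int = 20) -> float:
    """Average numeric stance over n sampled completions; non-numeric replies are skipped."""
    scores = []
    for _ in range(n):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        ).choices[0].message.content or ""
        digits = [c for c in reply if c.isdigit()]
        if digits:
            scores.append(int(digits[0]))
    return mean(scores) if scores else float("nan")

for language, item in ITEMS.items():
    for model in ("gpt-3.5-turbo", "gpt-4"):
        print(language, model, round(average_stance(model, item), 2))
```

Averaging over many sampled completions, rather than reading a single answer, is what allows a cross-language difference to be interpreted as a systematic tendency rather than sampling noise.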
Central to drawn representations of activism and memory are ideas of embodiment and trace. From DIY protest signs to craftivism, the articulation of protest and memory is connected to the handmade trace of a witnessing individual present in time and place. This is reflected in comics scholarship through the notion of the drawn line conveying subjective experience through the trace of the body.
This article will consider the relationship between witnessing, truth claims, autographic drawing, and memory at a moment when AI image-generation tools have called into question the connection of drawn traces to their origin in time, space, materiality, and the body.
Drawing on a combination of critical AI theory and comics studies, this article will outline ways in which generative AI presents a challenge to these ideas. Through comparison of Joe Sacco’s graphic reportage with recent AI images of conflict and history, the article considers the truth claims of images that are the products of computational and algorithmic processes, broadly construed.
Comics scholarship has been slow to critically respond to these new conditions, and the task of disentangling the human/non-human in ontologies of trace is now compounded by generative drawings, which represent the outcome of archival reappropriation defined by opaque algorithmic parameters. This article will explore theoretical assumptions around authenticity and truth claims in analogue, computational, algorithmic, and generative drawing practice and ask what kinds of theory and practice are appropriate if activist graphic memoirs are to endure as documents of political memory.
Advanced AI (generative AI) poses challenges to the practice of law and to society as a whole. The proper governance of AI is unresolved but will likely be multifaceted, combining soft law (such as standardisation, best practices and ethical guidelines) with hard law consisting of a blend of existing law and new regulations. This chapter argues that lawyers’ professional codes of conduct (ethical guidelines) provide a governance system that can be applied to the AI industry. The increase in professionalisation warrants treating AI creators, developers and operators as professionals subject to the obligations foisted on the legal profession and other learned professions. Legal ethics provides an overall conceptual structure that can guide AI development, serving the purposes of disclosing potential liabilities to AI developers and building trust for the users of AI. Additionally, AI creators, developers and operators should be subject to fiduciary duty law. Applied to these professionals, fiduciary duty law would require a duty of care in designing safe AI systems; a duty of loyalty to customers, users and society not to create systems that manipulate consumers or democratic governance; and a duty of good faith to create beneficial systems. This chapter advocates the use of ethical guidelines and fiduciary law not as soft law but as the basis for structuring private law in the governance of AI.
Education aims to improve our innate abilities, teach new skills and habits, and nurture intellectual virtues. Poorly designed or misused generative AI disrupts these educational goals. I propose strategies to design generative AI that aligns with education’s aims. The paper proposes a design for a generative AI tutor that teaches students to question well. I argue that such an AI can also help students learn to lead noble inquiries, achieve deeper understanding, and experience a sense of curiosity and fascination. Students who learn to question effectively through such an AI tutor may also develop crucial intellectual virtues.
The integration of Generative Artificial Intelligence (GAI) is reshaping traditional legal pedagogy by introducing new dimensions to research, drafting, and personalized learning. Despite its transformative potential, the application of GAI in legal education is accompanied by challenges such as algorithmic bias, privacy concerns, and the generation of fabricated or inaccurate content, issues that continue to divide scholarly opinion on its adoption. With this in mind, the University of Kansas School of Law offered a one-credit, one-hour course on “AI for Lawyers” during the summer of 2025. Along with reproducing the syllabus of the course and some of the professor’s observations, the paper includes an overview of AI and legal pedagogy and some recommendations going forward for integrating AI into law school curricula.
The chapter highlights the importance of AI literacy. It explores the opportunities and challenges that AI creates in the educational context, such as strategies for technology use and what AI tools like ChatGPT can enable and hinder in the learning process.
This scoping review directs attention to artificial intelligence–mediated informal language learning (AI-ILL), defined as autonomous, self-directed, out-of-class second and foreign language (L2) learning practices involving AI tools. Through analysis of 65 empirical studies published up to mid-April 2025, it maps the landscape of this emerging field and identifies the key antecedents and outcomes. Findings revealed a nascent field characterized by exponential growth following ChatGPT’s release, geographical concentration in East Asia, methodological dominance of cross-sectional designs, and limited theoretical foundations. Analysis also demonstrated that learners’ AI-mediated informal learning practices are influenced by cognitive, affective, and sociocontextual factors, while producing significant benefits across linguistic, affective, and cognitive dimensions, particularly enhanced speaking proficiency and reduced communication anxiety. This review situates AI-ILL as an evolving subfield within intelligent CALL and suggests important directions for future research to understand the potential of constantly emerging AI technologies in supporting autonomous L2 development beyond the classroom.
Artificial Intelligence (AI) has reached memory studies in earnest. This partly reflects the hype around recent developments in generative AI (genAI), machine learning, and large language models (LLMs). But how can memory studies scholars handle this hype? Focusing on genAI applications, in particular so-called ‘chatbots’ (transformer-based instruction-tuned text generators), this commentary highlights five areas of critique that can help memory scholars to critically interrogate AI’s implications for their field. These are: (1) historical critiques that complicate AI’s common historical narrative and historicize genAI; (2) technical critiques that highlight how genAI applications are designed and function; (3) praxis critiques that centre on how people use genAI; (4) geopolitical critiques that recognize how international power dynamics shape the uneven global distribution of genAI and its consequences; and (5) environmental critiques that foreground genAI’s ecological impact. For each area, we highlight debates and themes that we argue should be central to the ongoing study of genAI and memory. We do this from an interdisciplinary perspective that combines our knowledge of digital sociology, media studies, literary and cultural studies, cognitive psychology, and communication and computer science. We conclude with a methodological provocation and by reflecting on our own role in the hype we are seeking to dispel.
Generative artificial intelligence (AI), particularly large language models, offers transformative potential for the management and operation of urban water systems. As water utilities face increasing pressures from climate change, ageing infrastructure and population growth, AI-driven tools provide new opportunities for real-time monitoring, predictive maintenance and enhanced decision support. This article explores how generative AI can revolutionise the water industry by enabling more efficient operations, improved customer engagement and advanced training mechanisms. It examines current applications, such as AI-integrated supervisory control and data acquisition systems and conversational interfaces, and evaluates their performance through emerging case studies. While highlighting the benefits, the article also addresses key challenges, including data privacy, model reliability, ethical considerations and regulatory uncertainty. Through a balanced analysis of opportunities and risks, this study outlines future directions for research and policy, offering practical recommendations for the responsible adoption of generative AI in urban water management to improve resilience, efficiency and sustainability across the sector.
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
Chapter 3 examines the regulatory approaches outlined in the Artificial Intelligence Act (AIA) concerning Emotion Recognition Systems (ERS). As the first legislation specifically addressing ERS, the EU’s AI Act employs a multilayered framework that classifies these systems as both limited-risk and high-risk AI technologies. By categorising all ERS as limited risk, the AIA aims to eliminate the practice of inferring emotions or intentions from individuals without their awareness. Additionally, all ERS must adhere to the stringent requirements set for high-risk AI systems. The use of AI systems for inferring emotions in workplace and educational settings is classified as an unacceptable risk and thus prohibited. Considering the broader context, the regulation of ERS represents a nuanced effort by legislators to balance the promotion of innovation with the necessity of imposing rigorous safeguards. However, this book contends that the AIA should not be seen as the ultimate regulation of MDTs. Instead, it serves as a general framework or baseline that requires further legal measures, including additional restrictions or prohibitions through sector-specific legislation.
Generative AI based on large language models (LLMs) currently faces serious privacy leakage issues due to the wide range of parameters and diverse data sources. When using generative AI, users inevitably share data with the system. Personal data collected by generative AI may be used for model training and leaked in future outputs. The risk of private information leakage is closely related to the inherent operating mechanism of generative AI. This indirect leakage is difficult for users to detect due to the high complexity of the internal operating mechanism of generative AI. By focusing on the private information exchanged during interactions between users and generative AI, we identify the privacy dimensions involved and develop a model of privacy types in human–generative AI interactions. This can provide a reference for generative AI to avoid training on private data and help it provide clear explanations of relevant content for the types of privacy users are concerned about.
This article explores the transformational potential of artificial intelligence (AI), particularly generative AI (genAI) – large language models (LLMs), chatbots, and AI-driven smart assistants yet to emerge – to reshape human cognition, memory, and creativity. First, the paper investigates the potential of genAI tools to enable a new form of human-computer co-remembering, based on prompting rather than traditional recollection. Second, it examines the individual, cultural, and social implications of co-creating with genAI for human creativity. These phenomena are explored through the concept of Homo Promptus, a figure whose cognitive processes are shaped by engagement with AI. Two speculative scenarios illustrate these dynamics. The first, ‘prompting to remember’, analyses genAI tools as cognitive extensions that offload memory work to machines. The second scenario, ‘prompting to create’, explores changes in creativity when performing together with genAI tools as co-creators. By mobilising concepts from cognitive psychology, media and memory studies, together with Huizinga’s exploration of play, and Rancière’s intellectual emancipation, this study argues that genAI tools are not only reshaping how humans remember and create but also redefining cultural and social norms. It concludes by calling for ‘critical’ engagement with the societal and intellectual implications of AI, advocating for research that fosters adaptive and independent (meta)cognitive practices to reconcile digital innovation with human agency.
This research explores concertinaing past, present and future interventional creative and pedagogical practices to address the challenges of the Post-Anthropocene era. We argue that the Post-Anthropocene is marked by biotechnological entanglements, environmental violence and digital overstimulation. The discussions herein critique a hyperattentive achievement society characterised by a scattering of attention, a near-constant screen-mediated stream of digital material and tasks and the commodification of leisure time. Enlisting Byung-Chul Han’s concept of hyperattention and themes and motifs from David Cronenberg’s films, the authors propose “FUTURE PROOF re(image)ining” as a collaborative Cli-Fi narrative concept. The project reimagines objects from an initial art installation with a diffusion-based machine learning model. By drawing on a constellation of Taoist philosophical practices, Zen garden design, scholars’ rocks and Cronenbergian themes, the authors propose an exhibition featuring reimagined cave-like gongshi rock structures and objects. A triangulation of spaces for FUTURE PROOF participants to inhabit facilitates an unfolding contemplative-creative trajectory. The concept includes a sensory deprivation cave, a View-Master cave for focused stereoscopic image viewing and a haiku/soundscape cave to initiate experiences. FUTURE PROOF aims to promote deep contemplation, challenging some of the deleterious aspects of Western digital-algorithmic screen culture and cultivating relationality with an always more-than-human world.
This article constructs an approach to analyzing longitudinal panel data which combines topological data analysis (TDA) and generative AI applied to graph neural networks (GNNs). TDA is deployed to identify and analyze unobserved topological heterogeneities of a dataset. TDA-extracted information is quantified into a set of measures, called functional principal components. These measures are used to analyze the data in four ways. First, the measures are construed as moderators of the data and their statistical effects are estimated through a Bayesian framework. Second, the measures are used as factors to classify the data into topological classes using generative AI applied to GNNs constructed by transforming the data into graphs. The classification uncovers patterns in the data which are otherwise not accessible through statistical approaches. Third, the measures are used as factors that condition the extraction of latent variables of the data through a deployment of a generative AI model. Fourth, the measures are used as labels for classifying the graphs into classes used to offer a GNN-based effective dimensionality reduction of the original data. The article uses a portion of the militarized international disputes (MIDs) dataset (from 1946 to 2010) as a running example to briefly illustrate its ideas and steps.
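As a minimal sketch of the general idea of feeding topological summaries of a dataset into a downstream statistical model, the Python fragment below computes the total persistence of one-dimensional features in each period's cross-section and uses it as a moderator in a simple regression. The toy data, the summary statistic, and the library choices are assumptions; the article's functional principal components, Bayesian estimation, and GNN steps are not reproduced here.

```python
"""Hedged sketch: topological summaries of panel cross-sections used as moderators.
This is a stand-in for the pipeline described above, not its implementation."""
import numpy as np
from ripser import ripser                      # Vietoris-Rips persistent homology
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def total_persistence_h1(points: np.ndarray) -> float:
    """Total persistence of 1-dimensional features (loops) in a point cloud."""
    dgm_h1 = ripser(points, maxdim=1)["dgms"][1]
    finite = dgm_h1[np.isfinite(dgm_h1[:, 1])]
    return float((finite[:, 1] - finite[:, 0]).sum()) if len(finite) else 0.0

# Toy panel: 30 units observed over 10 periods, one covariate x and one outcome y.
n_units, n_periods = 30, 10
x = rng.normal(size=(n_units, n_periods))
y = 0.5 * x + rng.normal(scale=0.3, size=(n_units, n_periods))

# One topological summary per period, computed from that period's cross-section.
summaries = np.array([
    total_persistence_h1(np.column_stack([x[:, t], y[:, t]]))
    for t in range(n_periods)
])

# Treat the summary as a moderator: interact it with the covariate.
x_long = x.T.ravel()                           # period-major flattening
y_long = y.T.ravel()
moderator = np.repeat(summaries, n_units)      # broadcast each period's summary to its units
design = np.column_stack([x_long, moderator, x_long * moderator])
fit = LinearRegression().fit(design, y_long)
print("coefficients (x, moderator, interaction):", fit.coef_)
```

In the article's full pipeline such summaries would instead be quantified as functional principal components and also passed to graph neural networks; the moderator role shown here corresponds only to the first of its four uses.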
Generative AI (GenAI) offers potential for English language teaching (ELT), but it has pedagogical limitations in multilingual contexts, often generating standard English forms rather than reflecting the pluralistic usage that represents diverse sociolinguistic realities. In response to mixed results in existing research, this study examines how ChatGPT, a text-based generative AI tool powered by a large language model (LLM), is used in ELT from a Global Englishes (GE) perspective. Using the Design and Development Research approach, we tested three ChatGPT models: Basic (single-step prompts); Refined 1 (multi-step prompting); and Refined 2 (GE-oriented corpora with advanced prompt engineering). Thematic analysis showed that Refined Model 1 provided limited improvements over Basic Model, while Refined Model 2 demonstrated significant gains, offering additional affordances in GE-informed evaluation and ELF communication, despite some limitations (e.g., defaulting to NES norms and lacking tailored GE feedback). The findings highlight the importance of using authentic data to enhance the contextual relevance of GenAI outputs for GE language teaching (GELT). Pedagogical implications include GenAI–teacher collaboration, teacher professional development, and educators’ agentive role in orchestrating diverse resources alongside GenAI.
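To make the contrast between the single-step and multi-step prompting conditions concrete, the sketch below pairs a one-shot feedback request with a two-step exchange guided by a Global Englishes-oriented system instruction. The prompt wording, the learner sentence, and the model name are assumptions for illustration and do not reproduce the study's instruments or GE-oriented corpora.

```python
"""Hedged sketch: single-step vs. multi-step prompting for GE-informed feedback.
All prompt wording here is illustrative, not the study's instrument."""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model choice
LEARNER_TEXT = "Yesterday I am going to market and buying many vegetable."

def basic_model(text: str) -> str:
    """Basic condition: one single-step prompt asking for feedback."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Give feedback on this sentence: {text}"}],
    )
    return response.choices[0].message.content or ""

def refined_model(text: str) -> str:
    """Refined condition: multi-step prompting under a Global Englishes-oriented instruction."""
    system = ("You are an English language teaching assistant informed by Global Englishes: "
              "treat intelligibility in lingua franca communication, not native-speaker norms, "
              "as the benchmark.")
    steps = [
        f"Step 1: Identify anything that could hinder intelligibility in: {text}",
        ("Step 2: Suggest revisions that preserve the learner's voice, distinguishing changes "
         "needed for intelligibility from those that merely match native-speaker norms."),
    ]
    messages = [{"role": "system", "content": system}]
    reply = ""
    for step in steps:
        messages.append({"role": "user", "content": step})
        response = client.chat.completions.create(model=MODEL, messages=messages)
        reply = response.choices[0].message.content or ""
        messages.append({"role": "assistant", "content": reply})
    return reply

print("Basic:\n", basic_model(LEARNER_TEXT))
print("Refined:\n", refined_model(LEARNER_TEXT))
```

Carrying the assistant's own intermediate output forward in the message history is what distinguishes multi-step prompting from simply concatenating all the instructions into a single request.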
The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during the creative process. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and existing frameworks describing the process of co-ideation between a designer and ChatGPT are limited. This research thus aimed to explore how LLMs can act as co-designers and influence the creative ideation processes of industrial designers, and whether the ideation performance of a designer could be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to detect how LLMs influenced the creative ideation processes of industrial designers and to understand the problems that designers face when using ChatGPT to ideate. Then, a framework based on mapping content to guide co-ideation between humans and a custom GPT (named Co-Ideator) was proposed. Finally, a design case study followed by a survey and an interview was conducted to evaluate the ideation performance of the custom GPT and framework compared with traditional ideation methods. The effect of the custom GPT on co-ideation was also compared with a condition in which no artificial intelligence (AI) was used. The findings indicated that when users engaged in co-ideation with the custom GPT, the novelty and quality of their ideation surpassed that achieved with traditional ideation methods.