In the framework of the common objective of this volume, this chapter focuses on the technological element, expressed in AI, which is usually part of the definition of remote work. The chapter discusses how AI tools shape the organization and performance of remote work, how algorithms impact remote workers' rights, and how trade unions and workers can harness these powerful instruments to improve working and living conditions. Three hypotheses are considered: first, that AI systems and algorithmic management generate a de facto deepening of the subordinate position of the worker; second, that this process does not represent technological determinism but instead reflects the impact of human and institutional elements; and finally, that technological resources are usually more present in remote work than in traditional work done at the workplace. These hypotheses and concerns are addressed in several ways: by contextualizing the issue over time, through a multi-level optic centered on the interactions of different levels of regulation, by examining practical dimensions, and finally by exploring the implications for unions and worker agency.
This Element provides an overview of FinTech branches and analyzes the associated institutional forces and economic incentives, offering new insights for optimal regulation. First, it establishes a fundamental tension between addressing existing financial inefficiencies and introducing new economic distortions. Second, it demonstrates that today's innovators have evolved from pursuing incremental change through conventional FinTech applications to embracing AI × crypto, the fastest-growing segment. The convergence of previously siloed areas is creating an open-source infrastructure that reduces entry costs and enables more radical innovation, further amplifying change. Yet this transformation introduces legal uncertainty and risks related to liability, cybercrime, taxation, and adjudication. Through case studies across domains, the Element shows that familiar economic tradeoffs persist, suggesting opportunities for boundary-spanning regulation. It offers regulatory solutions, including RegTech frameworks, compliance-incentivizing mechanisms, collaborative governance models, proactive enforcement against mischaracterizations, and alternative legal analogies for AI × crypto.
Against a backdrop of rapidly expanding health artificial intelligence (AI) development, this paper examines how the European Union’s (EU) stringent digital regulations may incentivise the outsourcing of personal health data collection to low- and middle-income countries (LMICs), fuelling a new form of AI ethics dumping. Drawing on parallels with the historical offshoring of clinical trials, we argue that current EU instruments, such as the General Data Protection Regulation (GDPR), Artificial Intelligence Act (AI Act) and Medical Devices Regulation, impose robust internal safeguards but do not prevent the use of health data collected unethically beyond EU borders. This regulatory gap enables data colonialism, whereby commercial actors exploit weaker legal environments abroad without equitable benefit-sharing. Building on earlier EU responses to ethics dumping in clinical trials, we propose legal and policy pathways to prevent similar harms in the context of AI.
The book concludes by offering a discussion of how the investigation of nuclear status contributes to nuclear policy and the future of technological change in world politics. The 2017 Treaty on the Prohibition of Nuclear Weapons represents a promising, though limited, attempt at moving beyond the NPT’s legal categories by challenging the state-centrism of the global nuclear regime. And from a policy perspective, I argue that when diplomats and policymakers focus entirely on nuclear capability, they miss opportunities to engage with and address a state’s status anxiety. Negotiating with Iran and North Korea requires understanding not only their material pursuits but also the status anxieties that motivate those pursuits. Finally, the conclusion discusses how the theoretical framework of nuclear status presented in this book could be applied to understanding burgeoning technological advances in artificial intelligence.
Artificial intelligence is increasingly being used in medical practice to perform tasks previously completed by the physician, such as visit documentation, treatment plans, and discharge summaries. As artificial intelligence becomes a routine part of medical care, physicians increasingly trust and rely on its clinical recommendations. However, there is concern that some physicians, especially those younger and less experienced, will become over-reliant on artificial intelligence. Such over-reliance may reduce the quality of clinical reasoning and decision-making, negatively impact patient communication, and raise the potential for deskilling. As artificial intelligence becomes a routine part of medical treatment, it is imperative that physicians recognise the limitations of artificial intelligence tools. These tools may assist with basic administrative tasks but cannot replace the uniquely human interpersonal and reasoning skills of physicians. The purpose of this feature article is to discuss the risks of physician deskilling arising from increasing reliance on artificial intelligence.
In an era of accelerating ecological degradation, how might experimental art practices help audiences foster deeper, more empathetic engagement with the intelligence of living systems? This paper explores the potential of contemporary art, when aligned with ecological science, to reframe forest regeneration as a site of aesthetic and ethical inquiry — by regarding the forest as a primary composer within artistic and ecological frameworks. It asks: how might this approach underpin a novel form of “Critical Forest Pedagogy” capable of deepening our understanding of the collective natural intelligence of the living world and encouraging long-term conservation?
To test these ideas, a new art-science project, Forest Art Intelligence, was initiated, framing a regenerating forest as an evolving, living artwork. Because forests evolve through stages mediated by life, death, regeneration, and human influence, those stages of growth can also be framed as “process art”, a practice that values each stage of an artwork’s transformation. Collectively, therefore, this approach proposes a form of art-led “Critical Forest Pedagogy” suited to engaging communities traditionally unaligned with conservation, while remaining relevant to ecologically cognate audiences. It further asks whether this framing might promote a rethinking of the restrictive, human-centred definitions of intelligence that underpin generative AI.
How much do citizens support artificial intelligence (AI) in government and politics at different levels of decision-making authority, and to what extent is this AI support associated with citizens’ conceptions of democracy? Using original survey data from Germany, the analysis shows that people are generally sceptical toward using AI in the political realm. The findings suggest that how much citizens endorse democracy as liberal democracy, as opposed to several of its disfigurations, matters for AI support, but only in high-level politics. While a stronger commitment to liberal democracy is linked to lower support for AI, the findings contradict the idea that a technocratic notion of democracy lies behind greater acceptance of political AI uses. Acceptance is higher only among those holding reductionist conceptions of democracy, which embody the idea that whatever works to accommodate people's views and preferences is fine. Populists, in turn, appear to be against AI in political decision-making.
Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also examines neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
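As a concrete illustration of the kind of building block such a text covers, the sketch below simulates a leaky integrate-and-fire neuron, a standard starting point in neuromorphic algorithm design. The code and its parameter values are illustrative assumptions, not material drawn from the book itself.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (illustrative parameters only).

    tau is the membrane time constant in seconds, v_thresh the spiking
    threshold, and v_reset the membrane potential after a spike.
    """
    v = v_rest
    spikes = []
    for current in input_current:
        # The membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (v_rest - v) + current
        if v >= v_thresh:
            spikes.append(1)   # threshold crossing emits a spike...
            v = v_reset        # ...and resets the membrane
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive strong enough to outpace the leak yields a regular spike
# train; the firing rate (spikes per step) grows with the input strength.
print(lif_neuron(np.full(1000, 0.06)).mean())
```

Information in such systems is carried by sparse, discrete spikes rather than dense floating-point activations, which is one source of their energy efficiency.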
The development of generative artificial intelligence raises justified concerns about the possibility of undermining trust in democratic processes, especially elections. Deep fakes are often considered one of the particularly dangerous forms of media manipulation. A growing body of research confirms that they contribute to strengthening citizens’ sense of uncertainty and negatively affect the information environment. The aim of this study is to analyse the use of deep fakes in 11 countries in 2023 in the context of elections and to indicate potential consequences for future electoral processes, particularly given the significant number of elections in 2024. We argue that a so-called “information apocalypse” emerges mainly from exaggeratedly alarmist voices that make it difficult to shape responsible narratives and may have the features of a self-fulfilling prophecy. We therefore suggest using the term “pollution” instead and improving scientific and journalistic discourse, which may be a precondition for reducing threats arising from social reactions to deep fakes and their potential.
The promise of algorithmic decision-making (ADM) lies in its capacity to support or replace human decision-making based on a superior ability to solve specific cognitive tasks. Applications have found their way into various domains of decision-making—and even find appeal in the realm of politics. Against the backdrop of widespread dissatisfaction with politicians in established democracies, there are even calls for replacing politicians with machines. Our discipline has hitherto remained surprisingly silent on these issues. The present article argues that it is important to have a clear grasp of when and how ADM is compatible with political decision-making. While algorithms may help decision-makers in the evidence-based selection of policy instruments to achieve pre-defined goals, bringing ADM to the heart of politics, where the guiding goals are set, is dangerous. Democratic politics, we argue, involves a kind of learning that is incompatible with the learning and optimization performed by algorithmic systems.
Do large language models (LLMs) – such as ChatGPT-3.5 Turbo, ChatGPT-4.0, Gemini 1.0 Pro, and DeepSeek-R1 – simulate human behavior in the context of the Prisoner’s Dilemma (PD) game with varying stake sizes? Through a replication of Yamagishi et al.’s (2016) ‘Study 2’, we investigate this question, examining LLM responses to different payoff stakes and the influence of stake order on cooperation rates. We find that LLMs do not mirror the inverse relationship between stake size and cooperation found in the original study. Rather, some models (DeepSeek-R1 and ChatGPT-4.0) almost wholly defect, while others (ChatGPT-3.5 Turbo and Gemini 1.0 Pro) mirror human behavior only under very specific circumstances. LLMs demonstrate sensitivity to framing and order effects, implying the need for cautious application of LLMs in behavioral research.
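To make the replication protocol concrete, the sketch below shows one way such an experiment can be framed in code. The stake sizes, payoff scaling, prompt wording, and `query_model` stub are all illustrative assumptions, not the authors' materials.

```python
# Sketch of a Prisoner's Dilemma protocol with varying stakes for an LLM.
# All values and wording here are hypothetical, not the study's materials.
STAKES = [100, 1000, 10000]  # hypothetical endowments per game

PROMPT = (
    "You and a partner each secretly choose to COOPERATE or DEFECT.\n"
    "If both cooperate, each receives {r}. If both defect, each receives {p}.\n"
    "If one defects while the other cooperates, the defector receives {t} "
    "and the cooperator receives {s}. Reply with one word: COOPERATE or DEFECT."
)

def payoffs(stake):
    # Standard PD ordering T > R > P > S, scaled by the stake size.
    return dict(t=stake, r=int(0.6 * stake), p=int(0.2 * stake), s=0)

def query_model(prompt):
    # Placeholder: swap in a real API call to the LLM under test.
    return "DEFECT"

def cooperation_rate(stake, n_trials=50):
    replies = [query_model(PROMPT.format(**payoffs(stake)))
               for _ in range(n_trials)]
    return sum("COOPERATE" in reply.upper() for reply in replies) / n_trials

# Order effects can be probed by presenting STAKES ascending vs. descending
# within one conversation rather than in independent trials.
for stake in STAKES:
    print(stake, cooperation_rate(stake))
```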
Governments across the world are leveraging artificial intelligence (AI) to render services to citizens. Emerging economies are not left behind in this transformation, but their integration of AI into public-sector service delivery lags far behind that of the private sector. To ensure the effective integration of AI services by government agencies to serve citizens, it is necessary to understand the constellation of factors driving user adoption of AI. Therefore, this study answers the question: how can government-initiated AI services be successfully accepted by citizens? Leveraging non-probability sampling, a snowball sample of 245 tertiary student-workers from across Ghana was surveyed to solicit their knowledge, attitudes, readiness, and use intentions towards AI-enabled government services. The data were analysed using FsQCA and complemented by PLS-SEM to confirm the findings. The findings reveal four unique configurations, summarised into two broad groups, AI enthusiasts and AI sceptics, that drive AI adoption in government services. Positive readiness factors, such as knowledge of AI and optimism towards AI, characterise AI enthusiasts. In contrast, those described as AI sceptics still adopt government AI services despite their reservations and general distrust. AI sceptics are a delicate group that sits at the boundary between adoption and rejection, requiring special attention and strategies to orient them towards adoption. The study recommends effective education and trust-building strategies to foster AI adoption in government services. The findings are essential for driving the efficient implementation of AI-enabled services among working-class citizens in emerging economies.
Ultra-processed foods (UPF), defined using frameworks such as NOVA, are increasingly linked to adverse health outcomes, driving interest in ways to identify and monitor their consumption. Artificial intelligence (AI) offers potential, yet its application in classifying UPF remains underexamined. To address this gap, we conducted a scoping review mapping how AI has been used, focusing on techniques, input data, classification frameworks, accuracy, and application. Studies were eligible if they were peer-reviewed, published in English (2015–2025), and applied AI approaches to assess or classify UPF using recognised or study-specific frameworks. A systematic search in May 2025 across PubMed, Scopus, Medline, and CINAHL identified 954 unique records, with eight ultimately meeting the inclusion criteria; one additional study was added in October following an updated search after peer review. Records were independently screened and extracted by two reviewers. Extracted data covered AI methods, input types, frameworks, outputs, validation, and context. Studies used diverse techniques, including random forest classifiers, large language models, and rule-based systems, applied across various contexts. Four studies explored practical settings: two assessed consumption or purchasing behaviours, and two developed substitution tools for healthier options. All relied on NOVA or modified versions of it to categorise processing. Several studies reported predictive accuracy, with F1 scores from 0·86 to 0·98, while another showed alignment between clusters and NOVA categories. Findings highlight the potential of AI tools to improve dietary monitoring, and the need for further development of real-time methods and validation to support public health.
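As an illustration of one technique the review covers, the sketch below trains a random-forest classifier to assign NOVA groups from ingredient-list text. The toy data, feature pipeline, and labels are assumptions for illustration, not code or data from any reviewed study.

```python
# Toy sketch: predicting NOVA groups from ingredient lists with a random
# forest. Data and pipeline are illustrative, not from the reviewed studies.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_ingredients = [
    "rolled oats",                                     # NOVA 1: unprocessed
    "wheat flour, water, salt",                        # NOVA 3: processed
    "sugar, glucose syrup, emulsifier, flavouring",    # NOVA 4: ultra-processed
    "milk, live cultures",                             # NOVA 1
    "potato, vegetable oil, maltodextrin, colouring",  # NOVA 4
    "tomatoes, olive oil, salt",                       # NOVA 3
]
train_labels = [1, 3, 4, 1, 4, 3]

# Character n-grams tolerate variant spellings of additive names.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(train_ingredients, train_labels)

test_ingredients = ["corn syrup, modified starch, sweetener", "brown rice"]
test_labels = [4, 1]
print(f1_score(test_labels, model.predict(test_ingredients), average="macro"))
```

In practice, the F1 scores of 0·86 to 0·98 reported in the reviewed studies rest on far larger labelled food databases than this toy example suggests.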
In a global landscape increasingly shaped by technology, artificial intelligence (AI) is emerging as a disruptive force, redefining not only our daily lives but also the very essence of governance. This Element delves deeply into the intricate relationship between AI and the policy process, unraveling how this technology is reshaping the formulation, implementation, and advising of public policies, as well as influencing the structures and actors involved. Policy science has long been based on practical knowledge that guided the actions of policymakers. However, the rise of AI introduces an unprecedented sociotechnical reengineering, changing the way knowledge is produced and used in government. Artificial intelligence in public policy is not about transferring policy to machines but about a fundamental change in the construction of knowledge, driven by a hybrid intelligence that arises from the interaction between humans and machines.
Answers to the question 'what is medical progress?' have always been contested, and any one response is always bound up with contextual ideas of personhood, society, and health. However, the widely held enthusiasm for medical progress escapes more general critiques of progress as a conceptual category. From the intersection of intellectual history, philosophy, and the medical humanities, Vanessa Rampton sheds light on the politics of medical progress and how they have downplayed the tensions between individual and social goods. She examines how a shared consensus about its value gives medical progress vast political and economic capital, revealing who benefits, who is left out, and who is harmed by this narrative. From ancient Greece to artificial intelligence, exploring the origins and ethics of different visions of progress offers valuable insight into how we can make them more meaningful in future. This title is also available as open access on Cambridge Core.
The study aimed to analyse the European experience of investigating criminal offences in the field of official activity and the peculiarities of its adaptation to the Ukrainian context. The study employed a combination of case study methods, formal legal analysis, content analysis, comparative legal analysis, contextual analysis and PESTEL (political, economic, social, technological, environmental and legal) analysis. The analysis of international experience was conducted in the context of European Union member states that have successfully established effective systems for investigating crimes in the public sector, including Germany, France and Poland. The study found that the approaches and strategies implemented in Ukraine have several shortcomings that significantly reduce the effectiveness of criminal investigations, including a widening gap between the number of registered offences and the number of notices of suspicion served. The reason for the identified discrepancy is the lack of coordination between the subjects of criminal investigations, as well as the lack of transparency of the investigation process and accountability of the parties involved. To overcome these shortcomings, the study recommended adapting the German experience of round-the-clock interaction between the subjects of a criminal investigation, which guarantees quick access to information and prompt permission to conduct investigative actions. Adaptation of the French experience in conducting investigations was recommended to ensure cross-control of the investigation subjects and improve the efficiency of their work. The Polish experience of utilizing electronic resources in criminal proceedings was recommended to enhance interdisciplinary cooperation among the parties involved in the investigation. Adopting the best international practices can enhance the detection statistics of criminal offences and increase public confidence in the country’s system for investigating and prosecuting criminal misconduct in office.
This article examines the governance challenges of human genomic data sharing. The analysis builds upon the unique characteristics that distinguish genomic data from other forms of personal data, particularly its dual nature as both uniquely identifiable to individuals and inherently collective, reflecting familial and ethnic group characteristics. This duality informs a tripartite risk taxonomy: individual privacy violations, group-level harms, and bioterrorism threats. Examining regulatory frameworks in the European Union (EU) and China, the article demonstrates how current data protection mechanisms—primarily anonymisation and informed consent—prove inadequate for genomic data governance due to the impossibility of true anonymisation and the limitations of consent-based models in addressing the risks of such sharing. Drawing on the concept of “genomic contextualism,” the article proposes a nuanced framework that incorporates interest balancing, comprehensive data lifecycle management, and tailored technical safeguards. The objective is to protect individuals and underrepresented groups while maximising the scientific and clinical value of genomic data.
This chapter takes the distinctive materiality of the modern stage, the homely table, as a way to place two very different productions into conversation: Forced Entertainment’s Table Top Shakespeare and Annie Dorsen’s Prometheus Firebringer. Although these two productions might trace the arc from the residual (telling a story at a table using small household items) to the emergent (a dialogue between an AI-generated reconstruction of a lost Aeschylus play and a narrative composed of citations), they also dramatize an increasing absorption of the human into the apparatus of performance, a possibly fearsome absorption traced through Dorsen’s work, and touching on a range of other contemporary performances, including Mona Pirnot’s I Love You So Much I Could Die.
This article is a proof-of-concept that archaeologists can now disseminate archaeological topics to the public easily and cheaply through video games in teaching situations or in museum or heritage communication. We argue that small but realistic, interactive, and immersive closed- or open-world 3D video games about cultural heritage with unscripted (but guardrailed) oral conversation can now be created by beginners with free software such as Unreal Engine, Reality Capture, and Convai. Thus, developing tailor-made “archaeogames” is now becoming extremely accessible, empowering heritage specialists and researchers to control audiovisual dissemination in museums and education. This unlocks new uses for 3D photogrammetry, currently mostly used for documentation, and could make learning about the past more engaging for a wider audience. Our case study is a small game with two levels, one built around 3D-scanned Neolithic long dolmens in a forest clearing and an archaeologist and a prehistoric person, who are both conversational AI characters. We later added a more open level with autonomous animals, a meadow, and a cave with a shaman guiding the player around specific cave paintings. We tested the first level on players from different backgrounds whose feedback showed great promise. Finally, we discuss ethical issues and future perspectives for this format.
Modern elections can be conceived as a socio-technical system, as the electoral process in many ways relies on technological solutions: voter information, identification and registration, and the collection, verification, and counting of votes are, in some countries, conducted using innovative technologies. But how do those devices and processes actually become part of official legislation, and how can they finally be deployed during this sensitive and important democratic procedure? Over time, the State of California has developed a robust regulatory ecosystem for integrating innovative technology into the electoral process and is also able to change and modernize its rules and regulations. Although the technologies currently used are more static and hardware-based and usually do not include algorithmic systems, the overall structure of the process may also function as a blueprint for regulating more dynamic algorithm-based or even AI-based technologies.