Chapter 9 draws on the evidence outlined earlier in the book to evaluate a range of possible legal interventions. Structured according to the five potential equality objectives set out previously, the measures include steps to increase the visibility of people with disfigurements in daily life, methods of motivating employers to become appearance-inclusive, and changes to influential institutions outside the employment context. They also include a range of legislative reforms to replace the severe disfigurement provision with a better remedial mechanism, such as the creation of a new protected characteristic of disfigurement or the reformulation of the definition of disability.
AI-based autocontouring products claim to segment organs with accuracy comparable to that of human experts. We compare the geometric and dosimetric performance of three AI-based autocontouring packages (Autocontour 2.5.6, “RF”; Annotate 2.3.1, “TP”; and RT-Mind_AI 1.0, “MM”) in the head and neck region.
Methods:
We generated 14 organ-at-risk (OAR) autocontours on 13 computed tomography (CT) image sets and compared them with clinical (human-generated) contours. Geometric differences were quantified by calculating Dice coefficients and Hausdorff distances. An expert physician compared the autocontours visually with the clinical contours, and two physicians ranked the autocontour sets for accuracy. Dosimetric effects were evaluated by recalculating treatment plans on the autocontoured CT sets.
Results:
RF and TP slightly outperformed MM on geometric metrics (the percentage of OARs with mean Dice coefficients > 0.7 was 57.1% for RF, 64.3% for TP and 50.0% for MM). The physician judged RF and TP contours to be more anatomically accurate, on average, than the manual contours (mean accuracy scores, where lower is better: manual 2.49, RF 2.28, MM 3.24, TP 1.93). The mean scores given to the autocontours by the two physicians were better for RF and TP than for MM (RF 1.86, MM 2.36, TP 1.77). The dosimetric differences were similar for all three programs and were not strongly correlated with the geometric differences.
Conclusions:
The performance of the three autocontouring packages in the head and neck region is similar, with TP and RF slightly outperforming MM. The correlation between geometric and dosimetric metrics is not strong, and dosimetric evaluation is therefore recommended before clinical use of autocontouring software.
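The two geometric metrics used above have standard definitions: the Dice coefficient measures volumetric overlap between two contours, and the Hausdorff distance measures the largest boundary disagreement. A minimal sketch, assuming contours are represented as sets of voxel coordinates (the toy masks below are hypothetical, not data from the study):

```python
import math

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(u, v):
        # Largest distance from any point in u to its nearest point in v.
        return max(min(math.dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

# Toy 2-D example: two 2x2 squares shifted by one voxel.
ref  = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 1), (1, 1), (0, 2), (1, 2)}
print(dice(ref, auto))       # → 0.5 (2 shared voxels out of 4 + 4)
print(hausdorff(ref, auto))  # → 1.0 (worst boundary mismatch is one voxel)
```

In practice such metrics are computed with dedicated libraries on 3-D binary masks, but the definitions are exactly these set operations.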
This Element brings work from the philosophy of technology into conversation with media, religion, and culture studies, as well as digital religion studies, to explore examples of how popular media and emerging technologies are increasingly framed and understood through a distinct range of spiritual myths, metaphors, images, and representations of God. Working with three case studies about how internet memes, popular films, and media coverage of public philosophy link ideas about God and technology, this Element draws attention to common conceptions that describe a perceived relationship between religion and technology today. It synthesizes these discussions and categories and presents them in four distinct models, showing a range of ways in which the relationship between God and technology is commonly depicted. Through this work, the Element seeks to create a platform for scholarly study and critical discourse on technology's religious and spiritual representation in digital and emerging media cultures and contexts.
At the London Tech Week event in early June, Nvidia CEO Jensen Huang praised the UK as the ‘envy of the world’ when it comes to AI researchers, but he also criticised it as the largest AI ecosystem in the world without its own infrastructure. The criticism is somewhat self-serving: when the UK does get around to building out that infrastructure, it’s certain to consist largely of chips sold by Huang’s company. It’s also unsurprising: Huang has been pitching the idea of ‘sovereign AI’ since at least 2023, conscious that nation states are the next deep pockets to target after the hyperscalers and generously funded model builders. In a world where the only real contenders in the race for AI supremacy are the US and China, we look at how the pursuit of AI sovereignty is playing out across the rest of the planet.
Recent developments in artificial intelligence (AI) in general, and Generative AI (GenAI) in particular, have brought about changes across the academy. In applied linguistics, a growing body of work is emerging dedicated to testing and evaluating the use of AI in a range of subfields, spanning language education, sociolinguistics, translation studies, corpus linguistics, and discourse studies, inter alia. This paper explores the impact of AI on applied linguistics, reflecting on the alignment of contemporary AI research with the epistemological, ontological, and ethical traditions of applied linguistics. Through this critical appraisal, we identify areas of misalignment regarding perspectives on knowing, being, and evaluating research practices. The question of alignment guides our discussion as we address the potential affordances of AI and GenAI for applied linguistics as well as some of the challenges that we face when employing AI and GenAI as part of applied linguistics research processes. The goal of this paper is to attempt to align perspectives in these disparate fields and forge a fruitful way ahead for further critical interrogation and integration of AI and GenAI into applied linguistics.
The advent of new technologies, particularly artificial intelligence (AI), has expanded the array of options and enhanced performance in addressing biothreats. This article provides a comprehensive overview of the specific applications of AI in addressing biothreats, aiming to inform and enhance future practices. Research indicates that AI has significantly contributed to infectious disease surveillance and emergency responses, as well as bioterrorism mitigation; despite its limitations, it merits ongoing attention for further study and exploration. The effective deployment of next-generation AI in mitigating biothreats will largely hinge on our ability to engage in continuous experiential learning, acquire high-quality data, refine algorithms, and iteratively update practices. Meanwhile, it is essential to assess the operational risks associated with AI in the context of biothreats and develop robust solutions to mitigate potential risks.
From an “infrastructural gaze,” this chapter examines the penetration of artificial intelligence (AI) in capital markets as a blend of continuity and change in finance. The growing infrastructural dimension of AI originates first from the evolution of algorithmic trading and governance, and second, from its ascent as a “general-purpose technology” within the financial domain. The text discusses the consequences of this “infrastructuralization” of financial AI, considering the micro–macro tension typical of capital accumulation and crisis dynamics. Challenging the commonly espoused notion of AI as a stabilizing force, the analysis underscores its connections with volatile, crisis-prone financialized dynamics. It concludes by outlining potential consequences (unpredictability, operational inefficiency, complexity, further concentration) and (systemic) risks arising from the emergence of AI as a “new” financial infrastructure, particularly those related to biases in data and data commodification, lack of explanation of underlying models, algorithmic collusion, and network effects. The text asserts that a thorough understanding of these hazards can be attained by adopting a perspective that considers the macro/meso/micro connections inherent in infrastructures.
The complexity involved in developing and deploying artificial intelligence (AI) systems in high-stakes scenarios may result in a “liability gap,” under which it becomes unclear who is responsible when things go awry. Scholarly and policy debates about the gap and its potential solutions have largely been theoretical, with little effort put into understanding the general public’s views on the subject. In this chapter, we present two empirical studies exploring laypeople’s perceptions of responsibility for AI-caused harm. First, we study the proposal to grant legal personhood to AI systems and show that it may conflict with laypeople’s policy preferences. Second, we investigate how people divide legal responsibility between users and developers of machines in a variety of situations and find that, while both are expected to pay legal damages, laypeople anticipate developers to bear the largest share of the liability in most cases. Our examples demonstrate how empirical research can help inform future AI regulation and provide novel lines of research to ensure that this transformative technology is regulated and deployed in a more democratic manner.
The adoption of AI is pervasive, often operating behind the scenes and influencing decisions without our explicit awareness. It impacts different aspects of our lives, from personalized recommendations to crucial determinations like hiring decisions or credit approvals. Yet, even to their developers, AI algorithms’ opacity raises concerns about fairness. The biases inherent in our data further complicate matters, as current AI systems often lack moral or logical judgment, relying solely on predictive outputs derived from learned data patterns. Efforts to address fairness in AI models face significant challenges, as different definitions of fairness can lead to conflicting outcomes. Despite attempts to mitigate biases and optimize fairness criteria, achieving a universal and satisfactory solution remains elusive. The multidimensional nature of fairness, with its roots in philosophy and evolving concepts in organizational justice, underscores the complexity of the task. Technology is inherently political, shaped by various societal factors and human biases. Recognizing this, stakeholders must engage in nuanced discussions about the types of fairness relevant in specific contexts and the potential trade-offs involved. Just as in other spheres of decision-making, navigating trade-offs is inevitable, requiring a flexible approach informed by diverse perspectives.
This study acknowledges that achieving fairness in AI is not about prescribing a singular definition or solution but adapting to evolving needs and values. Embracing ambiguity and tension in decision-making can lead to more inclusive outcomes. An interdisciplinary examination of application-specific and consensus-driven frameworks is adopted to consider fairness in AI. By evaluating factors such as application nuances, procedural frameworks, and stakeholder dynamics, this study demonstrates the framework’s expansive potential applicability in understanding and operationalizing fairness by the way of two illustrations.
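The claim above that different definitions of fairness can lead to conflicting outcomes can be made concrete with a toy calculation. The sketch below, using entirely hypothetical labels and predictions for two groups, shows a classifier that satisfies demographic parity (equal positive-prediction rates) while violating equal opportunity (equal true-positive rates):

```python
# Hypothetical data: true labels y and predictions p for groups A and B.
a_y, a_p = [1, 1, 0, 0], [1, 0, 0, 0]
b_y, b_p = [1, 0, 0, 0], [1, 0, 0, 0]

def positive_rate(p):
    """Fraction of individuals predicted positive (demographic parity)."""
    return sum(p) / len(p)

def true_positive_rate(y, p):
    """Fraction of truly positive individuals predicted positive (equal opportunity)."""
    positives = [pi for yi, pi in zip(y, p) if yi == 1]
    return sum(positives) / len(positives)

# Demographic parity holds: both groups have a 25% positive rate.
print(positive_rate(a_p), positive_rate(b_p))               # → 0.25 0.25
# Equal opportunity fails: qualified members of A are selected half as often.
print(true_positive_rate(a_y, a_p), true_positive_rate(b_y, b_p))  # → 0.5 1.0
```

No adjustment to the predictions can satisfy both criteria here without changing outcomes for some individuals, which is the kind of trade-off the discussion above describes.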
History is littered with unfulfilled promises that emerging technologies – from radios to televisions, and from computers to mobile phones – would completely transform teaching and learning. Now the same promises are being made of generative artificial intelligence (AI). This presentation argues that we should not be focusing on educational revolution, but instead on educational evolution. Education is a complex social, cultural, and political endeavour, serving multiple purposes and multiple stakeholders, and technology is just one of many elements in this large ecosystem.
Focusing on the context of language teaching and learning, this presentation discusses what has changed technologically, and suggests what could and should change educationally. It shows that ChatGPT and a range of other generative AI tools can contribute to language and literacy development in a number of ways, but that we need to be wary of their pedagogical, social, and environmental risks. Educators must develop the AI literacy necessary to take a more nuanced view of generative AI, and we must help our students to do the same.
This paper is based on a keynote presentation delivered at the English Australia Conference in Perth, Australia, on 12 September 2024, with some elaborations for the written version alongside minor updates to reflect more recent developments and publications.
The final chapter serves to draw the various strands of the book together, surveying what has been discovered, and expanding on the fundamental arguments of the book. It therefore begins with an analysis of Pinterest, which stands as an emblem of all that literacy means in postdigital times, whether that be sophisticated multimodal practices, durational time, or algorithmic logic. Looking back over the screen lives discussed in the book, including those of the crescent voices and of Samuel Sandor, this chapter crystallizes the personal take on screen lives that the book offers, reiterating the need to ‘undo the digital’ and find the human in, on, with, at, and against screens. It also presents some of the problems scholarship must meet, such as digital inequalities, whether that be in terms of time, awareness, or skill with technology. However, despite the considerable negative forces at work in screen lives which the book has taken care to unravel, this concluding chapter advocates ‘taking the higher ground’ and enacting wonder in interactions with screens.
Moving on to AI and algorithms, the penultimate chapter of the book focuses on the importance of vigilance and criticality when engaging with screens. The influence of AI and algorithms on day-to-day interactions, their inherent potential to steal content, and their tendencies to stir up racism and intolerance all mean that it is becoming increasingly vital for researchers, policymakers, and educators to understand these technologies. This chapter argues that being informed and armed with meta-awareness about AI and algorithmic processes is now key to critical digital literacy. In arguing towards this conclusion, it starts by presenting scholarly perspectives and research on AI and literacy, before turning to Ruha Benjamin and Safiya Umoja Noble’s research into racism in AI and algorithms, including Benjamin’s concept of the ‘New Jim Code’. Crescent voices are invoked to contextualize these ideas in real world experiences with algorithmic culture, where encounters with blackboxed practices and struggles to articulate experiences of algorithmic patterns serve to demonstrate further the importance of finding new constructs for critical literacy that encompass algorithmic logic.
The implementation of the General Data Protection Regulation (GDPR) in the EU, rather than the regulation itself, is holding back technological innovation. The EU’s data protection governance architecture is complex, leading to contradictory interpretations among Member States. This situation is prompting companies of all kinds to halt the deployment of transformative projects in the EU. The case of Meta is paradigmatic: both the UK and the EU broadly have the same regulation (GDPR), but the UK swiftly determined that Meta could train its generative AI model using first-party public data under the legal basis of legitimate interest, while in the EU, the European Data Protection Board (EDPB) took months to issue an Opinion that national authorities must still interpret and implement individually, leading to legal uncertainty. Similarly, the case of DeepSeek has demonstrated how some national data protection authorities, such as the Italian Garante, have moved to ban the AI model outright, while others have opted for investigations. This fragmented enforcement landscape exacerbates regulatory uncertainty and hampers the EU’s competitiveness, particularly for startups, which lack the resources to navigate an unpredictable compliance framework. For the EU to remain competitive in the global AI race, strengthening the EDPB’s role is essential.
Not a day goes by without a new story on the perils of technology. We hear of increasingly clever machines that surpass human capability and comprehension, of tech billionaires imploring each other to stop the ‘out-of-control race’ to produce the most powerful artificial intelligence which poses ‘profound risks to society’, we hear of genetic technologies capable of altering the human genome in ways we cannot predict and a future two-tier humanity consisting of those of us who are genetically enhanced and those who are not. How can we respond to these stories? What should we do politically? By way of exploring these questions (using the UK as the primary example of context), I want to move beyond the usual arguments and legal devices that serve to identify tech developers, and users, as being at fault for individual acts of wrongdoing, recklessness, incompetence or negligence, and ask instead how we might address the broader structural dynamics intertwined with the increasing use of AI and Repro-tech. My argument will be that to take a much sharper structural perspective on these transformative technologies is a vital requirement of contemporary politics.
Humanity’s increasing reliance on AI and robotics is driven by compelling narratives of efficiency in which the human is a poor substitute for the extraordinary computational power of machine learning, the creative competences of generative AI as well as the speed, accuracy and consistency of automation in so many spheres of human activity. Indeed, AI is increasingly becoming the core technological foundation of many contemporary societies. Most thinking on how to manage the downside risks to humanity of this seismic societal shift is set out in a direct fault-based relationship such as the innovative EU AI Act which is by far the most comprehensive political attempt to locate (or deter) those directly responsible for AI-generated harm. I argue that while such approaches are vital for combating injustice exacerbated by AI and robotics, too little thought goes into political approaches to the structural dynamics of AI’s impact on society. By way of example, I examine the UK ‘pro-innovation’ approach to AI governance and explore how it fails to address the structural injustices inherent in increasing AI usage.
The third industrial revolution saw the creation of computers and an increased use of technology in industry and households. We are now in the fourth industrial revolution: cyber, with advances in artificial intelligence, automation and the internet of things. The third and fourth revolutions have had a large impact on health care, shaping how health and social care are planned, managed and delivered, as well as supporting wellness and the promotion of health. This growth has seen the advent of the discipline of health informatics with several sub-specialty areas emerging over the past two decades. Informatics is used across primary care, allied health, community care and dentistry, with technology supporting the primary health care continuum. This chapter explores the development of health informatics as a discipline and how health care innovation, technology, governance and the workforce are supporting digital health transformation.
How exactly is technology transforming us and our worlds, and what (if anything) can and should we do about it? Heidegger already felt this philosophical question concerning technology pressing in on him in 1951, and his thought-full and deliberately provocative response is still worth pondering today. What light does his thinking cast not just on the nuclear technology of the atomic age but also on more contemporary technologies such as genome engineering, synthetic biology, and the latest advances in information technology, so-called “generative AIs” like ChatGPT? These are some of the questions this book addresses, situating the latest controversial technologies in the light of Heidegger's influential understanding of technology as an historical mode of ontological disclosure. In this way, we seek to take the measure of Heidegger's ontological understanding of technology as a constellation of intelligibility with an important philosophical heritage and a dangerous but still promising future.
The promise of artificial intelligence (AI) is increasingly invoked to ‘revolutionize’ practices of global security governance, including in the domain of border control. Legal scholarship tends to confront these changes by foregrounding the rule of law challenges associated with nascent forms of governance by data, and by imposing new regulatory standards. Yet, little is known about how these algorithmic systems are already reconfiguring legal norms and processes, while generating novel security techniques and practices for knowing and governing “risk” before the border. Exploring these questions, this article makes three important contributions to the literature. On an empirical level, it provides an original socio-legal study of the processes constructing and implementing Cerberus – an AI-based risk-analysis platform deployed by the UK Home Office. This analysis provides unique insight into the institutional frictions, legal mediations and emergent governance formations involved in the introduction of this algorithmic bordering system. On a methodological level, the article directly engages with the focus on ‘legal infrastructures’ in this special issue. It uses an original approach (infra-legalities) which follows how legal and infrastructural elements are relationally and materially tied together in practice. Rather than trying to conceptually settle the relation between law and infrastructure – or qualifying law as a sui generis infrastructure – the article traces incipient modes of governmentality and regulatory ordering in which both legal and infrastructural elements are metabolized. In its account of Cerberus, the article analyzes this emergent composition as a dispositif of speculative suspicion. Finally, on a normative and political level, the article signals the significant stakes involved in this algorithmic enactment of risk. 
It shows how prevailing regulatory tropes revolving around ‘debiasing’ and retention of a ‘human in the loop’ offer a limited register of remedy, and work to amplify the reach of Cerberus. We conclude with reflections on critiquing algorithmic systems like Cerberus through the emergent infrastructural relations they enact.
AI brings risks but also opportunities for consumers. When it comes to consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination, the increased use of AI also poses major challenges. This chapter discusses both the challenges and opportunities of AI in the consumer context (Sections 10.2 and 10.3) and provides a brief overview of some of the relevant consumer protection instruments in the EU legal order (Section 10.4). A case study on dark patterns illustrates the shortcomings of the current consumer protection framework more concretely (Section 10.5).
This chapter discusses how AI technologies permeate the media sector. It sketches opportunities and benefits of the use of AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on ethical and legal risks raised by AI-driven media applications: lack of data availability, poor data quality, and bias in training datasets, lack of transparency, risks for the right to freedom of expression, threats to media freedom and pluralism online, and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework which aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.