Accurate radiation dose measurement is crucial for medical intervention and protective actions. Biological dose assessment directly measures radiation-induced molecular and physiological changes, providing information about the absorbed dose and potential health risks. The Korea Institute of Radiological and Medical Sciences (KIRAMS) has performed biological dosimetry using cytogenetic assays since 2010. These assays are used for individual dose estimation in various situations, including occupational exposure, accidental radiation exposure, and health risk assessment of people living near nuclear power plants in Korea. Recent advancements in biological dose assessment methods, such as automated scoring and high-throughput assays, have improved efficiency and enabled more people to undergo dose assessment. KIRAMS continuously explores new methods and targets for biodosimetry to enhance its dose assessment capabilities and, with its expertise and facilities, can help expand worldwide biological dose assessment capacity in response to large-scale radiation exposure accidents.
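As an illustrative aside (not stated in this abstract), cytogenetic dosimetry of this kind typically estimates dose from a pre-established calibration curve: for low-LET radiation, the dicentric chromosome yield Y is commonly modelled as linear-quadratic in dose D, and the dose estimate is obtained by inverting that curve,

Y = C + \alpha D + \beta D^{2}, \qquad \hat{D} = \frac{-\alpha + \sqrt{\alpha^{2} + 4\beta\,(Y - C)}}{2\beta},

where C is the background yield and \alpha, \beta are laboratory-specific calibration coefficients; the particular curves and assay protocols used by KIRAMS are not detailed here.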
Chapter 10 predicts the “future” of chilling effects – which today looks darker and more dystopian than ever in light of the proliferation of new forms of artificial intelligence, machine learning, and automation technologies in society. The author introduces a new term, “superveillance”, to describe AI-driven systems of automated legal and social norm enforcement that are likely to cause mass societal chilling effects on an unprecedented scale. The author also argues that today's chilling effects enable this more oppressive future and proposes comprehensive law and public policy reforms and solutions to stop it.
This chapter examines some ways in which human agency might be affected by a transition from legal regulation to regulation by AI. To do that, it elucidates an account of agency, distinguishing it from related notions like autonomy, and argues that this account of agency is both philosophically respectable and consistent with common sense. With that account of agency in hand, the chapter then examines two different ways – one beneficial, one baleful – in which agency might be impacted by regulation by AI, focussing on some agency-related costs and benefits of transforming private law from its current rule-based regulatory form to an AI-enabled form of technological management. It concludes that there are few grounds to be optimistic about the effects of such a transition and good reason to be cautious.
Monsters may frighten but also fascinate us in their weird and unfamiliar ways. As Gramsci once observed, periods of radical transformation are also times of monsters. AI fits the description. It is a bewildering entity, consisting of hardware and software, depending on infrastructures that need huge amounts of energy and water. It defies clear definition, yet seeps into every corner of our lives. Big Tech warns of existential risks while pursuing Artificial General Intelligence (AGI). However, the real challenges today lie in how AI threatens to substitute rather than augment human capabilities.
This essay examines the deployment of an AI-based interdisciplinary approach. It has proven spectacularly successful, as exemplified by AlphaFold2's breakthrough in protein folding. This approach operates frictionlessly, combining knowledge domains with remarkable efficiency and speed. It seems to vindicate a technocratic dream of problem-solving without the messiness and time needed for human deliberation. Yet, when this artificial interdisciplinarity enters the social world, it encounters what it seeks to eliminate: friction.
Friction, however, is not an obstacle to overcome but an essential feature of human existence. The physical world requires friction for movement; the social world needs it for creativity, conflict resolution, and meaningful cooperation. Certainly, too much friction can bring havoc, and too little can lead to a standstill. But as AI continues its co-evolutionary trajectory with humanity, we must resist the seductive promise of a frictionless world run by automated efficiency.
Instead, we need to cultivate a humanistic culture of AI interdisciplinarity: one that bridges sciences and humanities while preserving human curiosity, deliberation, and epistemic diversity. Bringing friction back means taking the time to reconsider shared goals, acknowledging conflicts, and maintaining spaces for genuine human creativity. Only by embracing friction can we ensure that AI augments rather than diminishes what makes us human.
Radical political economy focuses on capitalism's capacity for reproduction. Social reproduction refers to how human beings reproduce their existence. Globalization has seen a vast expansion of surplus labor, or surplus humanity. The levels of worldwide inequality are unprecedented, as is the extent of mass deprivation and precarity. Transnational capital has turned to new forms of unpaid labor to expand accumulation, helping to generate a worldwide crisis in gender relations. A new round of global enclosures is underway that includes land grabs around the world. The transnational capitalist class (TCC) is turning toward greater automation in both the traditional core and the traditional periphery, suggesting a shift toward the production of relative rather than absolute surplus value. The global mining industry, and the case of the Congo in particular, illustrate these transformations. As artificial intelligence spreads, professional work and knowledge workers also face deskilling, automation, and increased precariousness. Capitalist states could ameliorate the crisis through redistribution and regulatory policies, but they are constrained by the structural power of transnational capital.
The ability to modify designs, personalize nutrition, and improve food sustainability makes 3D food printing (3DFP) an exciting emerging technology. The complex chemistry and mechanics of food materials make it difficult to consistently print designs of different shapes. This research uses two methods to assess printed food fidelity: manual image analysis and automated image analysis with a custom-developed algorithm. Fidelity based on printed area was measured for three overhang designs (0°, 30°, and 60°) and three food ink mixtures. The manual method provided a baseline for analysis by comparing printed images with CAD images. Both methods showed consistent results, with only ±3% differences in the measured printed design areas. The computational method offers advantages in efficiency and bias reduction, making it well suited for assessing the fidelity of printed designs.
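As a minimal illustrative sketch (not the authors' algorithm, which is not detailed in this abstract), an area-based fidelity check can be approximated in Python by binarising a top-down photograph of the print and a rendered CAD reference, then comparing foreground areas; the file names and threshold value below are hypothetical, and both images are assumed to share the same scale and alignment.

    import cv2  # OpenCV for image loading and thresholding

    def printed_area_fidelity(print_photo_path, cad_render_path, thresh=127):
        # Load both images as greyscale
        printed = cv2.imread(print_photo_path, cv2.IMREAD_GRAYSCALE)
        cad = cv2.imread(cad_render_path, cv2.IMREAD_GRAYSCALE)
        # Binarise into foreground/background masks (threshold value is an assumption)
        _, printed_mask = cv2.threshold(printed, thresh, 255, cv2.THRESH_BINARY)
        _, cad_mask = cv2.threshold(cad, thresh, 255, cv2.THRESH_BINARY)
        # Area = number of foreground pixels in each mask
        printed_area = cv2.countNonZero(printed_mask)
        cad_area = cv2.countNonZero(cad_mask)
        if cad_area == 0:
            raise ValueError("CAD reference mask is empty")
        # Fidelity expressed as printed area relative to the design area, in percent
        return 100.0 * printed_area / cad_area

    # Hypothetical usage for one overhang design:
    # fidelity = printed_area_fidelity("print_30deg.png", "cad_30deg.png")

This mirrors the abstract's description of comparing printed images against CAD images; how the authors actually segment and measure the printed area is not specified.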
The offshoring-fuelled growth of the Central and Eastern European business services sector gave rise to shared service centres (SSCs) – quasi-autonomous entities performing routine-intensive tasks for the central organisation. The advent of technologies such as intelligent process automation, robotic process automation, and artificial intelligence jeopardises SSCs’ employment model, necessitating workers’ skills adaptation. The study challenges the deskilling hypothesis and reveals that automation in Polish SSCs is conducive to upskilling and worker autonomy. Drawing on 31 in-depth interviews, we highlight the negotiated nature of automation processes, shaped by interactions between headquarters, SSCs, and their workers. Workers actively participated in automation processes, eliminating the most mundane tasks. This resulted in upskilling, higher job satisfaction, and empowerment. Yet this outcome depends heavily on the fact that automation is triggered by labour shortages, which limit the expansion of SSCs. This situation encourages companies to leverage the specific expertise entrenched in their existing workforce. The study underscores the importance of fostering employee-driven automation and upskilling initiatives for overall job satisfaction and quality.
Having looked at how firms develop innovations and bring them to market, and the role of entrepreneurs and states in shaping those processes, we turn now to the question of what innovations do to society. Innovations, after all, do not just concern the firms that create them. We begin at the most macro of macroscopic levels with Perez’s paper on technology bubbles, asking how societies are transformed through successive waves of technological revolution and what happens as those waves flood over society. Staying at the macroscopic perspective with Zuboff’s paper on Big Other, we look at how technological change transforms capitalist dynamics and ushers in both new logics of accumulation and new forms of exploitation. Then, we move to the question that the popular press tends to phrase as “Will robots take our jobs?” as we look at the history and future of workplace automation with Autor’s paper and Bessen’s analysis of the conditions that lead to widespread, as opposed to highly concentrated, societal gains from technology.
The chapter explores the implications of the growing integration of AI and robotics into the workforce, questioning whether it will lead to a golden age or a dystopian era. It emphasizes the transformative impact of these technologies on various industries, with robots taking over tasks from manufacturing and distribution to healthcare and caregiving. The benefits include increased productivity, efficiency, safety, flexibility, cost savings, improved precision, and enhanced quality of life. However, concerns arise regarding potential job displacement and loss, particularly for low-skilled and low-paid workers. The chapter discusses the likelihood of robots replacing up to half of all jobs in the coming decades, posing challenges for reemployment and skills adaptation. The issue of inequality is raised, highlighting that low-skilled workers may be disproportionately affected. The chapter also touches on the societal disruptions, identity crises, and potential resistance that could emerge from widespread automation. Various policy prescriptions are proposed to address these challenges, including investing in education, training, and reskilling; implementing a universal basic income (UBI); guaranteeing jobs; exploring job-sharing and reduced hours; and considering a tax on robot labor.
Commercial cattle slaughter operations have shown an increasing trend towards automation, with the aim of improving animal welfare, product quality and efficiency. Several cattle slaughter plants have introduced mechanical rump pushers (RP) prior to the entrance of the stun box to reduce human-animal interaction and facilitate a smoother transition from the raceway to the stun box. Presently, there are no data regarding the use of RPs in commercial slaughter environments operating at 40 cattle per hour. Therefore, this study observed normal operations at a UK slaughter plant with an RP installed and assessed the level of coercion required to enter the RP, the use of the RP, cattle behaviour inside the RP and carcase bruising. The RP was used on 267 of the 815 cattle observed (32.8%) and was more likely to be used on dairy cattle and those who received a higher coercion score when entering the RP. Overall, 60 cattle (7.4%) required the highest coercion score and four (0.49%) required the use of the electric goad. Inside the RP, eleven animals slipped (1.8%) and ten vocalised (1.6%), although no incidences were directly associated with RP use. However, increased time restrained in the RP was significantly associated with more gate slams into the RP entrance gate. The use of the RP was not significantly associated with carcase bruising. These results are encouraging, and although it cannot be concluded that the presence of an RP improves cattle welfare at slaughter, the use of automation within cattle slaughter facilities warrants further investigation.
This chapter addresses the implications of the 100-year-life for the future of work and the law of work. To begin with, longer lives will pose severe actuarial challenges to all existing strategies for ensuring retirement income security. At least without a dramatic (and probably unjustified) shift of social welfare expenditures into support of nonworking seniors, most people will probably have to work longer, if they are healthy and able, to generate enough income for retirement. The chapter then turns to how the law of work might have to change to accommodate longer working lives. Leaving aside the law of age discrimination (addressed in another chapter), longer working lives will recast longstanding debates over job security and will highlight the need to make work and work schedules less demanding, especially as workers age. This chapter will explore these challenges and how demographic changes will intersect with changing technology and its impact on the nature and number of jobs.
This study examines the association between firm-level investments in automation technologies and employment outcomes, drawing on a panel dataset of approximately 10,450 Italian firms. We focus on the proliferation of non-standard labour contracts introduced by labour market reforms in the 2000s, which facilitated external labour flexibility. Our findings reveal a positive relationship between automation investments and the adoption of these flexible labour arrangements. Guided by a conceptual framework, we interpret this result as evidence of complementarity between automation technologies – viewed as flexible capital – and non-standard contractual arrangements – viewed as flexible labour. This complementarity is essential for enhancing operational flexibility, a critical driver of firm performance in competitive market environments. From a policy perspective, our analysis highlights the importance of measures that protect labour without undermining the efficiency gains enabled by automation.
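As a purely illustrative sketch of the kind of specification such a panel analysis might employ (the study's actual model is not given in this abstract), one could relate a firm's use of non-standard contracts to its automation investment with firm and year fixed effects:

NSC_{it} = \beta\,\text{AutoInv}_{it} + \gamma' X_{it} + \mu_i + \tau_t + \varepsilon_{it},

where NSC_{it} is firm i's share of non-standard labour contracts in year t, AutoInv_{it} its investment in automation technologies, X_{it} a vector of firm-level controls, and \mu_i and \tau_t firm and year fixed effects; under this reading, a positive estimated \beta corresponds to the complementarity between flexible capital and flexible labour that the authors describe.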
With the increasing accessibility of tools such as ChatGPT, Copilot, DeepSeek, Dall-E, and Gemini, generative artificial intelligence (GenAI) has been positioned as a potential time-saving research tool, especially for synthesising evidence. Our objective was to determine whether GenAI can assist with evidence synthesis by assessing its performance in terms of accuracy, error rates, and time savings compared with the traditional expert-driven approach.
Methods
To systematically review the evidence, we searched five databases on 17 January 2025, synthesised outcomes reporting on the accuracy, error rates, or time taken, and appraised the risk-of-bias using a modified version of QUADAS-2.
Results
We identified 3,071 unique records, 19 of which were included in our review. Most studies had a high or unclear risk-of-bias in Domain 1A: review selection, Domain 2A: GenAI conduct, and Domain 1B: applicability of results. When used for (1) searching, GenAI missed 68% to 96% (median = 91%) of studies; (2) screening, it made incorrect inclusion decisions in 0% to 29% of cases (median = 10%) and incorrect exclusion decisions in 1% to 83% of cases (median = 28%); (3) data extraction, it made incorrect extractions in 4% to 31% of cases (median = 14%); and (4) risk-of-bias assessment, it made incorrect assessments in 10% to 56% of cases (median = 27%).
Conclusion
Our review shows that the current evidence does not support GenAI use in evidence synthesis without human involvement or oversight. However, for most tasks other than searching, GenAI may have a role in assisting humans with evidence synthesis.
This chapter roots the authors' insights about automated legal guidance in a broader examination of why and how to address the democracy deficit in administrative law. As this chapter contemplates the future of agency communications, it also explores in greater detail the possibility that technological developments may allow government agencies not only to explain the law to the public using automated tools but also to automate the legal compliance obligations of individuals. While automated legal compliance raises serious concerns, recent examples reveal that it may soon become a powerful tool that agencies apply broadly under the justification of administrative efficiency. As this chapter argues, the lessons learned from our study of automated legal guidance are critical to maintaining values like transparency and legitimacy as automated compliance expands as a result of perceived benefits like efficiency.
The Conclusion emphasizes the growing importance of automated legal guidance tools across government agencies. It crystallizes the insight that automated legal guidance tools reflect a trade-off between government agencies representing the law accurately and presenting it in accessible and understandable terms. While automated legal guidance tools enable agencies to reach more members of the public and provide them with quick and easy explanations of the law, these quick and easy explanations sometimes obscure what the law actually is. The Conclusion acknowledges and accepts the importance of automated legal guidance to the future of governance and, especially in light of this acknowledgement, recommends that legislators and agency officials adopt the policy recommendations presented in this book.
As Chapter 4 demonstrated, automated legal guidance often enables the government to present complex law as though it is simple without actually engaging in simplification of the underlying law. While this approach offers advantages in terms of administrative efficiency and ease of use by the public, it also causes the government to present the law as simpler than it is, leading to less precise advice and potentially inaccurate legal positions. As the use of automated legal guidance by government agencies is likely to grow in the future, a number of policy interventions are needed. This chapter offers multiple detailed policy recommendations for federal agencies that have introduced, or may introduce, chatbots, virtual assistants, and other automated tools to communicate the law to the public. Our recommendations are organized into five general categories: (1) transparency; (2) reliance; (3) disclaimers; (4) process; and (5) accessibility, inclusion, and equity.
The Introduction presents an overview of the use of automated legal guidance by government agencies. It offers examples of chatbots, virtual assistants, and other online tools in use across US federal government agencies and shows how the government is committed to expanding their application. The Introduction sets forth some of the critical features of automated legal guidance, including its tendency to make complex aspects of the law seem simple. The Introduction previews how automated legal guidance promises to increase access to complex statutes and regulations. However, the Introduction cautions that there are underappreciated costs of automated legal guidance, including that its simplification of statutes and regulations is more likely to harm members of the public who lack access to legal counsel than high-income and wealthy individuals. The Introduction provides a roadmap for the remainder of the book.
This chapter sets forth how government agencies are using artificial intelligence to automate their delivery of legal guidance to the public. The chapter first explores how many federal agencies have a duty not only to enforce the law but also to serve the public, including by explaining the law and helping the public understand how it applies. Agencies must contend with expectations that they will provide customer service experiences akin to those provided by the private sector. At the same time, government agencies lack sufficient resources. The complexity of statutes and regulations significantly compounds this challenge for agencies. As this chapter illustrates, the federal government has begun using virtual assistants, chatbots, and related technology to respond to tens of millions of inquiries from the public about the application of the law.
This chapter illuminates some of the hidden costs of the federal agencies’ use of automated legal guidance to explain the law to the public. It highlights the following features of these tools: they make statements that deviate from the formal law; they fail to provide notice to users about the accuracy and legal value of their statements; and they induce reliance in ways that impose inequitable burdens among different user populations. The chapter also considers how policymakers should weigh these costs against the benefits of automated legal guidance when contemplating whether to adopt, or increase, agencies’ use of these tools.