The structure of society is heavily dependent upon its means of producing and distributing information. As its methods of communication change, so does a society. In Europe, for example, the invention of the printing press created what we now call the public sphere. The public sphere, in turn, facilitated the appearance of ‘public opinion’, which made possible wholly new forms of politics and governance, including the democracies we treasure today. Society is presently in the midst of an information revolution. It is shifting from analogue to digital information, and it has invented the Internet as a nearly universal means for distributing digital information. Taken together, these two changes are profoundly affecting the organization of our society. With frightening rapidity, these innovations have created a wholly new digital public sphere that is both virtual and pervasive.
A reflective analysis is presented on the potential added value that actuarial science can contribute to the field of health technology assessment. This topic is discussed based on the experience of several experts in health actuarial science and health economics. Different points are addressed, such as the role of actuarial science in health, actuarial judgment, data inputs and their quality, modeling methodologies and the use of decision-analytic models in the age of artificial intelligence, and the development of innovative pricing and payment models.
Large Language Models (LLMs) raise challenges that can be examined through a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. Regarding LLMs, the main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern risks and harms generated through the use of LLMs. The epistemological approach examines how LLMs produce outputs, information, knowledge, and a representation of reality in ways that differ from those followed by human beings. To face the impact of LLMs, our paper contends that the epistemological approach should be examined as a priority: identifying the risks and opportunities of LLMs also depends on considering how this form of artificial intelligence works from an epistemological point of view. To this end, our analysis compares the epistemology of LLMs with that of law, in order to highlight at least five issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence and (v) relation to truth and accuracy. The epistemological analysis of these issues, preliminary to the normative one, lays the foundations to better frame the challenges and opportunities arising from the use of LLMs.
The paper examines the legal regulation and governance of “generative artificial intelligence” (AI), “foundation AI,” “large language models” (LLMs), and the “general-purpose” AI models of the AI Act. Attention is drawn to two potential sorcerer’s apprentices, namely, in the spirit of J. W. Goethe’s poem, people who were unable to control a situation they created. The focus is on developers and producers of technologies, such as LLMs, that bring about risks of discrimination and information hazards, malicious uses and environmental harms; furthermore, the analysis dwells on the normative attempt of European Union legislators to govern misuses and overuses of LLMs with the AI Act. Scholars, private companies, and organisations have stressed the limits of this normative attempt. In addition to issues of competitiveness and legal certainty, bureaucratic burdens and standard development, the threat is the over-frequent revision of the law to keep pace with advancements of technology. The paper illustrates how this threat has accompanied the AI Act since its inception and recommends ways in which the law can address the challenges of technological innovation without being continuously amended.
Recent advancements in Earth system science have been marked by the exponential increase in the availability of diverse, multivariate datasets characterised by moderate to high spatio-temporal resolutions. Earth System Data Cubes (ESDCs) have emerged as one suitable solution for transforming this flood of data into a simple yet robust data structure. ESDCs achieve this by organising data into an analysis-ready format aligned with a spatio-temporal grid, facilitating user-friendly analysis and diminishing the need for extensive technical data processing knowledge. Despite these significant benefits, the completion of the entire ESDC life cycle remains a challenging task. Obstacles are not only of a technical nature but also relate to domain-specific problems in Earth system research. There exist barriers to realising the full potential of data collections in light of novel cloud-based technologies, particularly in curating data tailored for specific application domains. These include transforming data to conform to a spatio-temporal grid with minimum distortions and managing complexities such as spatio-temporal autocorrelation issues. Addressing these challenges is pivotal for the effective application of Artificial Intelligence (AI) approaches. Furthermore, adhering to open science principles for data dissemination, reproducibility, visualisation, and reuse is crucial for fostering sustainable research. Overcoming these challenges offers a substantial opportunity to advance data-driven Earth system research, unlocking the full potential of an integrated, multidimensional view of Earth system processes. This is particularly true when such research is coupled with innovative research paradigms and technological progress.
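To make the idea of an analysis-ready, grid-aligned data structure concrete, the following minimal sketch in R builds a toy spatio-temporal cube as a three-dimensional array and reduces it along its dimensions. The dimension names, sizes, and variable are illustrative assumptions for this sketch, not the paper’s actual data or tooling.

```r
# Minimal sketch of an analysis-ready spatio-temporal "data cube" in base R.
# Grid sizes, coordinates, and the variable are hypothetical.
set.seed(42)
lon  <- as.character(seq(10, 12, length.out = 5))  # hypothetical longitudes
lat  <- as.character(seq(50, 52, length.out = 4))  # hypothetical latitudes
time <- as.character(2018:2023)                    # hypothetical yearly steps

# One variable (e.g., a vegetation index) on a regular lon x lat x time grid
cube <- array(rnorm(5 * 4 * 6, mean = 0.5, sd = 0.1),
              dim = c(5, 4, 6),
              dimnames = list(lon = lon, lat = lat, time = time))

# Because all data share one grid, typical analyses become array reductions:
temporal_mean <- apply(cube, c("lon", "lat"), mean)  # per-pixel mean over time
spatial_mean  <- apply(cube, "time", mean)           # domain mean per time step
print(round(spatial_mean, 3))
```

The point of the sketch is the one that the abstract makes: once data conform to a shared spatio-temporal grid, analysis reduces to simple, uniform operations over the cube rather than bespoke per-dataset processing.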
Edge AI is the fusion of edge computing and artificial intelligence (AI). It promises responsiveness, privacy preservation, and fault tolerance by moving parts of the AI workflow from centralized cloud data centers to geographically dispersed edge servers, which are located at the source of the data. The scale of edge AI can vary from simple data preprocessing tasks to the whole machine learning stack. However, most edge AI implementations so far are limited to urban areas, where the infrastructure is highly dependable. This work instead focuses on a class of applications involved in environmental monitoring in remote, rural areas such as forests and rivers. Such applications face additional challenges, including proneness to failure and limited access to the electricity grid and communication networks. We propose neuromorphic computing as a promising solution to the energy, communication, and computation constraints in such scenarios and identify directions for future research in neuromorphic edge AI for rural environmental monitoring. The proposed directions are distributed model synchronization, edge-only learning, aerial networks, spiking neural networks, and sensor integration.
With recent leaps in large language model technology, conversational AI offers increasingly sophisticated interactions. But is it fair to say that such systems can offer authentic relationships, perhaps even assuage the loneliness epidemic? In answering this question, this essay traces the history of AI authenticity, historically shaped by cultural imaginations of intelligent machines and human communication. The illusion of human-like interaction with AI has existed since at least the 1960s, when the term “Eliza effect” was coined after the first chatbot, Eliza. Termed a “crisis of authenticity” by sociologist Sherry Turkle, the Eliza effect has stood for fears that AI interactions can undermine real human connections and leave users vulnerable to manipulation. More recently, however, researchers have begun investigating less anthropomorphic definitions of authenticity. The expectation, and perhaps fantasy, of authenticity stems, in turn, from a much longer history of technologically mediated communications, dating back to the invention of the telegraph in the nineteenth century. Read through this history, the essay concludes that AI relationships might not mimic human interactions but must instead acknowledge the artifice of AI, offering a new form of companionship in our mediated, often lonely, times.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in a way that raises significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying as such, in this book, I argue that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, this book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, argues that it is inadequate to counter this threat, and identifies new pathways forward.
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities for accessing remedies, first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas demanding further clarification in order to fill the remedial gaps.
This Element highlights the use within archaeology of classification methods developed in the fields of chemometrics, artificial intelligence, and Bayesian statistics. These methods work in both high- and low-dimensional settings and often yield better results than traditional approaches. Rather than taking a theoretical approach, the Element provides examples of how to apply these methods to real data, using lithic and ceramic archaeological materials as case studies. A detailed explanation of how to process the data in R (The R Project for Statistical Computing), together with the respective code, is also provided.
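As a flavour of the kind of supervised-classification workflow the Element describes, here is a minimal, hedged sketch in R. It is not the Element’s own code: the built-in iris dataset stands in for archaeological compositional measurements, and linear discriminant analysis (a classic low-dimensional classifier) stands in for the broader family of chemometric, AI, and Bayesian methods discussed.

```r
# Minimal classification sketch in R; iris is a placeholder for real
# lithic/ceramic compositional data (an illustrative assumption).
library(MASS)  # for lda()

set.seed(1)
idx   <- sample(nrow(iris), 0.7 * nrow(iris))  # 70/30 train-test split
train <- iris[idx, ]
test  <- iris[-idx, ]

# Fit a linear discriminant classifier on the training portion
fit  <- lda(Species ~ ., data = train)
pred <- predict(fit, newdata = test)$class

# Held-out accuracy as a simple check of classifier performance
accuracy <- mean(pred == test$Species)
print(round(accuracy, 3))
```

The same train/predict/evaluate skeleton carries over when the placeholder data and classifier are swapped for the archaeological datasets and higher-dimensional methods the Element covers.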
Several African countries are developing artificial intelligence (AI) strategies and ethics frameworks with the goal of accelerating responsible AI development and adoption. However, many of these governance actions are emerging without consideration for their suitability to local contexts, including whether the proposed policies are feasible to implement and what their impact may be on regulatory outcomes. In response, we suggest that there is a need for more explicit policy learning, by looking at existing governance capabilities and experiences related to algorithms, automation, data, and digital technology in other countries and in adjacent sectors. From such learning, it will be possible to identify where existing capabilities may be adapted or strengthened to address current AI-related opportunities and risks. This paper explores the potential for learning by analysing existing policy and legislation in twelve African countries across three main areas: strategy and multi-stakeholder engagement, human dignity and autonomy, and sector-specific governance. The findings point to a variety of existing capabilities that could be relevant to responsible AI: from existing model management procedures used in banking and air quality assessment, to efforts aimed at enhancing public sector skills and transparency around public–private partnerships, and the way in which existing electronic transactions legislation addresses accountability and human oversight. All of these point to the benefit of wider engagement on how existing governance mechanisms are working, and on where AI-specific adjustments or new instruments may be needed.
This paper questions how the drive toward introducing artificial intelligence (AI) into all facets of life might endanger certain African ethical values. It argues that two primary values prized in nearly all versions of sub-Saharan African ethics (available in the literature) might sit in direct opposition to the fundamental motivation of corporate adoption of AI: Afro-communitarianism grounded on relationality, and human dignity grounded on a normative conception of personhood. The paper offers a unique perspective on AI ethics from an African standpoint, as there is little to no material in the literature that discusses the implications of AI for African ethical values. The paper is divided into two broad sections, focused on (i) describing the values at risk from AI and (ii) showing how the current use of AI undermines these values. In conclusion, I suggest how to prioritize these values in working toward the establishment of an African AI ethics framework.
Brain–computer interfaces (BCIs) exemplify a dual-use neurotechnology with significant potential in both civilian and military contexts. While BCIs hold promise for treating neurological conditions such as spinal cord injuries and amyotrophic lateral sclerosis in the future, military decision-makers in countries such as the United States and China also see their potential to enhance combat capabilities. Some predict that U.S. Special Operations Forces (SOF) will be early adopters of BCI enhancements. This article argues for a shift in focus: the U.S. Special Operations Command (SOCOM) should pursue translational research of medical BCIs for treating severely injured or ill SOF personnel. After two decades of continuous military engagement and ongoing high-risk operations, SOF personnel face unique injury patterns, both physical and psychological, which BCI technology could help address. The article identifies six key medical applications of BCIs that could benefit wounded SOF members and discusses the ethical implications of involving SOF personnel in translational research related to these applications. Ultimately, the article challenges the traditional civilian-military divide in neurotechnology, arguing that by collaborating more closely with military stakeholders, scientists can not only help individuals with medical needs, including servicemembers, but also play a role in shaping the future military applications of BCI technology.
In the literature, there are polarized views regarding the capabilities of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values are not peculiar to inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies are designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework, and validating via focus group discussion, this study revealed novel findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
The EUMigraTool (EMT) provides short-term and mid-term predictions of asylum seekers arriving in the European Union, drawing on multiple sources of public information and with a focus on human rights. After three years of development, it has been tested in real environments by 17 NGOs working with migrants in Spain, Italy, and Greece.
This paper will first describe the functionalities, models, and features of the EMT. It will then analyze the main challenges and limitations of developing a tool for non-profit organizations, focusing on issues such as (1) the validation process and accuracy, and (2) the main ethical concerns, including the difficulty of designing an exploitation plan when the main target group is NGOs.
The overall purpose of this paper is to share the results and lessons learned from the creation of the EMT, and to reflect on the main elements that need to be considered when developing a predictive tool for assisting NGOs in the field of migration.
In the mid-to-late 19th century, much of Africa was under colonial rule, with the colonisers exercising power over the labour and territory of Africa. Yet although Africa has largely gained independence from traditional colonial rule, another form of colonial rule still dominates the African landscape. The similarity between these forms of colonialism lies in the power dominance exhibited by Western technological corporations, much like that of the traditional colonialists. In this digital age, digital colonialism manifests in Africa through the control and ownership of critical digital infrastructure by foreign entities, leading to unequal data flows and asymmetrical power dynamics. This usually occurs under the guise of foreign corporations providing technological assistance to the continent.
Drawing references from the African continent, this article examines the manifestations of digital colonialism and the factors that aid its occurrence on the continent. It further explores how digital colonialism manifests in technologies such as Artificial Intelligence (AI), while analysing the occurrence of data exploitation on the continent and the need for African ownership in cultivating the continent’s digital future. The paper also recognises the benefits linked to the use of AI and advocates a cautious approach toward the deployment of AI tools in Africa. It concludes by recommending the implementation of laws, regulations, and policies that guarantee the inclusiveness, transparency, and ethical values of new technologies, with strategies toward achieving a decolonised digital future on the African continent.
Generative artificial intelligence (GenAI) has gained significant popularity in recent years. It is being integrated into a variety of sectors for its abilities in content creation, design, research, and many other functionalities. The capacity of GenAI to create new content—ranging from realistic images and videos to text and even computer code—has caught the attention of both the industry and the general public. The rise of publicly available platforms that offer these services has also made GenAI systems widely accessible, contributing to their mainstream appeal and dissemination. This article delves into the transformative potential and inherent challenges of incorporating GenAI into the domain of judicial decision-making. The article provides a critical examination of the legal and ethical implications that arise when GenAI is used in judicial rulings and their underlying rationale. While the adoption of this technology holds the promise of increased efficiency in the courtroom and expanded access to justice, it also introduces concerns regarding bias, interpretability, and accountability, thereby potentially undermining judicial discretion, the rule of law, and the safeguarding of rights. Around the world, judiciaries in different jurisdictions are taking different approaches to the use of GenAI in the courtroom. Through case studies of GenAI use by judges in jurisdictions including Colombia, Mexico, Peru, and India, this article maps out the challenges presented by integrating the technology in judicial determinations, and the risks of embracing it without proper guidelines for mitigating potential harms. Finally, this article develops a framework that promotes a more responsible and equitable use of GenAI in the judiciary, ensuring that the technology serves as a tool to protect rights, reduce risks, and ultimately, augment judicial reasoning and access to justice.
In this article, I will consider the moral issues that might arise from the possibility of creating more complex and sophisticated autonomous intelligent machines, or simply artificial intelligence (AI), that would have the human capacity for moral reasoning, judgment, and decision-making, and from the possibility of humans enhancing their moral capacities beyond what is considered normal for humanity. These two possibilities create an urgent need for ethical principles that could be used to analyze the moral consequences of the intersection of AI and transhumanism. To this end, I deploy personhood-based relational ethics grounded on Afro-communitarianism as an African ethical framework to evaluate some of the moral problems at the intersection of AI and transhumanism. In doing so, I propose some Afro-ethical principles for research and policy development in AI and transhumanism.