The structure of society is heavily dependent upon its means of producing and distributing information. As its methods of communication change, so does a society. In Europe, for example, the invention of the printing press created what we now call the public sphere. The public sphere, in turn, facilitated the appearance of ‘public opinion’, which made possible wholly new forms of politics and governance, including the democracies we treasure today. Society is presently in the midst of an information revolution. It is shifting from analogue to digital information, and it has invented the Internet as a nearly universal means for distributing digital information. Taken together, these two changes are profoundly affecting the organization of our society. With frightening rapidity, these innovations have created a wholly new digital public sphere that is both virtual and pervasive.
This Element offers a concise introduction to the theory and practice of narrative creativity. It distinguishes narrative creativity from ideation, divergent thinking, design thinking, brainstorming, and other current approaches to explaining and/or cultivating creativity. It explains the biological and neuroscientific origins of narrative creativity. It provides practical exercises, developed and tested in hundreds of classrooms and businesses, and validated independently by the US Army. It details how narrative creativity contributes to technological innovation, scientific progress, cultural growth, and psychological wellbeing. It describes how narrative creativity can be assessed. This title is also available as Open Access on Cambridge Core.
This paper focuses on the epistemic situation one faces when using a Large Language Model-based chatbot such as ChatGPT: when reading the output of the chatbot, how should one decide whether or not to believe it? By surveying strategies we use with other, more familiar sources of information, I argue that chatbots present a novel challenge. This makes the question of how one could trust a chatbot especially vexing.
This paper traces the legislative process of the EU Artificial Intelligence Act (AI Act) to provide an empirical and critical account of the choices made in its formation. It specifically focuses on the dynamics that led to increasing or lowering fundamental rights protection in the final text and their implications for fundamental rights. Adopting process-tracing methods, the paper sheds light on the institutional differences and agreements behind this landmark legislation. It then analyses the implications of political compromise for fundamental rights protection. The core message it aims to convey is to read the AI Act with its institutional setting and political context in mind. As this paper shows, the different policy aims and mandates of the three EU institutions, compounded by the unprecedented level of redrafting and the short time needed to reach a political agreement, influenced the formulation of the AI Act. Looking forward, the paper points to the role of implementation, enforcement and judicial interpretation in enhancing the protection of fundamental rights in the age of AI.
Environmental data science for spatial extremes has traditionally relied heavily on max-stable processes. Even though the popularity of these models has perhaps peaked with statisticians, they are still regarded as the “state of the art” in many applied fields. However, while the asymptotic theory supporting the use of max-stable processes is mathematically rigorous and comprehensive, we think that it has also been overused, if not misused, in environmental applications, to the detriment of more purposeful and meticulously validated models. In this article, we review the main limitations of max-stable process models, and strongly argue against their systematic use in environmental studies. Alternative solutions based on more flexible frameworks using the exceedances of variables above appropriately chosen high thresholds are discussed, and an outlook on future research is given. We consider the opportunities offered by hybridizing machine learning with extreme-value statistics, highlighting seven key recommendations moving forward.
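The threshold-exceedance framework mentioned above is commonly implemented as a peaks-over-threshold analysis: excesses over a high quantile are fitted with a generalized Pareto distribution. The sketch below, using synthetic data and SciPy, illustrates that general idea only; it is not the authors' modelling pipeline, and the threshold choice here is an arbitrary placeholder.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.gumbel(loc=0.0, scale=1.0, size=10_000)  # synthetic stand-in for an environmental series

# Choose a high threshold (here the 95th percentile) and keep excesses over it
threshold = np.quantile(data, 0.95)
excesses = data[data > threshold] - threshold

# Fit a generalized Pareto distribution to the excesses,
# fixing the location at zero as is standard in peaks-over-threshold analysis
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
```

In practice the threshold is chosen via diagnostics (e.g. mean residual life plots), and the fitted tail is then used for return-level estimation.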
This study aimed to create an artificial intelligence (AI) risk prediction model to identify patients at higher risk of postpartum hemorrhage (PPH), using perinatal characteristics that may be associated with later PPH in twin pregnancies delivered by cesarean section. The study was planned as a retrospective cohort study at University Hospital. All twin cesarean deliveries were categorized into two groups: those with and without PPH. Using the perinatal characteristics of the cases, four different machine learning classifiers were created: logistic regression (LR), support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP). The LR, RF, and SVM models were then retrained with class weights to manage the underlying imbalance in the data. A total of 615 twin pregnancies were included in the study: 150 with PPH and 465 without. Dichorionicity, PAS, and placenta previa were significantly more frequent in the PPH-positive group (p = .045, p = .004, and p = .001, respectively). LR with class weights was the best model, with the highest negative predictive value: an AUC of 75.12%, an accuracy of 70.73%, a PPV of 47.92%, and an NPV of 85.33% on our data. Although the application of machine learning to build predictive models from clinical risk factors is encouraging, our model's accuracy of roughly 70% is not sufficient; machine learning models need further study and validation before being incorporated into clinical use.
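Class weighting of the kind described above is a standard remedy for imbalanced outcomes. A minimal sketch with scikit-learn is shown below; the data are synthetic placeholders mimicking only the 150/465 imbalance, not the study's perinatal variables, and `class_weight="balanced"` is one common weighting scheme, not necessarily the one the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 615
X = rng.normal(size=(n, 4))                 # stand-in predictors
y = (rng.random(n) < 150 / 615).astype(int)  # roughly 150 positives in 615, as in the cohort

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# class_weight="balanced" reweights the loss inversely to class frequency,
# so the minority (PPH-positive) class is not swamped by the majority class
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

With purely random features the AUC will hover near 0.5; the point is the weighting mechanism, not the score.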
Information is provided to navigators through advanced onboard navigation equipment, such as the electronic chart display and information system (ECDIS), radar and the automatic identification system (AIS). However, maritime accidents still occur, especially in coastal and inland waters where many navigational dangers exist. Recent artificial intelligence (AI) technology is being actively applied in navigation fields such as collision avoidance and ship detection; however, its use with the aids to navigation (AtoN) system requires more engagement and further exploration. The AtoN system provides critical navigation information by marking navigational hazards, such as shallow-water areas and wrecks, and by visually marking narrow passageways. The prime function of AtoN can be enhanced by applying AI technology, particularly deep learning. With the help of this technology, an algorithm could be constructed to detect AtoN in coastal and inland waters and to use the detected AtoN in a safety function that supplements watchkeepers alongside recent navigation equipment.
This interview with Peter Singer AI serves a dual purpose. It is an exploration of certain—utilitarian and related—views on sentience and its ethical implications. It is also an exercise in the emerging interaction between natural and artificial intelligence, presented not just as ethics of AI but, perhaps more importantly, as ethics with AI. The one asking the questions—Matti Häyry—is a person, in the contemporary sense of the word, sentient and self-aware, whereas Peter Singer AI is an artificial intelligence persona, created by Sankalpa Ghose, a person, through dialogue with Peter Singer, a person, to programmatically model and incorporate the latter’s writings, presentations, recipes, and character qualities as a renowned philosopher. The interview indicates some subtle differences between natural perspectives and artificial representation, suggesting directions for further development. PSai, as the project is also known, is available to anyone to chat with, anywhere in the world, on almost any topic, in almost any language, at www.petersinger.ai.
Cardiovascular disease (CVD) is twice as prevalent among individuals with mental illness compared to the general population. Prevention strategies exist but require accurate risk prediction. This study aimed to develop and validate a machine learning model for predicting incident CVD among patients with mental illness using routine clinical data from electronic health records.
Methods
A cohort study was conducted using data from 74,880 patients with 1.6 million psychiatric service contacts in the Central Denmark Region from 2013 to 2021. Two machine learning models (XGBoost and regularised logistic regression) were trained on 85% of the data from six hospitals using 234 potential predictors. The best-performing model was externally validated on the remaining 15% of patients from another three hospitals. CVD was defined as myocardial infarction, stroke, or peripheral arterial disease.
Results
The best-performing model (hyperparameter-tuned XGBoost) demonstrated acceptable discrimination, with an area under the receiver operating characteristic curve of 0.84 on the training set and 0.74 on the validation set. It identified high-risk individuals 2.5 years before CVD events. For the psychiatric service contacts in the top 5% of predicted risk, the positive predictive value was 5%, and the negative predictive value was 99%. The model issued at least one positive prediction for 39% of patients who developed CVD.
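The top-5% evaluation reported above amounts to flagging the highest-risk contacts as positive and computing PPV and NPV at that cutoff. The sketch below illustrates the calculation on synthetic scores and labels; the numbers are placeholders, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = (rng.random(10_000) < 0.01).astype(int)  # rare outcome, synthetic
scores = rng.random(10_000) + 0.3 * y_true        # noisy risk scores, slightly higher for cases

# Flag the top 5% of predicted risk as positive, as in the Results
cutoff = np.quantile(scores, 0.95)
pred = scores >= cutoff

tp = np.sum(pred & (y_true == 1))
fp = np.sum(pred & (y_true == 0))
tn = np.sum(~pred & (y_true == 0))
fn = np.sum(~pred & (y_true == 1))

ppv = tp / (tp + fp)  # fraction of flagged contacts that are true cases
npv = tn / (tn + fn)  # fraction of unflagged contacts that are true non-cases
```

With a rare outcome, even a modest PPV at a high-risk cutoff can coexist with a very high NPV, which is the pattern the Results describe.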
Conclusions
A machine learning model can accurately predict CVD risk among patients with mental illness using routinely collected electronic health record data. A decision support system building on this approach may aid primary CVD prevention in this high-risk population.
A reflective analysis is presented on the potential added value that actuarial science can contribute to the field of health technology assessment. This topic is discussed based on the experience of several experts in health actuarial science and health economics. Different points are addressed, such as the role of actuarial science in health, actuarial judgment, data inputs and their quality, modeling methodologies and the use of decision-analytic models in the age of artificial intelligence, and the development of innovative pricing and payment models.
Large Language Models (LLMs) raise challenges that can be examined through a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. Regarding LLMs, the main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of risks and harms generated by means of LLMs. The epistemological approach examines how LLMs produce outputs, information, knowledge, and a representation of reality in ways that differ from those followed by human beings. To address the impact of LLMs, our paper contends that the epistemological approach should be examined as a priority: identifying risks and opportunities of LLMs also depends on considering how this form of artificial intelligence works from an epistemological point of view. To this end, our analysis compares the epistemology of LLMs with that of law, in order to highlight at least five issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence; and (v) relation to truth and accuracy. The epistemological analysis of these issues, preliminary to the normative one, lays the foundations to better frame challenges and opportunities arising from the use of LLMs.
The paper examines the legal regulation and governance of “generative artificial intelligence” (AI), “foundation AI,” “large language models” (LLMs), and the “general-purpose” AI models of the AI Act. Attention is drawn to two potential sorcerer’s apprentices, namely, in the spirit of J. W. Goethe’s poem, people who were unable to control a situation they created. The focus is on developers and producers of technologies, such as LLMs, that bring about risks of discrimination and information hazards, malicious uses and environmental harms; furthermore, the analysis dwells on the normative attempt of European Union legislators to govern misuses and overuses of LLMs with the AI Act. Scholars, private companies, and organisations have stressed the limits of this normative attempt. In addition to issues of competitiveness and legal certainty, bureaucratic burdens and standard development, the threat is the over-frequent revision of the law to keep pace with advancements in technology. The paper illustrates this threat, present since the inception of the AI Act, and recommends ways in which the law can address the challenges of technological innovation without being continuously amended.
Recent advancements in Earth system science have been marked by the exponential increase in the availability of diverse, multivariate datasets characterised by moderate to high spatio-temporal resolutions. Earth System Data Cubes (ESDCs) have emerged as one suitable solution for transforming this flood of data into a simple yet robust data structure. ESDCs achieve this by organising data into an analysis-ready format aligned with a spatio-temporal grid, facilitating user-friendly analysis and diminishing the need for extensive technical data processing knowledge. Despite these significant benefits, the completion of the entire ESDC life cycle remains a challenging task. Obstacles are not only of a technical nature but also relate to domain-specific problems in Earth system research. There exist barriers to realising the full potential of data collections in light of novel cloud-based technologies, particularly in curating data tailored for specific application domains. These include transforming data to conform to a spatio-temporal grid with minimum distortions and managing complexities such as spatio-temporal autocorrelation issues. Addressing these challenges is pivotal for the effective application of Artificial Intelligence (AI) approaches. Furthermore, adhering to open science principles for data dissemination, reproducibility, visualisation, and reuse is crucial for fostering sustainable research. Overcoming these challenges offers a substantial opportunity to advance data-driven Earth system research, unlocking the full potential of an integrated, multidimensional view of Earth system processes. This is particularly true when such research is coupled with innovative research paradigms and technological progress.
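The analysis-ready, grid-aligned structure described above can be illustrated with a small multivariate cube in xarray. This is a minimal sketch of the general data model, not the ESDC toolchain itself; the variable names, grid, and statistic are arbitrary placeholders.

```python
import numpy as np
import pandas as pd
import xarray as xr

# A shared spatio-temporal grid: monthly time steps on a coarse lat/lon mesh
time = pd.date_range("2020-01-01", periods=12, freq="MS")
lat = np.arange(-60.0, 61.0, 30.0)    # 5 latitudes
lon = np.arange(-180.0, 180.0, 60.0)  # 6 longitudes

rng = np.random.default_rng(7)
cube = xr.Dataset(
    {
        # Two co-registered variables on the same grid, as in a data cube
        "temperature": (("time", "lat", "lon"), rng.random((12, 5, 6))),
        "ndvi": (("time", "lat", "lon"), rng.random((12, 5, 6))),
    },
    coords={"time": time, "lat": lat, "lon": lon},
)

# Analysis-ready means operations like a zonal seasonal mean are one-liners
seasonal = cube["temperature"].groupby("time.season").mean(("time", "lon"))
```

Because every variable shares the grid, cross-variable analyses need no ad hoc regridding, which is the practical benefit the abstract attributes to ESDCs.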
Edge AI is the fusion of edge computing and artificial intelligence (AI). It promises responsiveness, privacy preservation, and fault tolerance by moving parts of the AI workflow from centralized cloud data centers to geographically dispersed edge servers, which are located at the source of the data. The scale of edge AI can vary from simple data preprocessing tasks to the whole machine learning stack. However, most edge AI implementations so far are limited to urban areas, where the infrastructure is highly dependable. This work instead focuses on a class of applications involved in environmental monitoring in remote, rural areas such as forests and rivers. Such applications have additional challenges, including failure proneness and access to the electricity grid and communication networks. We propose neuromorphic computing as a promising solution to the energy, communication, and computation constraints in such scenarios and identify directions for future research in neuromorphic edge AI for rural environmental monitoring. Proposed directions are distributed model synchronization, edge-only learning, aerial networks, spiking neural networks, and sensor integration.
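The spiking neural networks named among the research directions above are built from neurons such as the leaky integrate-and-fire (LIF) unit, whose event-driven spikes are what make neuromorphic hardware energy-frugal. The sketch below is a pedagogical LIF simulation with arbitrary parameters, not an edge-AI system.

```python
import numpy as np

def lif_run(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Returns the membrane-potential trace and the spike times.
    """
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current
        v += (dt / tau) * (v_rest - v) + i_t
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_rest         # reset the membrane after spiking
        trace.append(v)
    return np.array(trace), spikes

# A constant drive strong enough to cross threshold repeatedly
trace, spikes = lif_run(np.full(100, 0.08))
```

Information is carried by the timing of the discrete spikes rather than by continuous activations, which is why such models map well onto low-power neuromorphic chips for remote monitoring.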
With recent leaps in large language model technology, conversational AI offers increasingly sophisticated interactions. But is it fair to say that it can offer authentic relationships, perhaps even assuage the loneliness epidemic? In answering this question, this essay traces the history of AI authenticity, historically shaped by cultural imaginations of intelligent machines and human communication. The illusion of human-like interaction with AI has existed since at least the 1960s, when the term “Eliza effect” was coined after the first chatbot, Eliza. Termed a “crisis of authenticity” by sociologist Sherry Turkle, the Eliza effect has stood for fears that AI interactions can undermine real human connections and leave users vulnerable to manipulation. More recently, however, researchers have begun investigating less anthropomorphic definitions of authenticity. The expectation, and perhaps fantasy, of authenticity stems, in turn, from a much longer history of technologically mediated communication, dating back to the invention of the telegraph in the nineteenth century. Read through this history, the essay concludes that AI relationships need not mimic human interactions but must instead acknowledge the artifice of AI, offering a new form of companionship in our mediated, often lonely, times.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in a way that raises significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying as such, in this book, I argue that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, this book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework which is inadequate to counter this threat, and identifies new pathways forward.
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities to access remedies by first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas demanding further clarification in order to fill the remedial gaps.
This Element highlights the employment within archaeology of classification methods developed in the fields of chemometrics, artificial intelligence, and Bayesian statistics. These methods run in both high- and low-dimensional settings and often outperform traditional approaches. Rather than taking a theoretical approach, the Element provides examples of how to apply these methods to real data, using lithic and ceramic archaeological materials as case studies. A detailed explanation of how to process the data in R (The R Project for Statistical Computing), along with the respective code, is also provided.
Several African countries are developing artificial intelligence (AI) strategies and ethics frameworks with the goal of accelerating responsible AI development and adoption. However, many of these governance actions are emerging without consideration for their suitability to local contexts, including whether the proposed policies are feasible to implement and what their impact may be on regulatory outcomes. In response, we suggest that there is a need for more explicit policy learning, by looking at existing governance capabilities and experiences related to algorithms, automation, data, and digital technology in other countries and in adjacent sectors. From such learning, it will be possible to identify where existing capabilities may be adapted or strengthened to address current AI-related opportunities and risks. This paper explores the potential for learning by analysing existing policy and legislation in twelve African countries across three main areas: strategy and multi-stakeholder engagement, human dignity and autonomy, and sector-specific governance. The findings point to a variety of existing capabilities that could be relevant to responsible AI; from existing model management procedures used in banking and air quality assessment to efforts aimed at enhancing public sector skills and transparency around public–private partnerships, and the way in which existing electronic transactions legislation addresses accountability and human oversight. All of these point to the benefit of wider engagement on how existing governance mechanisms are working, and on where AI-specific adjustments or new instruments may be needed.