Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
Some commentators have said that artificial intelligence (AI) is advancing rapidly in substantial ways toward human-like intelligence. The case may be overstated. Advances in generative AI are remarkable, but large language models (LLMs) are talkers, not doers. Moves toward some kind of robust agency for AI are, however, coming. Humans and their law must prepare for it. This chapter addresses this preparation from the standpoint of contract law and contract practices. For an AI agent to participate as a contracting agent, in a philosophical or psychological sense, with humans in the formation of a contract, the following requirements will have to be met: (1) the AI in question will need to possess the cognitive functions to act with intention, and that intention must cause the AI to take a particular action; (2) humans must be in a position to recognize and respect that intention; (3) the AI must have the capacity to engage with humans (and other AI) in shared intentions, meaning the cognitive capacity to share a goal the parties can plan for and execute; (4) the AI must have the capacity to recognize and respect the practical authority of law and legal obligation; (5) the AI must have the capacity to recognize and respect practical authority in a claim-accountability sense, accepting that a contract forms a binding commitment to others. In other words, the AI will not only have to be able to engage in shared intentionality but also understand and accept it as a binding commitment recognized by the law; and (6) the AI will have to possess the ability to participate in these actions with humans, or in some hybrid human–AI form.
The development of AI promises to increase innovation and facilitate advancements in multiple fields. Yet, as companies rush products to market in a race for dominance in this highly competitive field, the potential for widespread social harm is foreseeable. In the absence of legislation, commercial law and tort law provide standards and remedies governing new products; however, companies may alter these default laws by contract. This chapter argues that, until there are industry-specific regulations governing AI products and services, adhesive contracts that alter the default rules of tort and commercial law should not be enforceable.
The recent crypto winter – and the malfeasance of crypto bad actors – has revealed a difficulty in the developing law of digital property. Although the standard recourse for improperly taking someone else’s rivalrous digital property should be conversion (pay for it) or replevin (give it back), courts have only begun the common law process of articulating standards for these causes of action. In short, the current law invites and incentivizes digital theft because it can be very hard to get digital property back. We argue here that the common law is strongest in technology cases when it proceeds by analogy well-rooted in traditional case law, and that digital conversion and replevin are directly applicable to situations where someone has converted or improperly taken the digital property of another.
More than most innovations, smartphones have transformed the human experience. Most people now live with powerful computational devices within arm’s reach, day and night. By enabling the platform economy and bringing computers closer to the human experience, smartphones also opened new doors for tracking and surveillance. The sum of these changes radically altered the consumer contracting environment, exerting new pressures on the foundations of contract law. This chapter examines key factors in this transformation: unprecedented scale, privacy risks, linguistic complexity, and fundamental asymmetries. In sum, the smartphone era has exacerbated old conundrums in consumer contracting – while also introducing new ones. The net result: a further decoupling of consumer reality and contract law.
This chapter first discusses how Bitcoin works in functional terms (as opposed to technical aspects), focusing on the structure of a decentralized, pseudonymous payment system. The chapter next discusses possible applications of the underlying blockchain technology, such as stock trading, property records, peer-to-peer sharing services, and smart contracts. Turning to the law, the chapter discusses several matters that the Uniform Commercial Code Amendments in 2022 addressed: a legal definition applicable to blockchain technologies, the negotiability of digital assets, the use of digital assets as collateral, and whether cryptocurrencies are money. The chapter then discusses some remaining issues, such as whether Bitcoin transactions can be traced, whether smart contracts are subject to contract law, and whether parties could opt out of contract law. Finally, the chapter looks specifically at the application of secured lending law to analogous transactions using smart contracts.
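To make the smart-contract idea behind that discussion concrete: a smart contract is program code that holds assets and executes contractual terms automatically, without a court or repossession agent. The sketch below is a toy, hypothetical model of a secured loan written in plain Python (real smart contracts run on a blockchain, typically in a language such as Solidity); all names and amounts are illustrative assumptions, not drawn from the chapter.

```python
from dataclasses import dataclass

@dataclass
class SecuredLoan:
    """Toy model of a smart contract securing a loan with digital collateral.

    At formation the collateral is locked in escrow by the contract itself.
    The code, not a court, decides where the collateral ends up.
    """
    principal: float
    collateral: float
    repaid: float = 0.0
    settled: bool = False
    collateral_holder: str = "escrow"  # collateral locked at formation

    def repay(self, amount: float) -> None:
        if self.settled:
            raise RuntimeError("loan already settled")
        self.repaid += amount
        if self.repaid >= self.principal:
            # Full repayment: contract releases collateral to the borrower.
            self.collateral_holder = "borrower"
            self.settled = True

    def declare_default(self) -> None:
        if self.settled:
            raise RuntimeError("loan already settled")
        # Default: contract transfers collateral to the lender automatically,
        # the self-executing analogue of repossession under secured lending law.
        self.collateral_holder = "lender"
        self.settled = True


loan = SecuredLoan(principal=100.0, collateral=150.0)
loan.repay(100.0)
print(loan.collateral_holder)  # -> borrower
```

The self-executing transfer in `declare_default` is what raises the chapter's question: because the "remedy" runs as code rather than through legal process, it is unclear whether, and how, contract law and secured lending law apply.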
This chapter explains why education is a special application domain of AI that focuses on optimizing human learning and teaching. We outline multiple perspectives on the role of AI in education, highlighting the importance of the augmentation perspective, in which human learners and teachers closely collaborate with AI that supports human strengths. To illustrate the variety of AI applications used in the educational sector, we provide an overview of student-facing, teacher-facing, and administrative AI solutions. Next, we discuss the ethical and social impacts of AI in education and outline how ethics in AI and education have developed from the Beijing Consensus following UNESCO's 2019 conference on AI in education to the recent European ethical guidelines on the use of AI and data in teaching and learning for educators. Finally, we introduce an example of the Dutch value compass for the digital transformation of education and the embedded ethics approach of the National Education Lab AI around developing and cocreating new intelligent innovations in collaboration with educational professionals, scientists, and companies.
AI brings risks but also opportunities for consumers. When it comes to consumer law, which traditionally focuses on protecting consumers' autonomy and self-determination, the increased use of AI also poses major challenges. This chapter discusses both the challenges and opportunities of AI in the consumer context (Sections 10.2 and 10.3) and provides a brief overview of some of the relevant consumer protection instruments in the EU legal order (Section 10.4). A case study on dark patterns illustrates the shortcomings of the current consumer protection framework more concretely (Section 10.5).
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
The availability of data is a condition for the development of AI. This is no different in the context of healthcare-related AI applications. Healthcare data are required in the research, development, and follow-up phases of AI. In fact, data collection is also necessary to establish evidence of compliance with legislation. Several legislative instruments, such as the Medical Devices Regulation and the AI Act, enacted data collection obligations to establish (evidence of) the safety of medical therapies, devices, and procedures. Increasingly, such health-related data are collected in the real world from individual data subjects. The relevant legal instruments therefore explicitly mention that they shall be without prejudice to other legal acts, including the GDPR. Following an introduction to real-world data, evidence, and electronic health records, this chapter considers the use of AI for healthcare from the perspective of healthcare data. It discusses the role of data custodians, especially when confronted with a request to share healthcare data, as well as the impact of concepts such as data ownership, patient autonomy, informed consent, and privacy- and data protection-enhancing techniques.
Artificial intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility, and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but often fail to suffice due to the context-sensitivity of ethical challenges. Second, this chapter discusses methods to tackle these challenges. Main ethical theories (such as virtue ethics, consequentialism, and deontology) are shown to provide a starting point, but often lack the details needed for actionable AI ethics. Instead, we argue that mid-level philosophical theories coupled to design-approaches such as “design for values”, together with interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated in the design of AI systems.
In spring 2024, the European Union formally adopted the AI Act, aimed at creating a comprehensive legal regime to regulate AI systems. In so doing, the Union sought to maintain a harmonized and competitive single market for AI in Europe while demonstrating its commitment to protect core EU values against AI’s adverse effects. In this chapter, we question whether this new regulation will succeed in translating its noble aspirations into meaningful and effective protection for people whose lives are affected by AI systems. By critically examining the proposed conceptual vehicles and regulatory architecture upon which the AI Act relies, we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.