Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
Failures of environmental law to preserve, protect and improve the environment are caused by law’s contingency and by constitutional presumptions of supremacy over the self-regulatory agency of nature. Contingency problems are intrinsic to law and therefore invite the deployment of technologies. Constitutional presumptions can be corrected through geo-constitutional reform. The latter requires the elaboration of geo-constitutional principles bestowing authority on nature’s self-regulatory agency. It is suggested that principles of autonomy, loyalty, pre-emption, supremacy and rights have the potential to serve that aim and imply proactive roles for technologies in environmental governance. Geo-constitutional reform is necessary to prevent the fatal collapse of the natural regulatory infrastructure enabling life and to secure a future of environmental governance by design. Once environmental catastrophe has materialized, however, geo-constitutionalism loses its raison d’être.
This chapter argues that, as evidenced by EU digital law and EU border management, the EU legislature is complicit in the creation of complex socio-technical systems that undermine core features of the EU’s legal culture. In the case of digital law, while the EU continues to govern by publicly declared and debated legal rules, the legal frameworks – exemplified by the AI Act – are excessively complex and opaque. In the case of border management, the EU increasingly relies not on governance by law but on governance by various kinds of technological instruments. Such striking departures from the EU’s constitutive commitments to the rule of law, democracy and respect for human rights are more than a cause for concern; they raise profound questions about what it now means to be a European.
This chapter challenges the conventional wisdom of how users of social media platforms such as Instagram, X, or TikTok pay for service access. It argues that rather than merely exchanging data for services, users unknowingly barter their attention, emotions, and cognitive resources – mental goods that corporations exploit through technologically managed systems like targeted advertising and habit-forming design. The chapter explores how these transactions are facilitated not by legal contracts but by code, which allows social media companies to extract value in ways that traditional legal conceptual frameworks cannot capture. It further highlights the negative externalities of these exchanges, such as cognitive impairments and mental health issues, framing them as pollution byproducts of the attention economy. By examining both the visible and hidden dimensions of this technologically mediated exchange, the chapter calls for a deeper understanding of the mechanisms that govern our interactions with digital platforms rather than rushing to propose new legal solutions.
Advanced AI (generative AI) poses challenges to the practice of law and to society as a whole. The proper governance of AI remains unresolved but will likely be multifaceted, combining soft law (such as standardisation, best practices and ethical guidelines) with hard law consisting of a blend of existing law and new regulations. This chapter argues that lawyers’ professional codes of conduct (ethical guidelines) provide a governance system that can be applied to the AI industry. The increase in professionalisation warrants treating AI creators, developers and operators as professionals subject to the obligations foisted on the legal profession and other learned professions. Legal ethics provides an overall conceptual structure that can guide AI development, serving the purposes of disclosing potential liabilities to AI developers and building trust for the users of AI. Additionally, AI creators, developers and operators should be subject to fiduciary duty law. Fiduciary duty law as applied to these professionals would require a duty of care in designing safe AI systems, a duty of loyalty to customers, users and society not to create systems that manipulate consumers and democratic governance, and a duty of good faith to create beneficial systems. This chapter advocates the use of ethical guidelines and fiduciary law not as soft law but as the basis for structuring private law in the governance of AI.
Law’s governance seemingly faces an uncertain future. In one direction, the alternative to law’s governance is a dangerous state of disorder and, potentially, existential threats to humanity. That is not the direction in which we should be going, and we do not want our escalating discontent with law’s governance to give that outcome any assistance. Law’s governance is already held in contempt by many. In the other direction, if we pursue technological solutions to the imperfections in law’s governance, there is a risk that we diminish the importance of humans and their agency. If any community is contemplating a transition to governance by technology, it needs to start its impact assessment with the question of whether the new tools are compatible with sustaining the foundational conditions themselves.
This chapter analyses the public and private governance structure of the EU AI Act (AIA) and its associated ecosystem of compliance and conformity. Firstly, it analyses the interaction of public and private governance in the making of the AI law meant to concretise the rules in the AIA. Secondly, the focus shifts to the interaction of public and private governance in the Act’s enforcement through compliance, conformity and public authorities. Thirdly, it is argued that the EU legislature has fully developed neither public and private governance nor the interaction between the two. As a result, there are many gaps in the involvement of civil society in the compliance, conformity and enforcement of private regulations, in particular harmonised technical standards, Codes of Practice and Codes of Conduct. Moreover, the extreme complexity of the AIA’s governance structure is likely to trigger litigation between AI providers and deployers and the competent surveillance authorities, or more generally in B2B and B2C relations.
This chapter examines three reasons for discontent with law’s governance of technology. Reservations concern the exercise of legal powers, the convenience of legal regulations, and prestige. The analysis is supplemented by considering the impact that the pace of technological innovation has on legal systems and the distinction between internal and external problems of legal governance. The internal problems concern the efficacy, efficiency, and overall soundness of normative acts; the external problems relate to the claims of further regulatory systems in society, such as the forces of the market or of social customs. Following the recommendation of Leibniz in the sixth paragraph of his Discourse on Metaphysics, the overall idea is to discuss the simplest possible hypothesis that attains the richest world of phenomena. Discontent with law’s governance of technology is indeed a complex topic with manifold polymorphous ramifications.
This chapter presents an extended critique of the Quoine case in Singapore, where the seven trades at issue were fully automated. The central point of the case is that the fact that one or both contracting parties decided to deploy or rely on technological assistance does not in itself justify a departure from long-standing legal principles of contract law. While there is no denying that the contracting process can be optimized by means of a broad range of technologies of varying complexity, and that such technologies often create unique risks, it does not follow that such technologies have a disruptive effect on contract law itself. Innovation in commercial dealings need not lead to innovation in contract law. On the contrary, the latter has shown a surprising resilience to technological disruption, mainly due to the broad, flexible and technology-neutral formulation of its core principles.
The online environment has proven over the last thirty years to be a crucible for the study of legal authority, legitimacy and reception. The overlapping claims of local and global lawmakers are now magnified beyond the scope of what was possible before this global, virtual telecommunications space was opened to individuals and communities. Law is a mix of the local and the regional. We have come to recognise the transnational nature of law, with decentred sources of authority claims such as the European Union. What the online digital environment has opened is a digital ‘right to roam’.
This chapter examines some ways in which human agency might be affected by a transition from legal regulation to regulation by AI. To do so, it elucidates an account of agency, distinguishing it from related notions such as autonomy, and argues that this account of agency is both philosophically respectable and consistent with common sense. With that account of agency in hand, the chapter then examines two different ways – one beneficial, one baleful – in which agency might be impacted by regulation by AI, focussing on some agency-related costs and benefits of transforming private law from its current rule-based regulatory form into an AI-enabled form of technological management. It concludes that there are few grounds to be optimistic about the effects of such a transition and good reason to be cautious.
This chapter argues that care must be taken when considering whether law reform is essential in light of new technologies and their applications. The application of each new technology raises its own issues, and not all of these will invariably require legal change – but some undoubtedly will, because the issues raised are beyond the reach of existing laws. In line with this argument, a sketch is presented of a methodical approach for determining whether and how consumer protection law should be reformed in the face of technological developments. Focusing on the precise challenges the new technology poses invites an open mind about the legal reform response, and it is important to test each option (tweaking existing rules, creating analogous rules attuned to digital and technological advances, or adopting new models of regulation, including solutions focused on technological applications rather than consumer rights) to find the best mix of responses, subject to the overriding requirement that consumer protection is not diluted. This approach is then tested in respect of two areas: the reform of the EU’s Product Liability regime and the arrival of digital assistants that will enable algorithmically automated contracting.
Legal ‘regulatory escape’, ‘regulatory disconnection’ or ‘regulatory disruption’ on the part of particular regulatees or commercial practices has been observed across diverse regulatory environments, ranging from environmental protection to the provision of gambling services. Instances of legal regulatory escape appear particularly prevalent with the introduction of novel technology products and services. Evaluation of technology-related legal regulatory escapes provides examples of deliberate, even overt, evasion of legal constraints, as well as avoidance via practices such as regulatory mimicry or differentiation. This chapter identifies examples of recent technology-related legal ‘regulatory escape’, discusses key reasons why legal regulation may fail to cater effectively for complications arising from specific technology practices, products or classes of regulatee, and considers possible regulatory responses to address the risks, or capture the benefits, of technological advances.