The EU is attempting to indirectly regulate the Internet of Things by improving access to data through a cross-sectoral data governance framework. On the face of it, recent EU data governance laws – the Data Governance Act, the Digital Markets Act, the Digital Services Act, and the AI Act – point toward more open, accessible, and reusable data. However, they tend to balance that ethos with provisions that IoT big tech can use to retain and strengthen data enclosures. This chapter critically assesses whether this attempt to balance openness and IP results in the prevalence of closed IoT systems, ultimately preventing the reuse of smart data in ways that would benefit society at large.
The auction of Bored Ape #8817 for $3.4 million in October 2021 marked a watershed moment in the escalating trend of non-fungible tokens (NFTs). This chapter ventures into the core of the tokenization phenomenon, scrutinizing the legal implications of creating digital representations (tokens) of diverse assets. Amid the burgeoning NFT market, a pivotal question emerges: What precisely are the property rights conferred upon those acquiring these tokens? Beyond the staggering sales figures, the chapter dissects the tokenization process, emphasizing NFT minting and the underlying blockchain technology. It explores claims that NFTs herald the future of digital property, challenging traditional governmental powers. Anticipating legal challenges, the chapter navigates critical inquiries about token holders’ rights, the tethering (or not) of tokens to underlying assets, and the impact of the 2022 Uniform Commercial Code revisions. This chapter seeks to provide a nuanced perspective, disentangling legal realities from the fervor surrounding tokenization’s transformative potential in the digital era.
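To make the minting mechanics the chapter refers to more concrete, the following toy Python sketch (not drawn from the chapter; the contract name, addresses, and metadata URI are invented placeholders) mimics the core of an ERC-721-style ledger. Minting simply records a token ID, an owner address, and a pointer to metadata that usually lives off-chain, which is precisely why the question of what rights attach to the underlying asset remains open.

```python
# Conceptual sketch only: a toy, in-memory model of ERC-721-style minting.
# The "asset" itself typically lives off-chain; the token is a ledger entry
# mapping a token ID to an owner address and a metadata URI.
from dataclasses import dataclass, field

@dataclass
class ToyNFTContract:
    name: str
    owners: dict[int, str] = field(default_factory=dict)      # token_id -> owner address
    token_uris: dict[int, str] = field(default_factory=dict)  # token_id -> metadata URI
    next_id: int = 1

    def mint(self, to_address: str, metadata_uri: str) -> int:
        """Record a new token: an ownership entry plus a pointer to metadata."""
        token_id = self.next_id
        self.owners[token_id] = to_address
        self.token_uris[token_id] = metadata_uri  # often an off-chain JSON file
        self.next_id += 1
        return token_id

    def transfer(self, token_id: int, from_address: str, to_address: str) -> None:
        """Change the recorded owner; the referenced asset itself never moves."""
        if self.owners.get(token_id) != from_address:
            raise PermissionError("only the recorded owner can transfer the token")
        self.owners[token_id] = to_address

# Example: minting records who holds the token, not rights in the referenced work.
contract = ToyNFTContract(name="ExampleApes")
token = contract.mint("0xBuyer", "ipfs://example-metadata.json")
print(contract.owners[token], contract.token_uris[token])
```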
Large Language Models can be used to summarize and simplify complex texts. In this study, we investigate the extent to which state-of-the-art models can reliably operate as “smart readers”: applications that empower consumers to tackle lengthy, difficult-to-read, and inaccessible standard form contracts and privacy policies.
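As a rough illustration of the kind of application the study examines, the sketch below shows how a minimal “smart reader” might be prototyped against a general-purpose LLM API. It assumes the `openai` Python client and an API key in the environment; the model name, prompt, and sample clause are illustrative assumptions, not the study’s setup or findings.

```python
# Minimal "smart reader" sketch (illustrative only; not the study's methodology).
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_contract(contract_text: str) -> str:
    """Ask an LLM for a plain-language summary of a standard form contract."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You rewrite consumer contracts and privacy policies in "
                         "plain English and flag clauses that may disadvantage "
                         "the consumer.")},
            {"role": "user", "content": contract_text},
        ],
        temperature=0,  # prefer consistent, conservative readings
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_clause = "The Licensee irrevocably waives any right to trial by jury."
    print(simplify_contract(sample_clause))
```

Whether such a pipeline is reliable enough for consumer use is, of course, the empirical question the chapter investigates.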
Recent years have seen new technologies disrupt many established industries and institutions, continually testing our imaginations and expectations. Accordingly, it is no surprise that technology is disrupting the law. Moreover, the COVID-19 pandemic generated new disputes and a need for expanded access to online means of resolving those disputes, especially for consumers. As a result, lawyers, judges, software developers, and policymakers have been exploring ways to use technology to expand access to the courts and to dispute resolution. With this in mind, scholars and policymakers have argued for “online dispute resolution” (ODR) as a way to expand access to justice (A2J). This chapter discusses the evolution of ODR in recent years, as well as emerging issues that deserve attention if ODR is to live up to its promise of advancing A2J.
The past decade has seen a plethora of new product lines in “smart” consumer goods and systems, thanks to technological developments that have allowed for the computerization and internet connectivity of many previously “dumb” objects, buildings, and environments. These new products will inevitably develop defects that require repair, and they add to the ever-growing problem of e-waste. This chapter examines the recommendations of the recent Productivity Commission (PC) Inquiry regarding the “right to repair” (R2R) through the lens of cyber-physical devices and systems, such as the Internet of Things. A stronger R2R for independent repairers (individual, community, or commercial), particularly in the context of these new products, would assist in achieving several of the United Nations’ sustainable development goals. The PC Inquiry has produced some recommendations that will strengthen the R2R in Australia. However, while welcome, these recommendations also leave significant gaps in relation to promoting sustainable consumption.
The transformative impact of artificial intelligence (AI) across various sectors, together with recent advancements such as the release of the generative AI model GPT-4, raises critical legal and policy concerns. These concerns include important societal and potentially existential impacts: Threats to democracy, workforce displacement, copyright challenges, environmental effects, new and more lethal cybersecurity threat vectors, and the potential for advanced AI to become uncontrollable or be used for malicious purposes if it falls into the wrong hands. Human rights concerns are also implicated, including the potential for biased and discriminatory decision-making, unreasonable privacy impacts, inaccurate and unfair outcomes, and lack of transparency and due process. The unveiling of GPT-4 emphasizes the need for legislation to address these issues. The European Union (EU) has taken a global lead by enacting the Artificial Intelligence Act (AIA) to regulate AI development, placement, and use, and by proposing the AI Liability Directive (AILD), which aims to facilitate civil claims for damages arising from AI products and services. The AIA takes a comprehensive, risk-based approach to regulating AI across sectors. Significant differences had to be negotiated among the EU co-legislators to reach a consensus on the final text of the AIA, such as defining AI systems, regulating foundation models, determining bans on specific AI systems, and establishing redress rights for consumers and for fundamental rights violations. The chapter explores the global context, the EU legislative approach, the key issues that had to be resolved, and the interaction of the AIA with other EU laws, particularly with the General Data Protection Regulation (GDPR).
From a distance, smart contracts seem exciting: Unlike humans, who might opportunistically decide to deviate from the agreed terms, their code will execute “no-matter-what,” ensuring the terms are adhered to and the contract is performed. Smart contracts would thus seem like a valuable addition to conventional contracts. A perfect transaction technology, indeed! A closer analysis of the smart contract narrative and the relevant technical scholarship reveals a peculiar dissonance between how smart contracts are described and what smart contracts really are. Taking the unfortunate terminology at face value and analyzing smart contracts as if they were contracts in the legal sense might constitute a waste of academic time. Even if they constituted an improvement over existing transacting practices, would – or could – smart contracts still be contracts? Would they even belong to the same category of legal phenomena? Maybe the fundamental question is: what are smart contracts? To many, these questions may seem like unnecessary hairsplitting, typical of haughty academics. In practice, however, how something is defined and categorized has immediate practical implications. Sidestepping the overly optimistic narrative of “unstoppable legal innovation,” this chapter deconstructs the concept of smart contracts and aims to provide a more commonsensical and factual grounding for future legal analyses of this phenomenon.
Credit card processing relies heavily on technology, so it is no surprise that technological forces are responsible for some of the problems with opaque pricing in this market. Technology made modern credit card processing possible by speeding up transactions and making them less expensive. But this same technology made pricing harder for merchants to understand and compare among different credit card processors. Academic scholarship has failed to address nontransparent pricing for merchant card processing, and laws in various countries focus on interchange fees, not merchant fees. This chapter argues that legal academics should study credit card processing fees and that regulators should use Canadian laws as an example of how to foster transparency.
Advances in technology increasingly inform how consumers make sense of the world and how organizations do business, resulting in complex dynamics between designers who define and craft products, companies who sell them, and consumers who use them. They also contribute to new legal and regulatory issues and potential market interventions related to risk mitigation and consumers’ susceptibility to errors of judgment due to cognitive biases. This chapter takes a behavioral, human-centered perspective to explore these emergent legal issues through three key lenses: (1) how contemporary digital products deliver consumer value, contribute to new forms of economic and noneconomic currency, and harness infrastructure that balances paternalistic oversight with consumer agency; (2) how digital products’ features shape consumer engagement with other individuals, with the products themselves, and with the companies that produce them; and finally, (3) how increasingly sophisticated data-driven technologies, such as generative AI and machine learning, create asymmetrical relationships between producers and consumers, who often lack the conceptual models necessary to fully comprehend tradeoffs and terms of exchange.
Some commentators have said that artificial intelligence (AI) is advancing rapidly in substantial ways toward human-like intelligence. The case may be overstated. Advances in generative AI are remarkable, but large language models (LLMs) are talkers, not doers. Moves toward some kind of robust agency for AI are, however, coming. Humans and their law must prepare for it. This chapter addresses this preparation from the standpoint of contract law and contract practices. For an AI agent to participate as a contracting agent, in a philosophical or psychological sense, with humans in the formation of a contract, the following requirements will have to be met: (1) the AI in question will need to possess the cognitive functions to act with intention and that intention must cause the AI to take a particular action; (2) humans must be in a position to recognize and respect that intention; (3) the AI must have the capacity to engage with humans (and other AI) in shared intentions, meaning the cognitive capacity to share a goal the parties can plan for and execute; (4) the AI will have to have the capacity to recognize and respect the practical authority of law and legal obligation; (5) the AI will have to have the capacity to recognize and respect practical authority in a claim accountability sense, in accepting that a contract forms a binding commitment to others. In other words, the AI will not only have to be able to engage in shared intentionality but also understand and accept it as a binding commitment recognized by the law; and (6) the AI will have to possess the ability to participate in these actions with humans or in some hybrid form with humans.
The development of AI promises to increase innovation and facilitate advancements in multiple fields. Yet, as companies rush products to market in a race for dominance in this highly competitive field, the potential for widespread social harm is foreseeable. In the absence of legislation, commercial law and tort law provide standards and remedies governing new products; however, companies may alter these default laws by contract. This chapter argues that, until there are industry-specific regulations governing AI products and services, adhesive contracts that alter the default rules of tort and commercial law should not be enforceable.
The recent crypto winter – and the malfeasance of crypto bad actors – has revealed a difficulty in the developing law of digital property. Although the standard recourse for improperly taking someone else’s rivalrous digital property should be conversion (pay for it) or replevin (give it back), courts have only begun the common law process of articulating standards for these causes of action. In short, the current law invites and incentivizes digital theft because it can be very hard to get digital property back. We argue here that the common law is strongest in technology cases when it proceeds by analogy well-rooted in traditional case law, and that digital conversion and replevin are directly applicable to situations where someone has converted or improperly taken the digital property of another.
More than most innovations, smartphones have transformed the human experience. Most people now live with powerful computational devices within arm’s reach, day and night. By enabling the platform economy and bringing computers closer to the human experience, smartphones also opened new doors for tracking and surveillance. The sum of these changes radically altered the consumer contracting environment, exerting new pressures on the foundations of contract law. This chapter examines key factors in this transformation: unprecedented scale, privacy risks, linguistic complexity, and fundamental asymmetries. In sum, the smartphone era has exacerbated old conundrums in consumer contracting – while also introducing new ones. The net result: a further decoupling of consumer reality and contract law.