The legal treatment of autonomous algorithmic collusion, in light of its technical feasibility and various theoretical considerations, is an important issue: autonomous algorithmic collusion raises difficult questions concerning the attribution of algorithmic conduct to firms and reopens the longstanding debate about the legality of tacit collusion. Algorithmic collusion involving direct communication between algorithms amounts to express collusion and is illegal. Intelligent and independent adaptation by algorithms to competitors’ conduct, with no direct communication between them, amounts to tacit collusion and is generally legal. There should be ex ante regulation to reduce algorithmic collusion.
Consumers are at the forefront of market uses of AI, and there are myriad consumer uses of AI products. Consumer protection law justifies greater responses where interactions involve significant risks and relevant consumer vulnerability; both elements are present in the current and predicted uses of AI concerning consumers. While consumer protection law is likely to be flexible enough to adapt to AI, it nonetheless needs to be recalibrated to AI.
The EU definitions of AI have moved from a narrow one to a broad one, reflecting the EU’s policy of governing the phenomenon of AI as broadly as possible so as to cover a wide range of situations. The key contents of the main EU AI documents are examined, including the European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, the Ethics Guidelines for Trustworthy AI, the proposed AI Act, and the recent Proposal for an AI Liability Directive.
The protection of AI-assisted and AI-generated works poses problems for existing intellectual property law. It is doubtful whether the purposes of patent law would be served by granting patents for AI-generated inventions, and AI systems are unable to make the creative choices needed to bring their outputs into the realm of copyright protection. With AI-assisted outputs, however, there may still be sufficient creative choices by the programmer or user to bring the output within the domain of IP protection. AI fundamentally challenges the anthropocentric copyright regime, and AI technologies will require us to rethink fundamental concepts within IP law, including, for instance, the standard of obviousness applied within patent law.
The UK Parliament has already pre-emptively legislated for a compensation solution for autonomous vehicle (AV) accidents through the Automated and Electric Vehicles Act 2018. The Act is a response to the fact that the ordinary approach to motor vehicle accidents cannot apply in an AV context, since there is no human driver. Tort law has previously undergone major shifts in response to motor vehicles, and we are again on the cusp of another motor-vehicle-inspired revolution in tort law. However, in legislating for AV accidents, the UK gave inadequate consideration to alternative approaches.
Concerns about AI-assisted adjudication, including biases (such as automation bias, anchoring bias, contrarian bias, and herd bias) and ethical worries (such as human adjudicators ceasing to be decision-makers, excessive standardisation of decisions, and pressure on judges to conform to the AI’s predictions), can be addressed. Adjudicators may use AI to assist their decisions in three respects: training and implementation; actual use; and monitoring. Because AI will not be able to provide the legal justifications underlying its predictions, the human adjudicator will have to explain why an AI-generated prediction is legally justified. AI will not replace adjudicators.
The robo-advice industry is one of the fastest-growing ‘AI’-powered automated services and may be transforming access to investment advice. However, the industry has settled on offering customers low-cost, standardised advice linked to relatively non-complex financial products, rather than aiming for intelligent and personalised advisory tailoring. The regulatory regime for investment advice plays a significant part in delineating the boundaries of legal risk for the industry, thereby shaping the design of robo-advice as an innovation.
The AI agency problem is overstated, and many of the issues concerning AI contracting and liability can be solved by treating artificial agents as instrumentalities of persons or legal entities. AI systems should not be characterised as agents for liability purposes. This approach best accords with their functionality and places the correct duties and responsibilities on their human developers and operators.
There is convergence between the protection of the traditional right to privacy and today’s right to data protection, as evidenced by judicial rulings. However, distinct differences remain among jurisdictions based on how personal data is conceived (as a personality or proprietary right) and on the aims of regulation. These differences have implications for how the use of AI will affect US and EU law. Nevertheless, there are some regulatory convergences between US and EU law: the realignment of traditional rights through data-driven technologies, the convergence between data protection safeguards and consumer law, and the dynamics of legal transplantation and reception in data protection and consumer law.
Because the use of robo-advisers creates significant risks in addition to the rewards it offers, the question of how they ought to be regulated has been, and continues to be, a subject of substantial debate. The various models for regulating the robo-adviser industry are examined. The best model is a mix of mandatory disclosure; fiduciary duties for those developing, marketing, and operating robo-advisers; investor education; and regulation by litigation. Further, robo-advisers ought to be regulated through the regular, standardised surveying of the investors who use them and the release of that data to the general public.
How AI has impacted and could impact the content, application, and processes of corporate law and corporate governance, and the interactions of corporate law actors, including boards, shareholders, and regulators, is critically examined. The current and future impact of AI and related technologies on corporate law and corporate governance norms and practices is also analysed.
Some of the most important challenges that the rise of algorithmic management poses for employment law are examined. Employment law is here broadly conceived as encompassing both the individual and collective dimensions of the employment relation, as well as associated regulatory domains, including data protection and anti-discrimination law, insofar as they are relevant to the employment context.
The differences between AI software and normal software are important, as they have implications for how a transaction in AI software will be treated under sales law. Next, what it means to own an AI system is explored: whether it is a chattel, merely software, or something more than software. If AI is merely software, it will be protected by copyright, but there will be problems with licensing. If, however, AI is encapsulated in a physical medium, the transaction may be treated as a sale of goods, or a sui generis position may be taken. A detailed analysis of the Court of Justice of the European Union’s decision in Computer Associates v The Software Incubator is provided. An AI transaction can be regarded as a sale of goods. Because the sale of goods regime is insufficient, a transaction regime for AI systems has to be developed, which includes ownership and fair use (assuming AI is regarded as merely software) and the right to repair (whether AI is treated as goods or software).
There are two core problems with private law’s causal rules in an AI context: (1) a problem of proof, due to AI’s opacity; and (2) a problem of autonomy. Further, if AI is capable of being considered an intervening agent, its use would have liability-avoiding effects. There may be particular problems with informational and decisional AI. Consideration is given to whether, in certain contexts, AI justifies a departure from the ordinary principles of causation.