Use Case 4 in Chapter 7 explores the regulation of MDTs in the context of employment monitoring under the General Data Protection Regulation (GDPR), the Equality Acquis, the Platform Work Directive (PWD), and the Artificial Intelligence Act (AIA). Article 88 GDPR serves as a useful foundation, supported by valuable guidance aimed at protecting employees from unlawful monitoring practices. In theory, most MDT-based practices discussed in this book are already prohibited under the GDPR. Additionally, the EU’s robust equality acquis can effectively address many forms of discrimination in this sector. The AIA reiterates some existing prohibitions related to MDT-based monitoring practices in the workplace. However, a core challenge in employment monitoring lies in ensuring transparency and enforcement. There has long been a call for a lex specialis for data protection in the employment context, which should include a blacklist of prohibited practices or processing operations, akin to the one found in the PWD. Notably, processing and inferring mind data should be included among the practices on this blacklist.
The analysis of MDT regulation across specific use cases – particularly in mental health and well-being, commercial advertising, political advertising, and employment monitoring – suggests that MDTs, especially neurotechnologies, do not necessarily present entirely new legal questions or phenomena. Rather, they occasionally highlight existing deficiencies and weaknesses that have long been recognised, albeit sometimes exacerbating their effects. By strategically adapting and utilising existing laws and legal instruments, substantial improvements can be made to the legal framework governing MDTs. In some cases, stricter regulations are urgently needed, while in others, compliance and enforcement present significant challenges. Although recent legislation has created important opportunities to address these shortcomings, a political consensus has yet to be reached on all necessary aspects. Throughout the book, alternative approaches and adaptations de lege ferenda within both established and newly adopted laws have been proposed as sources of inspiration. The concluding remarks reiterate key legislative adaptations.
Use Case 2 in Chapter 5 examines the regulation of MDTs in the context of commercial advertising under the General Data Protection Regulation (GDPR), the Unfair Commercial Practices Directive (UCPD), and the Audiovisual Media Services Directive (AVMSD). An analysis under the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA) will follow in Chapter 6, alongside a use case focused on political advertising. In the realm of commercial advertising, MDTs intensify long-standing concerns from the consumer perspective. The UCPD serves as a crucial reference point for related laws. Including the processing and inference of mind data in the blacklist outlined in Annex I of the UCPD would have significant implications, akin to the proposed introduction of a sui generis special category of mind data within the GDPR. Importantly, a blanket ban on the processing and inference of mind data for commercial practices under Annex I UCPD would automatically prohibit these practices under the DSA.
Use Case 3 in Chapter 6 examines the regulation of MDTs in the context of political advertising under the General Data Protection Regulation (GDPR), the Regulation on Transparency and Targeting of Political Advertising (TTPA), the Digital Services Act (DSA), and the Artificial Intelligence Act (AIA). The prohibition on advertising based on profiling with special category data in both the DSA and the TTPA does not adequately reflect the capabilities of modern data analytics. Both the DSA and the TTPA fall short in addressing MDTs as stand-alone techniques or as complements to online behavioural advertising and political microtargeting. The AIA’s prohibition of subliminal, manipulative, and deceptive techniques requires a complex set of criteria to be met, with the outcome still uncertain.
In Chapter 2, the classification of data processed by MDTs under the General Data Protection Regulation (GDPR) is examined. While the data processed by MDTs is typically linked to the category of biometric data, accurately classifying the data as special category biometric data is complex. As a result, substantial amounts of data lack the special protections afforded by the GDPR. Notably, data processed by text-based MDTs falls entirely outside the realm of special protection unless associated with another protected category. The book advocates for a shift away from focusing on the technological or biophysical parameters that render mental processes datafiable. Instead, it emphasises the need to prioritise the protection of the information itself. To address this, Chapter 2 proposes the inclusion of a new special category of ‘mind data’ within the GDPR. The analysis shows that classifying mind data as a sui generis special category aligns with the rationale and tradition of special category data in data protection law.
Use Case 1 in Chapter 4 explores the regulation of MDTs in the context of mental health and well-being under the General Data Protection Regulation (GDPR), the Medical Devices Regulation (MDR), the Artificial Intelligence Act (AIA), and the European Health Data Space (EHDS) Regulation. The analysis reveals that data protection issues in this sector are not primarily due to deficiencies in the law, but rather stem from significant compliance weaknesses, particularly in applications extending beyond the traditional medical sector. Consumer mental health and well-being devices could greatly benefit from co-regulatory measures, such as a sector-specific data protection certification. Additionally, legislators need to tackle the issue of manufacturers circumventing MDR certification due to ambiguities in the classification model. The EU’s regulatory approach to non-medical Brain–Computer Interfaces (BCIs) within medical devices legislation is highlighted as a potential blueprint and should be advocated in ongoing global policy discussions concerning neurotechnologies.
Critics from across the political spectrum attack social media platforms for invading personal privacy. Social media firms famously suck in huge amounts of information about individuals who use their services (and sometimes others as well), and then monetize this data, primarily by selling targeted advertising. Many privacy advocates object to the very collection and use of this personal data by platforms, even if not shared with third parties. In addition, there is the ongoing (and reasonable) concern that the very existence of Big Data creates a risk of leaks. Further, aside from the problem of Big Data, the very existence of social media enables private individuals to invade the privacy of others by widely disseminating personal information. That social media firms’ business practices compromise privacy cannot be seriously doubted. But it is also true that Big Data lies at the heart of social media firms’ business models, permitting them to provide users with free services in exchange for data which they can monetize via targeted advertising. So unless regulators want to take free services away, they must tread cautiously in regulating privacy.
The area where social media platforms have undoubtedly been most actively regulated is their data and privacy practices. While no serious critic has proposed a flat ban on data collection and use (since that would destroy the algorithms that drive social media), a number of important jurisdictions, including the European Union and California, have imposed important restrictions on how websites (including social media) collect, process, and disclose data. Some privacy regulations are clearly justified, but insofar as data privacy laws become so strict as to threaten advertising-driven business models, the result will be that social media (and search and many other basic internet features) will stop being free, to the detriment of most users. In addition, privacy laws (and related rules such as the “right to be forgotten”) by definition restrict the flow of information, and so burden free expression. Sometimes that burden is justified, but especially when applied to information about public figures, suppressing unfavorable information undermines democracy. The chapter concludes by arguing that one area where stricter regulation is needed is protecting children’s data.
This paper critically assesses the effectiveness of the EU AI Act in regulating artificial intelligence in higher education (AIED), with a focus on how it interacts with existing education regulation. It examines the growing use of high-risk AI systems – such as those used in admissions, assessment, academic progression, and exam proctoring – and identifies key regulatory frictions that arise when AI regulation and education regulation pursue overlapping but potentially conflicting aims. Central to this analysis is the concept of human oversight: while the AI Act frames oversight as a safeguard for accountability and fundamental rights, education regulation emphasises the professional autonomy of teachers and their role in maintaining pedagogical integrity. Yet, the regulatory role of teachers in AI-mediated environments remains unclear. Applying Mousmouti’s effectiveness test, the paper evaluates the AI Act along four dimensions – purpose, coherence, results, and structural integration with the broader legal framework – and argues that legal effectiveness in this context requires a more precise alignment between AI and education regulation.
Generative AI has catapulted into the legal debate through popular applications such as ChatGPT, Bard, and Dall-E. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. Generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning: the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which renders data processing nearly universal. In the case of LLMs, which scrape nearly the whole internet, training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
Artificial Intelligence (AI) can imperceptibly collect Big Data on users, identify their cognitive profiles, and manipulate them into predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can further enhance the potential for computational manipulation: it can make a user see and hear what is most likely to affect their decision-making, creating the perfect text accompanied by perfect images and sounds on the perfect website. Multiple international, regional, and national bodies have recognised the existence of computational manipulation and the possible threat to fundamental rights resulting from its use. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AI that relies on deception is addressed by existing EU legislation, some forms of computational manipulation, particularly where an LGAIM is used in the manipulative process, still fall outside the EU’s protective shield. Existing EU legislation therefore needs to be redrafted to cover every aspect of computational manipulation.
The concept of identifiability remains a foundational yet contentious criterion in European Union (EU) data protection law. Similarly, anonymisation has sparked intense debate.
This paper examines recent developments that have shaped the EU’s approaches to identifiability and anonymisation, including trends in the Court of Justice of the European Union (CJEU) case law, national supervisory authority (SA) assessments of anonymisation processes, and the recent European Data Protection Board (EDPB) Opinion 28/2024 addressing the anonymity of artificial intelligence models and EDPB Guidelines 01/2025 on pseudonymisation.
The paper explores how the balance between over-inclusiveness and under-inclusiveness is being calibrated, suggesting the emergence of a functional definition of personal data in CJEU case law. It underscores the importance of the burden of proof in evaluating anonymisation processes, as confirmed by national SA assessments. Finally, it highlights how to ensure consistency between the GDPR and data sharing mandates stemming from the new generation of EU data regulations.
The exponential growth of cross-border data flows and fragmented national and regional data protection standards have intensified regulatory challenges in global trade. The effects of regulatory divergence are amplified by a lack of transparency, potentially masking discriminatory practices. Article VII of the General Agreement on Trade in Services (GATS) offers a framework for recognition agreements to bridge these gaps but is not utilized in practice. This paper examines the interplay between GATS Article VII and the EU data adequacy decisions – currently the most comprehensive bilateral framework for assessing compatibility between data protection regimes among other WTO members. It argues that data adequacy frameworks qualify as recognition agreements/arrangements under GATS, offering potential to reduce the trade effects of differences in data protection laws globally while safeguarding regulatory autonomy. A roadmap for leveraging Article VII to advance international alignment is developed to help realize the dual goals of enhancing global cooperation and strengthening privacy protection.
The implementation of the General Data Protection Regulation (GDPR) in the EU, rather than the regulation itself, is holding back technological innovation. The EU’s data protection governance architecture is complex, leading to contradictory interpretations among Member States. This situation is prompting companies of all kinds to halt the deployment of transformative projects in the EU. The case of Meta is paradigmatic: both the UK and the EU broadly have the same regulation (GDPR), but the UK swiftly determined that Meta could train its generative AI model using first-party public data under the legal basis of legitimate interest, while in the EU, the European Data Protection Board (EDPB) took months to issue an Opinion that national authorities must still interpret and implement individually, leading to legal uncertainty. Similarly, the case of DeepSeek has demonstrated how some national data protection authorities, such as the Italian Garante, have moved to ban the AI model outright, while others have opted for investigations. This fragmented enforcement landscape exacerbates regulatory uncertainty and hampers the EU’s competitiveness, particularly for startups, which lack the resources to navigate an unpredictable compliance framework. For the EU to remain competitive in the global AI race, strengthening the EDPB’s role is essential.
With the continued advances in big data analytics and artificial intelligence (AI), it is now possible to rapidly adjust prices of goods and services offered in digital consumer markets. In particular, traders may try to increase their surplus from a purchase based on the availability of a variety of consumers’ data. This may result in different prices being charged to consumers based on their predicted willingness to pay.
The prospects of personalized pricing have sparked a vigorous debate in Europe. Although wide deployment of this practice in European Union (EU) markets has not been evidenced, it has already become a cause for concern.
The availability of data is a condition for the development of AI. This is no different in the context of healthcare-related AI applications. Healthcare data are required in the research, development, and follow-up phases of AI. In fact, data collection is also necessary to establish evidence of compliance with legislation. Several legislative instruments, such as the Medical Devices Regulation and the AI Act, introduce data collection obligations to establish (evidence of) the safety of medical therapies, devices, and procedures. Increasingly, such health-related data are collected in the real world from individual data subjects. The relevant legal instruments therefore explicitly state that they shall be without prejudice to other legal acts, including the GDPR. Following an introduction to real-world data, evidence, and electronic health records, this chapter considers the use of AI for healthcare from the perspective of healthcare data. It discusses the role of data custodians, especially when confronted with a request to share healthcare data, as well as the impact of concepts such as data ownership, patient autonomy, informed consent, and privacy and data protection-enhancing techniques.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
The central aim of this book is to provide an accessible and comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems more broadly. As these technologies have a growing impact on all domains of our lives, it is increasingly important to map, understand, and assess the challenges and opportunities they raise. This requires an interdisciplinary approach, which is why this book brings together contributions from a stellar set of authors from different disciplines, with the goal of advancing the understanding of AI’s impact on society and how such impact is and should be regulated. Beyond covering theoretical insights and concepts, the book also provides practical examples of how AI systems are used in society today and which questions are raised thereby, covering both horizontal and sectoral themes. Finally, the book also offers an introduction into the various legal and policy instruments that govern AI, with a particular focus on Europe.
This informative Handbook provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI's impact on society and how it should be regulated. This title is also available as Open Access on Cambridge Core.
Strategic litigation plays a crucial role in advancing human rights in the digital age, particularly in cases where data subjects, such as migrants and protection seekers, experience significant power imbalances. In this Article, we consider strategic litigation as part of broader legal mobilization efforts. Although some emerging studies have examined contestation against digital rights and migrant rights separately using legal mobilization frameworks, scholarship on legal mobilization concerning the use of automated systems on migrants and asylum seekers is scarce. This Article aims to address this gap by investigating the extent to which EU law empowers strategic litigants working at the intersection of technology and migration. Through an analysis of five specific cases of contestation and in-depth interviews, we explore how EU data protection law is leveraged to protect the digital rights of migrants and asylum seekers. This analysis takes a socio-legal perspective, analyzing the opportunities presented by EU data protection law and how civil society organizations (CSOs) utilize them in practice. Our findings reveal that the pre-litigation phase is particularly onerous for strategic litigants in this field, requiring a considerable investment of resources and time before even reaching the litigation stage. We illustrate this phase as akin to “climbing a wall,” characterized by numerous hurdles that CSOs face and the strategies they employ to overcome them.