
15 - Online Disinformation, Microtargeting, and Freedom of Expression

Moving beyond Human Rights Law?

from Introduction to Part III

Published online by Cambridge University Press:  24 October 2025

Tiina Pajuste
Affiliation:
Tallinn University

Summary

The rise of microtargeted online disinformation (MOD) has raised concerns over its harms to democracy and human rights. Debates over the regulation of MOD crystallise around Article 10 of the European Convention on Human Rights, the right to freedom of expression, and its limited capacity to regulate disinformation. As the chapter demonstrates, the effects of disinformation are compounded by microtargeting techniques. These facilitate the surgical spread of information to homogeneous groups, based on the analysis of people’s personal data. The chapter contends that human rights protection has shifted from human rights law to other legal regimes. They centre on the protection of personal data, the regulation of online platforms and search engines and the technological systems that propel them, and the use of targeted political advertising. The chapter demonstrates this claim with reference to selected European Union legal instruments, discussing their capacity to address the harmful effects of MOD. It will be argued that the broadening of human rights protection beyond human rights law should be welcomed, but it also has significant limitations, including enforcement gaps and wide-ranging scope for exemptions.

Information

Type: Chapter
Book: Human Rights in the Digital Domain: Core Questions, pp. 308–332
Publisher: Cambridge University Press
Print publication year: 2025
Licence: This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

15 Online Disinformation, Microtargeting, and Freedom of Expression Moving beyond Human Rights Law?

15.1 Introduction

Concerns over the harmful impact of online disinformation on the integrity of electoral processes, on democratic political cultures and values, and on human rights arose globally in 2016, when the UK held its referendum to leave the European Union (EU), and Donald Trump was first elected as president of the US.Footnote 1 Almost ten years later, following the experiences of the COVID-19 pandemic, wars in Ukraine and in Gaza, and the return of Trump to the US presidency, these concerns have not receded.Footnote 2 Worries over disinformation, colloquially referred to as ‘fake news’,Footnote 3 centre on its global reach and ease of access, but also on the thorny challenge of regulating online communication, and with it the power of platforms and search engines that facilitate the dissemination of inaccurate and misleading content. This chapter argues that the disinformation problematic is compounded by the proliferation of fake news via the practice of microtargeting, a term that describes the surgical spread of political and other messages to homogeneous social groups, drawing on the analysis of people’s personal data.Footnote 4 The chapter contends that the spread of microtargeted disinformation is of concern to human rights and democracy, as it distorts and fragments the information ecosystem, with harmful consequences for the right to freedom of expression and for democratic discourse. Data analytics techniques, which underpin microtargeting and serve as a vector for the dissemination of fake news, can lead to voter surveillance and interfere with the right to privacy. Elucidating the phenomenon of microtargeted online disinformation (MOD) and its effects on human rights and democracy propels my discussion, which is guided by three questions: What harms to human rights and democracy are produced by MOD? How can human rights law respond to these harms? What are the limits of human rights law?

How to respond to worries over MOD poses complex conceptual, sociological, and legal challenges. Regulatory efforts that seek to curb disinformation rub against legal protections of the right to freedom of expression, which is enshrined, for example, in Article 10 of the European Convention on Human Rights (ECHR) and in Article 11 of the Charter of Fundamental Rights of the EU.Footnote 5 There is also a real need to map and analyse the impact of microtargeted disinformation on a wider range of rights, including the right to privacy, and to unpack its broader implications for Article 10, such as its effect on the rights of minoritised voices. Paradoxically, despite the harmful effects of microtargeted disinformation on human rights, the chapter asserts that human rights law is ill-suited to address the full range of these harms. Considering the processing of personal data in the dissemination of disinformation, the chapter suggests that the protection of human rights has become displaced onto other legal regimes, such as data protection law, and is increasingly reliant on legal instruments with horizontal effect. Acknowledging the extensive regulatory activities within the EU, the chapter analyses a ‘European approach’Footnote 6 to microtargeted disinformation, examining selected recent EU legislative initiatives, the General Data Protection Regulation (GDPR),Footnote 7 the Digital Services Act (DSA),Footnote 8 the Artificial Intelligence Act (AIA),Footnote 9 and the Regulation on the Transparency and Targeting of Political Advertising,Footnote 10 and considers their capacity to regulate MOD and to mitigate potential harms to human rights and democracy. There are further reasons for examining the EU’s regulatory initiatives. First, EU legal instruments, such as the GDPR, are said to provide a regulatory ‘gold standard’, which is emulated in non-EU jurisdictions. Second, the extraterritorial dimension of EU law, the much invoked ‘Brussels effect’,Footnote 11 sets regulatory standards beyond the jurisdictional borders of the EU. Third, while EU human rights provisions defer to the norm-setting power of the Council of Europe and the jurisprudence of the European Court of Human Rights (ECtHR), EU secondary law, specifically EU regulations, provides bespoke regulatory tools with horizontal direct effect.

The chapter is structured as follows. Section 15.2 expounds the phenomenon of online disinformation and surveys concerns about the threat of disinformation to human rights and democracy. Drawing on the ECHR, specifically Article 10, and on selected case law of the ECtHR, Section 15.3 discusses the intersection of the right to freedom of expression with online disinformation and analyses the potential consequences for the regulation of disinformation and for a wider suite of Convention rights. Section 15.4 centres on the role and impact of microtargeting in the dissemination of disinformation, while Section 15.5 examines selected regulatory initiatives in the EU, focusing on the GDPR, the DSA, the AIA, and the Regulation on the Transparency and Targeting of Political Advertising. Section 15.6 summarises the main points covered in the chapter and identifies areas that require further work.

15.2 From Fake News to (Online) Disinformation: Challenges for Human Rights and Democracy

The rapid spread of online communication and the attendant ease of access to online content, facilitated by the rise of social media platforms such as Facebook, X, or TikTok, and by search engines such as Google, have enhanced the capacity for disseminating and receiving information, and increased the range and scope of civic engagement. These new opportunities for communicating and connecting have been welcomed as a way of informing and empowering people, and helping them to enjoy their human rights, such as the right to assembly and association.Footnote 12 However, online communication can also propel the dissemination of inaccurate and frequently misleading content to levels previously unimaginable. The term ‘fake news’, which was popularised during Donald Trump’s first tenure as US president (2017–21), has become shorthand for this type of content. Despite its widespread use, ‘fake news’ lacks an agreed definition and there is no consensus on the range of expressions that it refers to. These can vary from the entertaining barb of political satire to maleficent attempts that seek to damage public trust and confidence in the integrity of electoral processes and elected representatives,Footnote 13 and in public policy. For example, concerns over fake news accompanied the UK Brexit referendum in 2016 and the US presidential elections of 2016 and 2020.Footnote 14 Fake news stories included inaccurate reports about Turkey joining the EU, playing on fears of UK voters over immigration, and erroneous claims that the 2020 US presidential election was ‘stolen’.Footnote 15 There have also been concerns about mistaken and misleading information with respect to the COVID-19 pandemic that sought to undermine public policy efforts aimed at combating the disease.Footnote 16 Moreover, it should be stressed that faking is not limited to the dissemination of written text, such as tweets or Facebook posts. It extends to the manipulation of voices and images – so-called deepfakes – that can heighten mistrust in political leaders, institutions, and even national security.Footnote 17 Deepfakes gained notoriety when a speech delivered by Nancy Pelosi, the former leader of the Democrats in the United States House of Representatives, was altered to make it sound slurred. This alteration created the inaccurate impression that Pelosi was intoxicated. Such alteration is deceitful, and it can undermine the credibility of democratically elected politicians or of those preparing to stand for public office.Footnote 18 Furthermore, despite the contemporary interest in online disinformation, it is worth noting that fake news is not an invention of the digital age: from Octavian’s fake news info war against Mark Antony,Footnote 19 to the humorous 1835 hoax of batmen hunting bison on the moon,Footnote 20 and Joseph Goebbels’ Nazi propaganda apparatus, fake news has been part of public political life for more than two millennia. However, developments in the field of digital technologies, propelled by advances in artificial intelligence (AI), have put the propensity to spread fake news on steroids: fake news created by a few can reach millions of users at the push of a button.

How to deal with fake news has emerged as a central issue for policymakers and scholars. One of the key challenges relates to the aptness of the expression ‘fake news’. Despite its widespread use, the term has been discarded as loaded, deployed to discredit political opponents and critical media coverage of politicians and policies. For example, the UK House of Commons Digital, Culture, Media and Sport Committee report Disinformation and ‘fake news’: Final Report,Footnote 21 in one of the most thorough treatments of this topic, rejects the term and suggests instead the adoption of the words ‘misinformation’ and ‘disinformation’. The report defines misinformation as the ‘inadvertent sharing of false information’, while disinformation constitutes ‘the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purpose of causing harm, or for political, personal or financial gain’.Footnote 22 These definitions align with proposals developed by the EU’s High Level Expert Group on Fake News and Online Disinformation. It conceives of disinformation as ‘verifiably false or misleading information … which cumulatively … is created, presented and disseminated for economic gain or to intentionally deceive the public … and may cause public harms [as] threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security’.Footnote 23 A third category of fake news, malinformation, refers to ‘genuine information shared with the intention to cause harm’,Footnote 24 such as defamatory content.

Despite a broad preference for the term ‘disinformation’, there is no consensus regarding its impacts: whether these impacts constitute harm, who or what is being harmed, and how such harms should be addressed. Counselling against an ‘overdose of US perspectives’,Footnote 25 Bodo et al. contend that worries over disinformation amount to a moral panic, which is said to originate in US political culture and in the dominance of US perspectives on disinformation, and which is not applicable beyond the US.Footnote 26 Meyer and Marsden also assert that ‘evidence of large-scale harm is still inconclusive in Europe’.Footnote 27 In particular, there is a dearth of verifiable empirical evidence, which could demonstrate a significant effect of disinformation campaigns on electoral outcomes. However, this lack of empirical evidence does not diminish widespread concerns about the politically dangerous and potentially harmful impact of disinformation on democracy and on the integrity of electoral processes. For example, the EU regards online disinformation practices as ‘public harms’ and ‘threats to our way of life’.Footnote 28 These harms and threats are said to undermine trust and confidence in democracy, in public discourse, and in human rights.Footnote 29 Within public political discourse, concerns about disinformation harms have conjoined with broader worries over online harms, adding force to calls for the regulation of online content. To date, the debates have centred on harms to individuals regarded as vulnerable, specifically individuals with protected characteristics such as children, women, LGBTQ+ people, or people from ethnic minority backgrounds – social demographics who are frequently subjected to misogynistic, homophobic, transphobic, or racist hate speech online, or to online sexual abuse or threats of violence.Footnote 30 An emerging consensus that ‘the online and offline worlds cannot neatly be separated’,Footnote 31 that what is prohibited offline should be prohibited online, underpins the deliberations about the regulation of online communications.

However, the format and precise modalities of regulation remain contested and have emerged as a key challenge. Commenting on broader attempts to come up with a suitable regulatory design, Lilian Edwards asks whether we should ‘regulate by law … refuse to regulate till a clear path can be seen, or … turn to soft law, self-regulation, “co-regulation”, codes of conduct, technical standards, trustmarks, ethical charters, user democracy, who knows?’.Footnote 32 While attempts to regulate the dissemination of illegal content have turned to criminal law,Footnote 33 there are no quick and easy fixes that can offer effective, meaningful, and lawful ways to deal with disinformation. Designing regulatory instruments that can address the threats and harms posed by online disinformation generates seemingly intractable problems, which converge on three aspects: first, the diffuse and opaque nature and extent of disinformation harms and their typically intangible effects on society and on societal values such as human rights and democracy complicate regulatory efforts.Footnote 34 Scholarship on the societal harms of new technologies is only beginning to emerge,Footnote 35 while, as highlighted earlier, the harms caused by disinformation, notwithstanding significant normative concerns, remain empirically unproven. Second, disinformation operates in a novel information landscape, which lacks traditional (editorial) gatekeepers, creates online filter bubbles, and facilitates the spread of online (dis-)information to previously unimaginable levels and across jurisdictional boundaries. National regulatory landscapes have been described as opaque and fragmented, with overlaps and gaps,Footnote 36 while the speed and global reach with which the disinformation ‘infodemic’ infects public discourse limits the effectiveness of national regulation and requires instead collaboration,Footnote 37 and the difficult work of consensus building, at international, or at the very least regional, level.Footnote 38 There is also considerable unease about the role of private companies in the regulatory architecture, for example, whether they should be tasked with the sensitive role of regulating, and possibly censoring, online content. Third, tools such as algorithmic content moderation can be used to remove or block content, but these are blunt instruments that lack contextual understanding, such as the ability to distinguish satire from harmful disinformation or illegal content.Footnote 39 Moreover, different platforms and search engines operate differently and may require bespoke regulatory tools. For example, while X (formerly Twitter) is an open platform, others, such as WhatsApp, offer end-to-end encryption, making them less transparent but no less effective in the spread of disinformation.Footnote 40
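By way of illustration, the following minimal Python sketch shows why pattern-based moderation is blunt: a hypothetical keyword filter (the patterns and example texts are invented, and real systems use trained classifiers rather than regular expressions) flags a fabricated claim and a satirical headline alike, because nothing in a pattern match captures context, intent, or genre.

```python
import re

# Hypothetical blocklist of claim patterns; illustrative only.
SUSPECT_PATTERNS = [
    r"election\s+was\s+stolen",
    r"miracle\s+cure",
]

def flag_content(text: str) -> bool:
    """Return True if the text matches any suspect pattern."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in SUSPECT_PATTERNS)

fabricated_claim = "Leaked memo proves the election was stolen by officials."
satirical_headline = "Local cat blamed as owner jokes that the election was stolen whisker by whisker."

# Both are flagged: the filter cannot distinguish satire, irony, or reporting
# about a false claim from the disinformation itself.
print(flag_content(fabricated_claim))    # True
print(flag_content(satirical_headline))  # True
```

The same lack of contextual judgement affects more sophisticated classifiers, which is one reason why moderation tooling alone cannot resolve the regulatory questions set out above.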

These are important considerations for any analysis of disinformation, but this chapter’s main concern, and the focus of Section 15.3, is the linkage between disinformation and human rights that crystallises around the right to freedom of expression. To preview my argument, the ill-considered regulation of online content, including disinformation, may lead to potentially unlawful, unnecessary, and disproportionate interferences with the right to freedom of expression.Footnote 41 Rather than addressing the threats posed by disinformation, such interferences may generate new harms: they may undermine the functioning and values of the democratic processes that critics of unregulated speech worry about. Therefore, online disinformation is not amenable to broad-brush regulation, a factor that adds substantially to the difficulty of responding effectively to its harmful impact.

15.3 Freedom of Expression, Democracy, and Online Disinformation

The challenges posed by the spread of online disinformation are significant, but they should not deter efforts to develop regulatory instruments. As one commentator quipped, ‘the time for simply admiring the problem is over’.Footnote 42 However, as already stated, regulating online content, including disinformation, faces major hurdles. One such hurdle stems from states’ obligations with respect to the right to freedom of expression: these obligations generate a complex terrain for regulatory interventions into disinformation and pose significant barriers to actions that could be construed as interfering in human rights. Drawing on the framework of the ECHR, this section plots how the right to freedom of expression intersects with the spread of online disinformation. The discussion begins with an exposition of the right to freedom of expression in Europe’s regional human rights regime before problematising the legal and conceptual limitations of such a focus. The discussion demonstrates, first, that the right to freedom of expression limits states’ scope to regulate disinformation. Second, it will be argued that reading disinformation harms exclusively through the lens of freedom of expression does not suffice. The section proposes instead a nuanced human rights analysis, which attends to the impact of disinformation on diverse groups and which considers how disinformation impacts a range of other human rights beyond freedom of expression. Third, despite disinformation’s threat to human rights, I suggest that human rights law does not provide sufficient protection from the human rights harms caused by online disinformation.

The right to freedom of expression is enshrined in the system of international human rights law, which emerged in the aftermath of the Second World War. This right imposes negative and positive obligations on states to respect, protect and promote human rights.Footnote 43 Complementing international legal obligations, European human rights provisions for freedom of expression derive from Article 10 of the ECHR and from the jurisprudence of the ECtHR (or Strasbourg Court). Article 10 stipulates that:

  1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.

  2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.Footnote 44

The protections provided by Article 10 are not limited to speech and cover instead a wide range of expressions. These include artistic and commercial expressions, the publication of photographs, forms of conduct, rules governing clothing, and the use of the ‘Like’ button or similar expressions on social media networks. Article 10 also protects media freedom and grants a limited margin of appreciation with respect to interferences with journalistic expressions.Footnote 45 Moreover, freedom of expression extends to different modes of receiving and imparting information and of exercising one’s right to expression.Footnote 46 However, not all forms of expression are granted equal protection. Two related aspects are noteworthy in this context: first, the ECtHR presumes a hierarchy of expressions, which accords political speech the highest form of protection, followed by artistic and commercial speech.Footnote 47 Second, freedom of expression has special significance within the context of a democratic society. This principle is enshrined in the ECHR Preamble, which confirms a ‘profound belief in those fundamental freedoms which are … best maintained on the one hand by an effective political democracy and on the other by a common understanding and observance of the human rights upon which they depend’.

The importance of the right to freedom of expression within the system of ECHR rights is reflected in the case law of the ECtHR.Footnote 48 The leading case of Handyside v. the UK (1976) established that freedom of expression is ‘[i]ndissociable from democracy’,Footnote 49 and ‘one of the essential foundations of … a [democratic] society, one of the basic conditions for its progress and for the development of every man’. This is stressed in an often-cited passage in Handyside, which holds that freedom of expression

is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’. … every ‘formality’, ‘condition’, ‘restriction’ or ‘penalty’ imposed in this sphere must be proportionate to the legitimate aim pursued.Footnote 50

Subsequent judgments have reinforced the view that ‘freedom of political debate is at the very core of the concept of a democratic society which prevails throughout the Convention’.Footnote 51 For example, in Lingens v. Austria (1986), the Strasbourg Court proclaims that:

freedom of expression … constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment. … it is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb.Footnote 52

That Article 10 ‘enjoys a very wide scope, whether with regard to the substance of the ideas and information expressed, or to the form in which they are conveyed’,Footnote 53 is further emphasised in Mathieu-Mohin and Clerfayt v. Belgium (1987), which reiterates the ECHR Preamble’s link between fundamental human rights and freedoms as ‘best maintained by “an effective political democracy”’,Footnote 54 and which enshrines in particular the ‘prime importance’ of free expression in free elections.Footnote 55 Bowman v. the United Kingdom (1998), which draws on Lingens v. Austria (1986), further emphasises the protection of freedom of expression and of political speech, described as ‘the bedrock of any democratic system’:

Free elections and freedom of expression, particularly freedom of political debate … are inter-related and operate to reinforce each other … freedom of expression is one of the ‘conditions’ necessary to ‘ensure the free expression of the opinion of the people in the choice of the legislature’ … For this reason, it is particularly important in the period preceding an election that opinions and information of all kinds are permitted to circulate freely.Footnote 56

The protection of the right to freedom of expression, its special role in democratic societies, and the extension of this right to different modes of imparting and receiving information have also been confirmed in a series of cases relating to digital communication, specifically the digital dissemination of political speech.Footnote 57 Flipping the claim that what is prohibited offline should be prohibited online, one may read the ECtHR jurisprudence as asserting that what should be protected offline should be protected online. In its ‘Guide to Article 10’, the Court confirms the ‘innovative character of the Internet’ and states that ‘user-generated expressive activity on the Internet provides an unprecedented platform for the exercise of freedom of expression’.Footnote 58 The wide scope of Article 10, together with the principles articulated in the ECHR Preamble and in subsequent ECtHR jurisprudence, suggests that online disinformation may not be inherently unlawful and may, in fact, be protected in accordance with Article 10. Therefore, attempts to regulate online disinformation may constitute an interference with the right to freedom of expression. As is well known, interferences with Article 10, for example in the interest of national security, public safety, or other matters, as specified in Article 10(2), must be assessed against the ECtHR’s tripartite test of legality, proportionality, and necessity; they must also consider the additional import bestowed on political speech. These protections impose limits on interferences with the right to freedom of expression, and it is reasonable to surmise that these limits extend to attempts at interference with online disinformation.

Human rights law, specifically Article 10, creates a knotty problem for tackling online disinformation: as argued earlier, there are compelling normative concerns about the harmful impact of disinformation on human rights, yet we may plausibly conclude that disinformation can avail itself of the protections offered by Article 10. Addressing this problem requires a shift in perspective. Two aspects merit particular attention. First, although the regulation of disinformation risks disproportionate and unlawful interference in the right to freedom of expression, unfettered speech, whether offline or online, can also harm the wider communication ecosystem by silencing minoritarian voices.Footnote 59 Judit Bayer argues that ‘paradoxically from the perspective of Article 10 of ECHR, freedom of speech was to be restricted with the objective to preserve a sound informational environment; because pluralism of views, and ultimately the democratic process would otherwise have been distorted by the speech in question’.Footnote 60 Acknowledging these broader effects of disinformation calls for more granular analyses, which study how disinformation impacts the right to freedom of expression for a diverse range of individuals and groups. Online disinformation can generate structural conditions, which silence and exclude marginalised voices, and thus restrict their enjoyment of the right to freedom of expression. Second, conjoining critical analyses of disinformation exclusively with the right to freedom of expression risks losing sight of the effects of disinformation on the wider communication and information ecosystem and on a wider range of human rights, including the right to privacy (Article 8 ECHR), freedom of assembly and association (Article 11), the prohibition of discrimination (Article 14), or the prohibition of an abuse of rights (Article 17). This calls for a broader engagement with human rights law, beyond Article 10, and, as will be discussed in the remainder of the chapter, with legal regimes that offer new or additional protections for human rights.

15.4 Microtargeting: Benefits and Harms to Human Rights and Democracy

What compounds concerns about the impact of online disinformation on human rights and democracy and complicates regulatory efforts is the practice of microtargeting. The term ‘microtargeting’ describes the surgical, selective, and frequently opaque dissemination of tailored political or commercial communication to pre-identified, typically homogeneous audiences.Footnote 61 Through the use of data analytics, microtargeting can generate audience profiles based on social demographics such as gender, age, or ethnicity, but also philosophical beliefs or political opinions. These segmented audiences, for example, of consumers or voters, can be strategically targeted with bespoke messages.Footnote 62 The data typically required for microtargeting is personal data, defined in Article 4(1) of the GDPR as:

any information relating to an identified or identifiable natural person (‘data subject’) … who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

How such personal data is acquired can vary. It can be provided by the data subject (‘provided data’), such as a person’s political opinions, which he or she may share on social media platforms. It can also be processed through the use of cookies or tracking pixels (‘observed data’). Or it can involve inferred data, which is based on probabilities emerging from the analysis of provided or observed data by finding ‘correlations between datasets and using these to categorise or profile people, e.g., calculating credit scores or predicting future health outcomes’.Footnote 63
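To make these three categories concrete, the following schematic Python sketch (the field names, keywords, and scoring heuristic are hypothetical and purely illustrative) shows how a profile used for microtargeting might combine provided, observed, and inferred data, with the inferred attribute expressed as a probability derived from the other two.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Profile:
    # 'Provided' data: disclosed by the data subject, e.g. on a social media platform.
    stated_political_opinion: Optional[str] = None
    # 'Observed' data: collected via cookies, tracking pixels, or click streams.
    pages_visited: list = field(default_factory=list)
    # 'Inferred' data: a probabilistic attribute derived from the other categories.
    inferred_interest_in_immigration: Optional[float] = None

def infer_interest(profile: Profile) -> Profile:
    """Toy inference step: estimate topical interest from browsing history.

    Real data-analytics pipelines rely on statistical models trained on large
    datasets; the point here is only that the inferred attribute is a
    probability attached to the person, not something the person disclosed.
    """
    keywords = ("immigration", "border", "asylum")
    if profile.pages_visited:
        hits = sum(any(k in page.lower() for k in keywords) for page in profile.pages_visited)
        profile.inferred_interest_in_immigration = hits / len(profile.pages_visited)
    return profile

voter = Profile(pages_visited=["news/immigration-debate", "sport/results", "news/border-policy"])
print(infer_interest(voter).inferred_interest_in_immigration)  # roughly 0.67
```

An audience segment for microtargeting would then be assembled by filtering many such profiles on the inferred attribute, which is precisely the kind of processing that engages the data protection concerns discussed below.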

Microtargeting has attracted significant public interest since the mid-2010s, when its deployment became associated with the practices of the data analytics company Cambridge Analytica.Footnote 64 Cambridge Analytica created data profiles of typically undecided voters, which aligned with their social media profiles, and which were used to target comparatively small cohorts of voters with selective and often inaccurate messages, especially on highly emotive issues such as immigration. However, it is important to stress that microtargeting is not inherently political in either content or objectives, and that it can be deployed for a range of purposes, including commercial goals.Footnote 65 This is because microtargeting techniques are equally amenable for commercial or political ends, or a combination of both.Footnote 66 Moreover, there is no inherent reason to associate microtargeting with disinformation: microtargeting is not intrinsically wedded to the spread of disinformation, while disinformation can proliferate without resorting to microtargeting practices. However, the conjoining of online disinformation with the practice of microtargeting adds to the significant concerns about harms to elections, to democratic practices and democratic political cultures, and to human rights. This also extends the remit of analysis, beyond a focus on freedom of expression, to include data protection and privacy issues.

Despite extensive scholarly and public political interest in microtargeting and its respective benefits and harms, its effectiveness remains contested. For example, Borgesius et al. remind us that voters do not live in digital bubbles and may not be receptive to microtargeted messages.Footnote 67 This may diminish the capacity of microtargeted messages to influence the outcome of elections. We may also surmise that companies that offer microtargeting services may exaggerate their value.Footnote 68 This lack of empirical evidence on the effectiveness of microtargeting could, potentially, alleviate concerns over its harms. However, suggestions that microtargeting, whether deployed in political or commercial marketing campaigns, may offer benefits to the sender and receiver of targeted messages indicate a continued confidence in its usefulness.Footnote 69 One of its alleged benefits is said to derive from its shift from ‘broadcasting’, a term that depicts the wide-ranging spread of political or commercial messages to general audiences, to ‘narrowcasting’, which provides receivers, such as voters or consumers, with information on issues that selected audiences regard as relevant, and that speak directly to their interests, needs, or concerns.Footnote 70 For example, in the field of commercial advertising, microtargeting can support the marketing of goods and services to consumers in search of specific products, and reduce advertisement overload for those who are not interested in these products. Microtargeting can also support the transmission of public policy messages in fields such as health or welfare.Footnote 71 For example, microtargeting has been deployed to communicate tailored skin cancer prevention messages to young women using sunbeds,Footnote 72 or to disseminate information about welfare programmes to people living in deprived areas.Footnote 73

Targeting segmented audiences with bespoke messages is also said to offer distinct benefits in the democratic process. For example, microtargeting may engage hard-to-reach individuals and communities, typically those who are ‘switched off’ from the political process. (Re-)engaging sections of the electorate with political issues and electoral processes may offer non-partisan benefits that, rather than undermining democratic politics, may in fact strengthen democratic systems. However, its main advantage is said to lie in the partisan benefits to political campaigns. Although there is no consensus on whether microtargeting can mobilise undecided voters, there is a view that it can ‘activate the base … and improve partisan turnout’.Footnote 74 This explains why microtargeting has been deployed so widely in election campaigns and referenda.

Critical perspectives on microtargeting contend that a focus on empirical evidence about its effectiveness with respect to elections is blind to the substantial threats, including its weaponisation of personal data and its effect on democratic infrastructures.Footnote 75 These analyses ground their arguments in normative claims about the harmful effects of microtargeting that are bound up with three interrelated issues: first, the capacity of microtargeting to disseminate disinformation to selected audiences; second, its effects on the information ecosystem and on the right to freedom of expression; and third, the consequences for privacy and data protection. For example, there are concerns that microtargeted disinformation may suppress voter turnout, and that the use of microtargeting to mobilise small groups of voters, typically in swing states in US elections, can focus on so-called wedge issues. These are polarised issues that can frame and dominate public political discourse and undermine the coherence of the wider polity by detracting attention from issues that concern voters across the party-political spectrum.Footnote 76 These concerns intersect with worries over the impact of microtargeting on privacy, specifically with the harvesting of personal data, and attendant concerns over data protection, data security, and voter surveillance, based on the use of predictive analytics and the processing of inferred data.Footnote 77 There are additional normative concerns that the impact of microtargeting extends beyond individual privacy. For example, Bennett and Lyon contend that it generates collective and societal effects, which are ‘not just about privacy, but even more so about data collection and governance, freedom of expression, disinformation, and democracy itself’.Footnote 78 Zittrain asserts that what he calls ‘digital gerrymandering’ is ‘not a wrong to a given individual user, but rather to everyone, even non-users’.Footnote 79

Such worries about the collective and systemic effects of microtargeting also inform the work of Judit Bayer. Addressing the impact of microtargeting on democracy, she calls for a restriction of microtargeting, not because of its alleged manipulation of voters and interferences with the right to privacy, but because it fragments public discourse and threatens the democratic process. According to Bayer, microtargeting presents a double-harm: it comprises the harm of being targeted, but also the harm of not being targeted, which, according to her, constitutes a violation of informational rights, a potential ‘mass violation of human rights’ of all those who are not targeted.Footnote 80 She declares that the right to receive information and the right to freedom of expression are complementary: ‘[w]hen the right to receive information is violated, it is freedom of expression in its broader sense, which is violated.’Footnote 81 Conjoining the analyses offered by Bennett and Lyon, Zittrain, and Bayer, we may conclude that the profiling and microtargeting of online users, based on the ‘analysis of their data profile’ creates an information asymmetry,Footnote 82 which violates informational rights,Footnote 83 and shatters the ‘shared world’ of political deliberation.Footnote 84 A fragmented and opaque (dis-)information basis of public political discourse, which targets selected social media users with personalised (dis-)information based on data analytics techniques poses harm to the democratic polity, culminating in the ‘polarisation and fragmentation of the public sphere’.Footnote 85 It leaves in its wake citizens who are informationally isolated and atomised.

Despite these normative concerns about MOD’s harms to human rights and democracy, a consensus on how it should be regulated remains elusive. As highlighted in Section 15.3, human rights law creates significant barriers to interferences with freedom of expression; these barriers extend, potentially, to disinformation practices. There is as yet no ECtHR jurisprudence on the issue of disinformation,Footnote 86 and we may conclude that online content, including disinformation, and the mode of delivery – microtargeting – are protected under the broad umbrella of Article 10. Attention has turned instead to legal regimes beyond human rights law. Section 15.5 illustrates this shift in focus with respect to selected EU legal instruments.

15.5 Regulating MOD: The EU’s Legislative Actions

The notion of a ‘marketplace of ideas’ popularised by Justice Holmes’s dissenting judgment in Abrams v. United States implies that a public realm constituted by uncoerced dialogue has sufficient self-regulatory capacity to inoculate itself against the effects of harmful speech.Footnote 87 More than a century later, there are well-founded concerns that Holmes’s vision of self-regulation may not suffice to counter the harms generated by unfettered online content. Therefore, despite the affordances of online platforms and search engines to expand opportunities for freedom of expression and other human rights, there are persistent worries that online disinformation compounds harms to individuals and communities and exerts chilling effects on democracy and human rights.Footnote 88 It is against this backdrop that the regulation of online expression and of platforms and search engines has attracted growing interest.Footnote 89 However, as discussed earlier in the chapter, balancing the right to freedom of expression, which includes the right to ‘offend, shock or disturb’,Footnote 90 with protection from the individual, collective, and societal harms generated by MOD poses seemingly intractable challenges. Regulating online expressions may not provide a sufficiently granular response to the specific threats posed by MOD. In fact, one could plausibly argue that content regulation constitutes a regulatory scattergun, which risks missing the desired aim – that of MOD – because it is blind to diverse types of online communications and their attendant set of challenges.

Recent EU policy and legislative initiatives promise more granular approaches to deal with this problem. They are anchored in EU primary law, including Article 16 of the Treaty on the Functioning of the EU and the Charter of Fundamental Rights (hereinafter the Charter), the EU’s premier human rights instrument. The latter includes the right to privacy (Article 7 of the Charter), the right to protection of personal data (Article 8 of the Charter), and the right to freedom of expression and information (Article 11 of the Charter). The EU’s legislative programme is supported by a series of agenda-setting policy recommendations, Communications, and soft law instruments, which offer additional, albeit ultimately non-enforceable, ways to regulate the dissemination of disinformation via the practices of microtargeting. They include the European Commission’s ‘2030 Digital Compass: the European way for the Digital Decade’,Footnote 91 the European Democracy Action Plan,Footnote 92 the Strengthened Code of Practice on Disinformation,Footnote 93 and the European Declaration on Digital Rights and Principles for the Digital Decade.Footnote 94 Also noteworthy are statements and opinions issued by the Article 29 Data Protection Working Party, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).Footnote 95 Of key interest is an emerging suite of EU secondary law, which centres on the main elements involved in the dissemination of MOD, such as the regulation of data processing, the regulation of online platforms and search engines, the regulation of AI systems, and the regulation of political advertising. Despite differences in material scope, these laws share three features: first, they have direct horizontal effect, which imposes rights obligations not just on states or an emanation of a state, but also on non-state actors, including private companies. Second, they share loopholes, wide-ranging exemptions, and monitoring and enforcement gaps, which are compounded by the significant regulatory roles accorded to private actors. Third, they manifest an unresolved tension between the EU’s twin objectives of protecting the internal market and promoting innovation on the one hand, and its obligation to protect fundamental rights on the other.

15.5.1 Regulating Data Processing: The GDPR

The use of data analytics in microtargeting underscores the imperative to protect personal data, as enshrined in Article 8 of the Charter, and highlights the importance of data protection laws in the regulation of online content. The GDPR, the EU’s most established and perhaps most widely known legislative instrument, centres on the processing of personal data (Article 4 of the GDPR), while its provisions pursue two goals: first, respect for fundamental rights and freedoms, including the right to protection of personal data; and second, the creation of harmonised rules within the EU internal market (Recital 2).Footnote 96 To deliver on these objectives, the GDPR puts forward a series of data protection principles, including transparency, purpose limitation, data minimisation, accuracy, and integrity and confidentiality (Article 5),Footnote 97 which are intended to guide the processing of personal data. As stated in Recital 4, the GDPR ‘respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties’, including the right to freedom of expression (Article 11 of the Charter). Thus, data protection principles, and the right to the protection of personal data, are not absolute but must be balanced against other Charter rights.

The GDPR is ‘technologically neutral’ (Recital 15); it is not designed to regulate the dissemination of disinformation in general or the use of microtargeting techniques in particular. However, its provisions for the processing of personal data provide tools that can be used to regulate the microtargeting of voters via the use of inferred data. Four aspects are particularly noteworthy. First, the GDPR’s extraterritorial scope (Article 3) extends the application of the Regulation beyond the EU and may affect the dissemination of microtargeted online content that originates outside the EU. Second, it connects the right to the protection of personal data with the right to freedom of expression and information, including processing for journalistic purposes (Article 85(1)). Third, it prohibits the processing of ‘special categories of data’, including that of racial or ethnic origin, political opinion, philosophical beliefs, or health data (Article 9(1)). And fourth, it includes the right not to be subject to a decision based solely on automated processing, including profiling (Article 22(1)). Articles 9(1) and 22(1), separately as well as combined, appear to offer robust protection against the harmful effects of data analytics practices, including those that underpin microtargeting. However, each of the two articles is diluted by a range of exemptions. For example, Article 9(2)(d) provides exemptions for the processing of special category data for associations with political aims or where data subjects have made their personal data public (Article 9(2)(e)).
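The combined effect of Articles 9(1) and 22(1), as described above, can be pictured as a pre-processing check. The following minimal Python sketch is a simplification under stated assumptions: the category list is abridged, only explicit consent is modelled among the Article 9(2) and 22(2) derogations, and the boolean inputs are hypothetical shorthand, so it illustrates the structure of the rules rather than an actual compliance test.

```python
# Abridged from Article 9(1) GDPR; the full provision lists further categories.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin",
    "political_opinions",
    "philosophical_beliefs",
    "health_data",
}

def processing_permitted(data_fields: set,
                         solely_automated_decision: bool,
                         explicit_consent: bool) -> bool:
    """Simplified check loosely modelled on Articles 9 and 22 GDPR."""
    uses_special_data = bool(data_fields & SPECIAL_CATEGORIES)
    if uses_special_data and not explicit_consent:
        return False  # Article 9(1): processing prohibited absent an exemption
    if solely_automated_decision and not explicit_consent:
        return False  # Article 22(1): right not to be subject to such decisions
    return True

# A microtargeting pipeline profiling voters on inferred political opinions
# without explicit consent fails both limbs of the check.
print(processing_permitted({"political_opinions", "age"}, True, False))  # False
```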

Analysing the GDPR’s provisions, Blasi Casagran and Vermeulen have also identified compliance issues with respect to political microtargeting, specifically with the data protection principles as outlined in Article 5 of the GDPR, including lawful processing, purpose limitation, data minimisation, data accuracy, and data accountability.Footnote 98 Their concerns align with those of the EDPB, which highlights the risks of microtargeting not only for ‘individuals’ rights to privacy and data protection, but also to wider trust in the integrity of democratic processes themselves.Footnote 99 The EDPB has counselled that derogations from Article 9 ‘should be interpreted narrowly, as it cannot be used to legitimate inferred data’.Footnote 100 There are additional concerns over the enforcement of GDPR provisions, and over the way in which human rights protection is balanced against other interests, including the development of harmonised rules across the EU’s internal market.

15.5.2 Regulating Online Platforms: The DSA

While the GDPR focuses on the processing of personal data, the DSA, which entered into force on 16 November 2022, is the EU’s bespoke tool for dealing with the business models and practices of online service providers.Footnote 101 Like other EU legislative actions, the DSA speaks to the EU’s twin concerns of protecting the internal market via a harmonisation of ‘uniform, effective and proportionate mandatory rules’ (Recital 4) while also protecting fundamental rights enshrined in the EU Charter. Its main objectives are ‘to prevent illegal and harmful activities online and the spread of disinformation’.Footnote 102 Recognising the ‘inherently cross-border nature of the internet’ (Recital 2), the DSA establishes a graduated and tailored approach, which imposes increasingly stringent due diligence obligations on intermediary service providers, depending on their size and reach. The most stringent obligations are imposed on very large online platforms and very large online search engines, which Article 33(1) defines as having a number of average monthly active recipients of the service in the EU equal to or higher than 45 million.
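The quantitative trigger in Article 33(1) can be read as a simple threshold test, as the following Python sketch illustrates; the service names and user figures are invented for illustration.

```python
# Article 33(1) DSA: average monthly active recipients of the service in the EU.
VLOP_THRESHOLD = 45_000_000

def is_very_large(avg_monthly_active_eu_recipients: int) -> bool:
    """True if a platform or search engine meets the 'very large' designation threshold."""
    return avg_monthly_active_eu_recipients >= VLOP_THRESHOLD

# Invented figures for illustration only.
services = {"PlatformA": 120_000_000, "PlatformB": 3_500_000}
for name, recipients in services.items():
    tier = ("very large: strictest due diligence obligations"
            if is_very_large(recipients) else "standard obligations")
    print(f"{name}: {tier}")
```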

The regulatory guidelines presented in the DSA are technology neutral. Its primary concern lies with a ‘safe, predictable and trusted online environment’ (Recital 9) through a graded, risk-based approach. Three categories of risk are identified: first, the dissemination of illegal content and conduct of illegal activities, such as child sexual abuse or terrorist content; second, the impact of digital services on the exercise of fundamental human rights; and third, the manipulation of platform services which has an impact, among other things, on civic discourse and electoral processes. The DSA requires intermediary service providers to have due regard to relevant international standards for the protection of human rights, including freedom of expression, media freedom, and pluralism (Recital 47). It acknowledges the wider impact of disinformation on fundamental rights (Recitals 9 and 86) and on democratic processes (Recital 82), and it recognises the systemic risks of disinformation and its impact on society and democracy (Recital 104), including the risks posed by online services to ‘civic discourse and electoral processes, and public security’ (Article 34(1)(c)). More specifically, the DSA acknowledges risks stemming from the use of online advertisements and associated targeting techniques. It stipulates that ‘providers of online platforms should not present advertisements based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679, using special categories of personal data referred to in Article 9(1) of that Regulation, including by using profiling categories based on those special categories’ (Recital 69). Despite its recognition of the harm from disinformation, references to disinformation are relegated to the non-binding recitals. With its focus on illegal activities, the DSA offers few effective interventions into disinformation practices.

15.5.3 Regulating Techniques: The AIA

Broadly mirroring the approach of the DSA, the AIA focuses on the regulation of AI systems.Footnote 103 An AI system is defined as a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)). The Act’s purpose, as outlined in Recital 1, is ‘to improve the functioning of the internal market by laying down a uniform legal framework’ that strikes a balance between fostering the development of a single market for lawful, safe, and trustworthy AI applications while ensuring the protection of fundamental rights and societal values, including democracy and the rule of law. To deliver on this goal, the AIA adopts a risk-based approach, which categorises AI systems into four risk levels, ranging from unacceptable and thus prohibited risks through high risk and limited risk to minimal or no risk. Its legally binding provisions delineate obligations and responsibilities for AI developers and users, seeking to foster an AI landscape that promotes human-centric and trustworthy AI systems and ensures a high level of protection of health, safety, and fundamental rights (Article 1(1)).
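The four tiers can be represented as an ordered classification. The following Python sketch is a schematic triage only: the boolean flags are hypothetical shorthand, whereas the Act itself works with detailed definitions, Annex III use cases, and exemptions.

```python
from enum import Enum

class AIARiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (Article 5)"
    HIGH = "high risk (strict obligations, e.g. Annex III use cases)"
    LIMITED = "limited risk (transparency obligations, e.g. Article 50)"
    MINIMAL = "minimal or no risk (voluntary codes of conduct)"

def classify(system: dict) -> AIARiskTier:
    """Toy triage of an AI system into the AIA's four risk tiers."""
    if system.get("prohibited_practice"):
        return AIARiskTier.UNACCEPTABLE
    if system.get("high_risk_use_case"):
        return AIARiskTier.HIGH
    if system.get("interacts_with_people_or_generates_content"):
        return AIARiskTier.LIMITED
    return AIARiskTier.MINIMAL

print(classify({"interacts_with_people_or_generates_content": True}))  # AIARiskTier.LIMITED
```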

Although the Act has not been designed as a bespoke human rights instrument – the AIA refers to fundamental rights – it acknowledges concerns about AI’s impact on rights at various stages of the AI lifecycle, from design and development to the placing of AI systems on the market. Its provisions for human rights protection are part of a broader mission, which seeks to harmonise the legal regulation of AI across the EU, and foster innovation aimed at establishing the EU as a ‘global leader in the development of secure, trustworthy and ethical AI’ (Recital 8). Concerns over AI’s impact on human rights feature prominently in the (non-binding) recitals and in the articles. The Act promotes alignment with EU values of ‘respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights’ (Recital 28). To operationalise these concerns, the Act introduces a series of regulatory governance techniques. These include an obligation to conduct fundamental rights impact assessments for high-risk AI systems,Footnote 104 which applies to AI deployers governed by public law or private entities providing public services (Article 27), transparency obligations (Article 50), and the reporting of serious incidents (Article 73). The Act also stipulates the voluntary application of codes of conduct, including adherence to EU guidelines on ethical AI (Article 95(2)(a)), facilitating inclusive and diverse AI design through inclusive and diverse development teams and stakeholder participation (Article 95(2)(d)), assessing and preventing the negative impact of AI on vulnerable groups and on gender equality (Article 95(2)(e)), and establishing an advisory forum to the AI Board with civil society representation (Article 67(2)).

There is a recognition of the potentially maleficent impact of high-risk AI systems on democracy and democratic processes, primarily in the recitals. Recital 110 addresses the capabilities of general-purpose AI models, specifically their capacity to facilitate the spread of disinformation and pose threats to democratic values and human rights. Developing this theme, Recital 120 considers how AI systems deployed by very large online platforms and very large online search engines can disseminate artificially generated or manipulated content, with ‘actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation’ (see also Recital 136). The AIA makes specific reference to deepfakes, defined as ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful’ (Article 3(60)). It imposes a specific obligation on the deployers of deepfakes to disclose that the content has been artificially generated or manipulated if text is published ‘with the purpose of informing the public on matters of public interest’ (Article 50(4)). Overall, though, there is limited engagement with the issue of microtargeted disinformation, and it is fair to conclude that the AIA’s provisions reveal a substantial gap between its normative commitments to protect human rights and the exemptions and regulatory loopholes in the legally binding provisions.
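Returning to the Article 50(4) duty mentioned above, one way to picture it is as a labelling step applied before publication. The Python sketch below is schematic: the metadata fields and label text are invented for illustration and do not describe any actual platform mechanism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PublishedItem:
    description: str
    ai_generated_or_manipulated: bool
    informs_public_on_matter_of_public_interest: bool
    disclosure_label: Optional[str] = None

def apply_disclosure(item: PublishedItem) -> PublishedItem:
    """Attach a disclosure label where the Article 50(4) AIA duty would apply."""
    if item.ai_generated_or_manipulated and item.informs_public_on_matter_of_public_interest:
        item.disclosure_label = "This content has been artificially generated or manipulated."
    return item

item = apply_disclosure(PublishedItem(
    description="Synthetic video of a candidate's speech",  # a deepfake in the Article 3(60) sense
    ai_generated_or_manipulated=True,
    informs_public_on_matter_of_public_interest=True,
))
print(item.disclosure_label)
```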

15.5.4 Regulating Political Advertising: The Regulation on the Transparency and Targeting of Political Advertising

Despite the huge public and scholarly interest in the DSA and the AIA, neither of these two instruments offers wide-ranging or bespoke regulatory tools to address the problem of MOD. The new Regulation on the Transparency and Targeting of Political Advertising (TTPA), which was adopted on 13 March 2024 and applies from 10 October 2025, promises to fill this gap by becoming the EU’s most tailored legislative response to the challenge of the microtargeting of political advertising to date. A relatively short regulation – it consists of thirty articles – the TTPA seeks to create synergies with the GDPR and the DSA. For example, the TTPA utilises the GDPR’s provisions for the processing of personal data. Building on the DSA, the TTPA foregrounds the principle of transparency and presents a risk-based approach. Its two objectives are to contribute to ‘the proper functioning of the internal market for political advertising and related services’ (Article 1(4)(a)) and ‘to protect the fundamental rights and freedoms’ enshrined in the Charter, ‘in particular the right to privacy and the protection of personal data’ (Article 1(4)(b)). It seeks to achieve these objectives by creating ‘harmonised rules, including transparency and related due diligence obligations, for the provision of political advertising and related services’ (Article 1(1)(a)) and ‘harmonised rules on the use of targeting techniques and ad-delivery techniques that involve the processing of personal data in the context of the provision of online political advertising’ (Article 1(1)(b)).

Underpinning the TTPA’s provisions is a recognition that ‘[p]olitical advertising can be a vector of disinformation’ (Recital 4) and that ‘the misuse of personal data through targeting, including microtargeting … may present particular threats to legitimate public interests’ (Recital 6), including threats to fundamental rights such as freedom of expression, privacy, and equality. Three aspects are significant in this context. First, the TTPA establishes a direct link between microtargeting, disinformation, and political advertising, the latter defined as ‘the preparation, placement, promotion, publication or dissemination, by any means, of a message’ (Article 3(1)). Second, the TTPA recognises that the targeting and amplification techniques which disseminate political advertising can negatively impact the democratic process and exploit the vulnerabilities of data subjects, with ‘specific and detrimental effects on citizens’ fundamental rights and freedoms with regard to the processing of their personal data and their freedom to receive objective information’ (Recital 74). Targeting or amplification techniques are ‘techniques that are used either to address a tailored political advertisement only to a specific person or group of persons or to increase the circulation, reach or visibility of a political advertisement’ (Article 2(8)). Such techniques should be prohibited unless the data subject has given explicit consent and appropriate safeguards are in place. In this respect, the TTPA seeks to close some of the derogations established in Article 9(2) of the GDPR, but it stops short of an outright prohibition: targeting techniques remain permissible where explicit consent from the data subject has been provided and the targeting does not involve profiling (Article 18(1c)). Political parties, foundations, associations, or other non-profit bodies are exempt from these requirements provided that their communications are based on subscription data (Article 18(3)). Article 19 specifies additional transparency requirements, including publication of internal policies, record keeping, and internal annual risk assessments. Third, the TTPA does not interfere with the substantive content of political messages; rather, it seeks to protect the content of political advertising from unlawful interference.

The TTPA is designed to go further than other EU instruments with respect to the practice of microtargeting. Unlike the DSA, it is not focused on the regulation of illegal online content. It also addresses the wider impact of targeting and amplification techniques on a broader range of fundamental rights and on democracy itself. However, by exempting the editorial freedom of the media (Recital 29) and ‘the sharing of information through electronic communication services, such as electronic message services … provided that no political advertising service is involved’ (Recital 48), the TTPA’s scope excludes significant avenues for the dissemination of targeted disinformation; for example, through platforms such as WhatsApp.Footnote 105 Moreover, its requirement of informed consent for targeting and amplification is ill-suited to the practice of microtargeting, especially where it involves interference by foreign states or other bad actors. This will require further thought and new regulatory tools.

15.6 Conclusion

This chapter has surveyed the phenomenon of MOD and critically assessed whether human rights law is equipped to respond to this complex challenge. Through a focus on selected EU legislative instruments, and cognisant of the broader legal-normative order of the ECHR and the jurisprudence of the ECtHR, the discussion has identified considerable hurdles with respect to the regulation of MOD. Given the wide-ranging protections for the right to freedom of expression, the chapter has contended that the scope for human rights law to address the harms generated by MOD is limited. The chapter began by expounding the concept of disinformation before discussing the right to freedom of expression and the limits it places on regulating disinformation – a form of communication not normally regarded as unlawful. Building on this discussion, the chapter then examined the practice of microtargeting and attendant data analytics, and their effects on the right to privacy. A survey and analysis of recent legislative initiatives in the EU illustrated how human rights protection has extended beyond human rights law; for example, by regulating the practices of very large online platforms, or by harnessing the tools of data protection law. While this ‘European approach’ can offer important lessons for the protection of human rights,Footnote 106 the chapter also highlighted important shortcomings related to issues such as enforcement and the wide-ranging scope for derogations and exemptions.

The regulation of MOD remains an evolving area. As emphasised in scholarly work and in policy debates, legal regulation is no panacea for the harms to human rights and democracy caused by fake news. The chapter therefore echoes the recommendation of Meyer and Marsden, who call for a holistic approach to disinformation that includes the development of digital literacy across all age groups.Footnote 107 This focus on digital literacy should be aligned with a broader political literacy strategy, which supports the capacity of citizens to access and analyse information and to participate in political discourse, both online and offline. As the recent push against the regulation of tech companies suggests,Footnote 108 there are worrying changes in the international political climate that propel an anti-regulatory turn. Whether EU law can withstand the pressure exerted by these powerful global actors remains to be seen.Footnote 109

Footnotes

I wish to thank Tiina Pajuste and Oscar Raúl Puccinelli for their generous and thoughtful comments on an earlier version of this chapter.

1 See, e.g., BBC World, ‘Fake news in 2016: what it is, what it wasn’t, how to help’, 30 December 2016, www.bbc.co.uk/news/world-38168792.

2 See, e.g., D. M. West, ‘How disinformation defined the 2024 election narrative’, 7 November 2024, Brookings Institution, www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/. See also M. Susi et al. (eds.), Governing Information Flows During War: A Comparative Study of Content Governance and Media Policy Responses After Russia’s Attack against Ukraine (Hamburg: Verlag Hans-Bredow-Institut 2022), graphite.page/gdhrnet-wp4/.

3 In 2017, ‘fake news’ was voted word of the year by the dictionary publisher Collins. See BBC News, ‘What is 2017’s word of the year?’, 2 November 2017, www.bbc.co.uk/news/uk-41838386#:~:text=A%20phrase%20consistently%20in%20the,he%20railed%20against%20the%20media.

4 Information Commissioner’s Office, ‘Microtargeting’, https://ico.org.uk/for-the-public/microtargeting/.

5 All references to freedom of expression will refer to the ECHR. The Convention provides a benchmark for human rights protection across the member states of the Council of Europe and has special legal standing within the EU.

6 I. Nenadić, ‘Unpacking the “European approach” to tackling challenges of disinformation and political manipulation’ (2019) 8 Internet Policy Review 4, 1–22, https://policyreview.info/articles/analysis/unpacking-european-approach-tackling-challenges-disinformation-and-political.

7 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR).

8 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act).

9 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on AI and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (AIA).

10 Regulation (EU) 2024/900 of the European Parliament and of the Council of 13 March 2024 on the transparency and targeting of political advertising.

11 A. Bradford, The Brussels Effect: How the European Union Rules the World (Oxford: Oxford University Press, 2020).

12 UN High Commissioner for Human Rights (UNHCHR), ‘Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests’, UN Doc. A/HRC/44/24, 24 June 2020, 3–4.

13 House of Commons Digital, Culture, Media and Sports Committee, ‘Disinformation and “fake news”: Final Report’, HC 1791 (House of Commons 2019); henceforth referred to as DCMS Report. See also House of Lords Select Committee on Communications, ‘Regulating in a digital world. 2nd Report of Session 2017–19’, HL Paper 299 (House of Lords 2019).

14 C. Cadwalladr, ‘The great British Brexit robbery: how our democracy was hijacked’, 7 May 2017, The Guardian, www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy.

15 C. H. Powell et al., ‘Disinformation’, in T. Pajuste (ed.), Specific Threats to Human Rights Protection from the Digital Reality (Tallinn: Tallinn University, 2022), https://graphite.page/GDHRNet-threats-to-human-rights-protection/assets/documents/GDHRNet-ThreatsReport-EditedVolume.pdf.

16 See e.g., AVAAZ, ‘How Facebook can flatten the curve of the coronavirus infodemic’, 15 April 2020, https://secure.avaaz.org/campaign/en/facebook_coronavirus_misinformation/; Center for Humane Technology, ‘Ledger of harms’ (2021), available at https://ledger.humanetech.com.

17 See e.g., D. K. Citron and R. Chesney, ‘Deep fakes: a looming challenge for privacy, democracy, and national security’ (2019) 107 California Law Review 6, 1753–819; B. Paris and J. Donovan, ‘Deepfakes and cheap fakes: the manipulation of audio and visual evidence’ (2019) Data & Society Research Institute, 18 September.

18 B. Schippers, ‘Artificial intelligence and democratic politics’ (2020) 11 Political Insight 1, 32–5, at 33.

19 BBC Bitesize, ‘A brief history of fake news’, www.bbc.co.uk/bitesize/articles/zwcgn9q. See also Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, ‘Disinformation and freedom of opinion and expression’, UN Doc. A/HRC/47/25, 13 April 2021.

20 BBC Four, ‘Ian Hislop’s fake news: a true history: moon hoax’, 26 September 2019, www.bbc.co.uk/programmes/p07pd8dg.

21 DCMS Report; House of Lords Select Committee on Communications, ‘Regulating in a digital world’.

22 Government response to DCMS Interim Report, quoted in DCMS Report, 10.

23 European Commission, ‘A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation’ (Brussels: European Union, 2018), https://digital-strategy.ec.europa.eu/en/library/final-report-high-level-expert-group-fake-news-and-online-disinformation. See also European Commission, ‘The strengthened code of practice on disinformation 2022’, 16 June 2022, https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation; C. H. Powell et al., ‘Disinformation’.

24 C. Wardle and H. Derakhshan, ‘Information disorder: toward an interdisciplinary framework for research and policymaking’, Council of Europe report DGI(2017)09 (2017), 5. See also C. H. Powell et al., ‘Disinformation’, p. 34.

25 B. Bodo, N. Helberger, and C. H. de Vreese, ‘Political micro-targeting: a Manchurian candidate or just a dark horse?’ (2017) 6 Internet Policy Review 4, 1–13, at 9.

26 Ibid., p. 8.

27 T. Meyer and C. Marsden, Regulating Disinformation with Artificial Intelligence: Effects of Disinformation on Freedom of Expression and Media Pluralism (Brussels: European Parliamentary Research Service, 2019), pp. 3–4.

28 European Commission, ‘The Strengthened Code of Practice on Disinformation 2022’.

29 J. Bayer et al., The Fight Against Disinformation and the Right to Freedom of Expression (Brussels: European Union, 2021), p. 13, www.europarl.europa.eu/RegData/etudes/STUD/2021/695445/IPOL_STU(2021)695445_EN.pdf.

30 D. K. Citron, Hate Crimes in Cyberspace (Cambridge, MA: Harvard University Press 2014).

31 Law Commission, ‘Modernising communication offences’, Law Com. No. 399 (2021), 3, https://assets.publishing.service.gov.uk/media/61ba022ad3bf7f05539de6f5/Modernising-Communications-Offences-2021-Law-Com-No-399.pdf.

32 L. Edwards, ‘Introduction’, in L. Edwards (ed.), Law, Policy and the Internet (Oxford: Hart Publishing, 2019), p. l.

33 Law Commission, ‘Modernising communication offences’. UK Online Safety Act 2023, www.legislation.gov.uk/ukpga/2023/50. On the role of criminal law, see also R. K. Helm and H. Nasu, ‘Regulatory responses to “fake news” and freedom of expression: normative and empirical evaluation’ (2020) 21 Human Rights Law Review 2, 322–5.

34 S. Wood et al., Review of Literature Relevant to Data Protection Harms (London: Plum Consulting, 2022), pp. 4–5, 24–5, https://ico.org.uk/media/about-the-ico/documents/4020142/plum-review-of-literature-relevant-to-data-protection-harms-v1-202203.pdf; Information Commissioner’s Office ‘Overview of data protection harms and the ICO’s taxonomy’ (2022), https://ico.org.uk/media/about-the-ico/documents/4020144/overview-of-data-protection-harms-and-the-ico-taxonomy-v1-202204.pdf, pp. 5–7.

35 For a broader consideration of the effects of AI on democracy and the rule of law, see, e.g., N. A. Smuha, ‘Beyond the individual: governing AI’s societal harm’ (2021) 10 Internet Policy Review 3, Article 1574.

36 House of Lords Select Committee Report, ‘Regulating in a digital world’, pp. 3, 9.

37 Helm and Nasu, ‘Regulatory responses to “fake news” and freedom of expression’, p. 325.

38 A. Murray ‘“The time has come for international regulation on artificial intelligence” – an interview with Andrew Murray’, 25 November 2020, OpinioJuris, http://opiniojuris.org/2020/11/25/the-time-has-come-for-international-regulation-on-artificial-intelligence-an-interview-with-andrew-murray/.

39 T. Dias Oliva, ‘Content moderation technologies: applying human rights standards to protect freedom of expression’ (2020) 20 Human Rights Law Review 4, 607–40.

40 E. Magrani, Hacking the Electorate: On the Use of Personal Data in Political Campaigning (Berlin: Konrad Adenauer Stiftung, 2020), p. 15.

41 Helm and Nasu, ‘Regulatory responses to “fake news” and freedom of expression’.

42 B. Epstein, ‘Why it is so difficult to regulate disinformation online’, in W. L. Bennett and S. Livingston (eds.), The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States (Cambridge: Cambridge University Press, 2020), pp. 190–210, at 203.

43 For a general overview see, e.g., I. Bantekas and L. Oette, International Human Rights Law and Practice (Cambridge: Cambridge University Press, 2024); O. de Schutter, International Human Rights Law: Cases, Materials, Commentary (Cambridge: Cambridge University Press, 2019).

44 Convention for the Protection of Human Rights and Fundamental Freedoms, 4 November 1950, Council of Europe Treaty Series 005, Article 10.

45 On interference with journalistic expressions, see, e.g., Lingens v. Austria, Application no. 9815/82, Judgment of 8 July 1986.

46 European Court of Human Rights, Guide on Article 10 of the European Convention on Human Rights: Freedom of Expression (Strasbourg: Council of Europe/European Court of Human Rights, 2022), pp. 12–13.

47 Bayer et al., The Fight Against Disinformation and the Right to Freedom of Expression, p. 26.

48 See ibid. for a survey of relevant ECtHR case law. See also European Court of Human Rights, Guide on Article 10 of the European Convention on Human Rights.

49 Ibid., p. 11.

50 Handyside v. UK, Application no 5493/72, Judgment of 7 December 1976, para. 49 (emphasis added).

51 Lingens v. Austria, Application no. 9815/82, Judgment of 8 July 1986, para. 42.

52 Ibid., paras. 41–2.

53 European Court of Human Rights, Guide on Article 10 of the European Convention on Human Rights, p. 12.

55 Mathieu-Mohin and Clerfayt v. Belgium, Application no. 9267/81, Judgment of 2 March 1987, para. 47.

56 Bowman v. the United Kingdom, Case no. 141/1996/762/959, Judgment of 19 February 1998, para. 42.

57 See, e.g., Delfi AS v. Estonia [GC], Application no. 64569/09, Judgment of 16 June 2015; Cengiz and Others v. Turkey, Application nos. 48226/10 and 14027/11, Judgment of 1 December 2015; Ahmet Yıldırım v. Turkey, Application no. 3111/10, Judgment of 18 December 2012.

58 European Court of Human Rights, Guide on Article 10 of the European Convention on Human Rights, p. 107.

59 See, e.g., Citron, Hate Crimes in Cyberspace. For a broader discussion see, e.g., M. Beard, ‘The public voice of women’ (2014) 36 London Review of Books 6, 11–4. See also J. Bayer, ‘Double harm to voters: data-driven micro-targeting and democratic public discourse’ (2020) 9 Internet Policy Review 1, 1–17, at 7.

60 Bayer, ‘Double harm to voters’, p. 7.

61 Information Commissioner’s Office, ‘Microtargeting’.

62 C. J. Bennett and D. Lyon, ‘Data-driven elections: implications and challenges for democratic societies’ (2019) 8 Internet Policy Review 4, Article 1433, 3–4.

63 Information Commissioner’s Office, ‘Guidance for the use of personal data in political campaigning’ (2021), http://ico.org.uk/for-organisations/direct-marketing-and-privacy-and-electronic-communications/guidance-for-the-use-of-personal-data-in-political-campaigning-1/, 68. See also DCMS report, 17–8. Information Commissioner’s Office, ‘Big data, artificial intelligence, machine learning and data protection’ (2017), https://ico.org.uk/media2/migrated/2013559/big-data-ai-ml-and-data-protection.pdf, 12–3.

64 J. Zhu and R. Isaacs, ‘Campaign microtargeting and AI can jeopardize democracy’, 29 May 2024, LSE British Politics and Policy, https://blogs.lse.ac.uk/politicsandpolicy/campaign-microtargeting-and-ai-can-jeopardize-democracy/. DCMS Report, 17. For a recent account by a protagonist involved in CA, see C. Wylie, Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World (London: Profile Books, 2019).

65 Demarcating political from non-political disinformation is a complex issue that lies beyond the scope of this chapter. Issues that present, at first blush, as non-political, e.g., debates on the use of COVID-19 vaccinations, can relate to political issues and agendas. See, e.g., AVAAZ, ‘How Facebook can flatten the curve of the coronavirus infodemic’.

66 I. S. Rubinstein, ‘Voter privacy in the age of big data’ (2014) Wisconsin Law Review 4, 861–936.

67 F. J. Zuiderveen Borgesius et al., ‘Online political microtargeting: promises and threats for democracy’ (2018) 14 Utrecht Law Review 1, 82–96, at 91–2.

69 Ibid., p. 82.

70 Rubinstein, ‘Voter privacy in the age of big data’, p. 882; see also Zuiderveen Borgesius et al., ‘Online political microtargeting’.

71 Bayer, ‘Double harm to voters’, p. 9.

72 A. Gaysynsky, K. Heley, and W. Y. S. Chou, ‘An overview of innovative approaches to support timely and agile health communication research and practice’ (2022) 19 International Journal of Environmental Research and Public Health 22, Article 15073.

73 I. S. Smythe and J. E. Blumenstock, ‘Geographic microtargeting of social assistance with high-resolution poverty maps’, 1 August 2022, PNAS, www.pnas.org/doi/full/10.1073/pnas.2120025119.

74 Rubinstein, ‘Voter privacy in the age of big data’, p. 882. For a discussion of further benefits, see Zuiderveen Borgesius et al., ‘Online political microtargeting’.

75 V. Bashyakarla, ‘Towards a holistic perspective on personal data and the data-driven election paradigm’ (2019) 8 Internet Policy Review 4, Article 1445.

76 See, e.g., Bennett and Lyon, ‘Data-driven elections’, and Bayer, ‘Double harm to voters’.

77 Bennett and Lyon, ‘Data-driven elections’.

78 Ibid., p. 8.

79 J. Zittrain, ‘Engineering an election’ (2014) 127 Harvard Law Review Forum, 335–42, at 335, 338.

80 Bayer, ‘Double harm to voters’, p. 3.

81 Ibid., p. 4.

82 DCMS Report, p. 17.

83 Bayer, ‘Double harm to voters’.

84 H. Arendt, The Origins of Totalitarianism (Boston: Harcourt Books, 1951/1968).

85 Bayer, ‘Double harm to voters’, p. 9.

86 E. Shattock, ‘Fake news in Strasbourg: electoral disinformation and freedom of expression in the European Court of Human Rights (ECtHR)’ (2022) 13 European Journal of Law and Technology 1.

87 Abrams v. United States, 250 US 616 (1919).

88 Citron, Hate Crimes in Cyberspace.

89 These include self-regulation, co-regulation, information correction, and the use of criminal sanctions. See, e.g., Meyer and Marsden, Regulating Disinformation with Artificial Intelligence; Helm and Nasu, ‘Regulatory responses to “fake news” and freedom of expression’, pp. 315, 326.

90 Handyside v. UK, Application no 5493/72, Judgment of 7 December 1976.

91 European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions “2030 Digital Compass: the European way for the Digital Decade”’, EU Doc. COM(2021) 118, 9 March 2021.

92 European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the European democracy action plan’, EU Doc. COM(2020) 790, 3 December 2020.

93 European Commission, ‘The strengthened Code of Practice on Disinformation 2022’, 16 June 2022, https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation.

95 They include Article 29 Data Protection Working Party, ‘Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679’, 6 February 2018, https://ec.europa.eu/newsroom/article29/items/612053/en; Article 29 Data Protection Working Party, ‘Guidelines on transparency under Regulation 2016/679’, 11 April 2018, www.edpb.europa.eu/system/files/2023-09/wp260rev01_en.pdf; European Data Protection Board (EDPB), ‘Statement 2/2019 on the use of personal data in the course of political campaigns’, 13 March 2019, www.edpb.europa.eu/sites/default/files/files/file1/edpb-2019-03-13-statement-on-elections_en.pdf; European Data Protection Board (EDPB), ‘Guidelines 8/2020 on the targeting of social media users; version 2.0’, 13 April 2021, www.edpb.europa.eu/system/files/2021-04/edpb_guidelines_082020_on_the_targeting_of_social_media_users_en.pdf; European Data Protection Supervisor, ‘Opinion 3/2018: EDPS opinion on online manipulation and personal data’, 19 March 2018, www.edps.europa.eu/sites/default/files/publication/18-03-19_online_manipulation_en.pdf.

96 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

97 Additional data protection principles include lawfulness and fairness, storage limitation, and accountability.

98 C. Blasi Casagran and M. Vermeulen, ‘Reflections on the murky legal practices of political micro-targeting from a GDPR perspective’ (2021) 11 International Data Privacy Law 4, 348–59, at 349.

99 Footnote Ibid., p. 352. European Data Protection Supervisor, ‘Opinion 3/2018: EDPS opinion on online manipulation and personal data’, 19 March 2018, www.edps.europa.eu/sites/default/files/publication/18-03-19_online_manipulation_en.pdf, 5.

100 European Data Protection Board (EDPB), ‘Statement 2/2019 on the use of personal data in the course of political campaigns’, 13 March 2019, www.edpb.europa.eu/sites/default/files/files/file1/edpb-2019-03-13-statement-on-elections_en.pdf, 2.

101 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act).

102 European Commission, ‘The Digital Services Act: ensuring a safe and accountable environment online’, https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en.

103 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on AI and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (AIA). This section draws on B. Schippers, C. Ferreira, and P. Veiga, ‘The European Union Artificial Intelligence Act: challenges for fundamental rights protection’ (2024), https://gdhrnet.eu/wp-content/uploads/EU-Artificial-Intelligence-Act.pdf.

104 For a classification of high-risk AI-systems see Article 6(1) and Annex III. Annex III (8(b)) classifies as high-risk those AI systems ‘intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda’. Exempt from this classification are AI systems ‘whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view’ (Recital 62).

105 Magrani, Hacking the Electorate, pp. 15–17.

106 Nenadić, ‘Unpacking the “European approach” to tackling challenges of disinformation and political manipulation’.

107 Meyer and Marsden, Regulating Disinformation with Artificial Intelligence.

108 L. McMahon, Z. Kleinman, and C. Subramanian, ‘Facebook and Instagram get rid of fact checkers’, 7 January 2025, BBC Technology, www.bbc.co.uk/news/articles/cly74mpy8klo.

109 L. Graves, ‘Will the EU fight for the truth on Facebook and Instagram?’, 13 January 2025, The Guardian, www.theguardian.com/technology/2025/jan/13/meta-facebook-factchecking-eu.
