15.1 Introduction
Concerns over the harmful impact of online disinformation on the integrity of electoral processes, on democratic political cultures and values, and on human rights arose globally in 2016, when the UK held its referendum to leave the European Union (EU), and Donald Trump was first elected as president of the US.Footnote 1 Almost ten years later, following the experiences of the COVID-19 pandemic, wars in Ukraine and in Gaza, and the return of Trump to the US presidency, these concerns have not receded.Footnote 2 Worries over disinformation, colloquially referred to as ‘fake news’,Footnote 3 centre on its global reach and ease of access, but also on the thorny challenge of regulating online communication, and with it the power of platforms and search engines that facilitate the dissemination of inaccurate and misleading content. This chapter argues that the disinformation problematic is compounded by the proliferation of fake news via the practice of microtargeting, a term that describes the surgical spread of political and other messages to homogeneous social groups, drawing on the analysis of people’s personal data.Footnote 4 The chapter contends that the spread of microtargeted disinformation is of concern to human rights and democracy, as it distorts and fragments the information ecosystem, with harmful consequences for the right to freedom of expression and for democratic discourse. Data analytics techniques, which underpin microtargeting and serve as a vector for the dissemination of fake news, can lead to voter surveillance and interfere with the right to privacy. Elucidating the phenomenon of microtargeted online disinformation (MOD) and its effects on human rights and democracy propels my discussion, which is guided by three questions: What harms to human rights and democracy are produced by MOD? How can human rights law respond to these harms? What are the limits of human rights law?
How to respond to worries over MOD poses complex conceptual, sociological, and legal challenges. Regulatory efforts that seek to curb disinformation rub against legal protections of the right to freedom of expression, which is enshrined, for example, in Article 10 of the European Convention on Human Rights (ECHR) and in Article 11 of the Charter of Fundamental Rights of the EU.Footnote 5 There is also a real need to map and analyse the impact of microtargeted disinformation on a wider range of rights, including the right to privacy, and to unpack its broader implications for Article 10, such as its effect on the rights of minoritised voices. Paradoxically, despite the harmful effects of microtargeted disinformation on human rights, the chapter asserts that human rights law is ill-suited to address the full range of these harms. Considering the processing of personal data in the dissemination of disinformation, the chapter suggests that the protection of human rights has become displaced onto other legal regimes, such as data protection law, and is increasingly reliant on legal instruments with horizontal effect. Acknowledging the extensive regulatory activities within the EU, the chapter analyses a ‘European approach’Footnote 6 to microtargeted disinformation, examining selected recent EU legislative initiatives – the General Data Protection Regulation (GDPR),Footnote 7 the Digital Services Act (DSA),Footnote 8 the Artificial Intelligence Act (AIA),Footnote 9 and the Regulation on the Transparency and Targeting of Political AdvertisingFootnote 10 – and considers their capacity to regulate MOD and to mitigate potential harms to human rights and democracy. There are further reasons for examining the EU’s regulatory initiatives. First, EU legal instruments, such as the GDPR, are said to provide a regulatory ‘gold standard’, which is emulated in non-EU jurisdictions. Second, the extraterritorial dimension of EU law, the much-invoked ‘Brussels effect’,Footnote 11 sets regulatory standards beyond the jurisdictional borders of the EU. Third, while EU human rights provisions defer to the norm-setting power of the Council of Europe and the jurisprudence of the European Court of Human Rights (ECtHR), EU secondary law – specifically EU regulations – provides bespoke regulatory tools with horizontal direct effect.
The chapter is structured as follows. Section 15.2 expounds the phenomenon of online disinformation and surveys concerns about the threat of disinformation to human rights and democracy. Drawing on the ECHR, specifically Article 10, and on selected case law of the ECtHR, Section 15.3 discusses the intersection of the right to freedom of expression with online disinformation and analyses the potential consequences for the regulation of disinformation and for a wider suite of Convention rights. Section 15.4 centres on the role and impact of microtargeting in the dissemination of disinformation, while Section 15.5 examines selected regulatory initiatives in the EU, focusing on the GDPR, the DSA, the AIA, and the Regulation on the Transparency and Targeting of Political Advertising. Section 15.6 summarises the main points covered in the chapter and identifies areas that require further work.
15.2 From Fake News to (Online) Disinformation: Challenges for Human Rights and Democracy
The rapid spread of online communication and the attendant ease of access to online content, facilitated by the rise of social media platforms such as Facebook, X, or TikTok, and by search engines such as Google, have enhanced the capacity for disseminating and receiving information, and increased the range and scope of civic engagement. These new opportunities for communicating and connecting have been welcomed as a way of informing and empowering people, and helping them to enjoy their human rights, such as the right to assembly and association.Footnote 12 However, online communication can also propel the dissemination of inaccurate and frequently misleading content to previously unimaginable levels. The term ‘fake news’, which was popularised during Donald Trump’s first tenure as US president (2017–21), has become shorthand for this type of content. Despite its widespread use, ‘fake news’ lacks an agreed definition and there is no consensus on the range of expressions that it refers to. These can vary from the entertaining barb of political satire to maleficent attempts that seek to damage public trust and confidence in the integrity of electoral processes and elected representatives,Footnote 13 and in public policy. For example, concerns over fake news accompanied the UK Brexit referendum in 2016 and the US presidential elections of 2016 and 2020.Footnote 14 Fake news stories included inaccurate reports about Turkey joining the EU, playing on fears of UK voters over immigration, and erroneous claims that the 2020 US presidential election was ‘stolen’.Footnote 15 There have also been concerns about mistaken and misleading information about the COVID-19 pandemic that sought to undermine public policy efforts aimed at combating the disease.Footnote 16 Moreover, it should be stressed that faking is not limited to the dissemination of written text, such as tweets or Facebook posts. It extends to the manipulation of voices and images – so-called deepfakes – that can heighten mistrust in political leaders, institutions, and even national security.Footnote 17 Deepfakes gained notoriety when a speech delivered by Nancy Pelosi, the former leader of the Democrats in the United States House of Representatives, was altered to make it sound slurred. This alteration created the inaccurate impression that Pelosi was intoxicated. Such alteration is deceitful, and it can undermine the credibility of democratically elected politicians or of those preparing to stand for public office.Footnote 18 Furthermore, despite the contemporary interest in online disinformation, it is worth noting that fake news is not an invention of the digital age: from Octavian’s fake news info war against Mark Antony,Footnote 19 to the humorous 1835 hoax of batmen hunting bison on the moon,Footnote 20 and Joseph Goebbels’ Nazi propaganda apparatus, fake news has been part of public political life for more than two millennia. However, developments in the field of digital technologies, propelled by advances in artificial intelligence (AI), have put the propensity to spread fake news on steroids: fake news created by a few can reach millions of users at the push of a button.
How to deal with fake news has emerged as a central issue for policymakers and scholars. One of the key challenges relates to the aptness of the expression ‘fake news’. Despite its widespread use, the term has been discarded as loaded, deployed to discredit political opponents and critical media coverage of politicians and policies. For example, the UK House of Commons Digital, Culture, Media and Sport Committee report Disinformation and ‘fake news’: Final Report,Footnote 21 in one of the most thorough treatments of this topic, rejects the term and suggests instead the adoption of the words ‘misinformation’ and ‘disinformation’. The report defines misinformation as the ‘inadvertent sharing of false information’, while disinformation constitutes ‘the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purpose of causing harm, or for political, personal or financial gain’.Footnote 22 These definitions align with proposals developed by the EU’s High Level Expert Group on Fake News and Online Disinformation. It conceives of disinformation as ‘verifiably false or misleading information … which cumulatively … is created, presented and disseminated for economic gain or to intentionally deceive the public … and may cause public harms [as] threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security’.Footnote 23 A third category of fake news, malinformation, refers to ‘genuine information shared with the intention to cause harm’,Footnote 24 such as defamatory content.
Despite a broad preference for the term ‘disinformation’, there is no consensus regarding its impacts: whether these impacts constitute harm, who or what is being harmed, and how such harms should be addressed. Counselling against an ‘overdose of US perspectives’,Footnote 25 Bodo et al. contend that worries over disinformation amount to a moral panic, which is said to originate in US political culture and in the dominance of US perspectives on disinformation, and which is not applicable beyond the US.Footnote 26 Meyer and Marsden also assert that ‘evidence of large-scale harm is still inconclusive in Europe’.Footnote 27 In particular, there is a dearth of verifiable empirical evidence, which could demonstrate a significant effect of disinformation campaigns on electoral outcomes. However, this lack of empirical evidence does not diminish widespread concerns about the politically dangerous and potentially harmful impact of disinformation on democracy and on the integrity of electoral processes. For example, the EU regards online disinformation practices as ‘public harms’ and ‘threats to our way of life’.Footnote 28 These harms and threats are said to undermine trust and confidence in democracy, in public discourse, and in human rights.Footnote 29 Within public political discourse, concerns about disinformation harms have conjoined with broader worries over online harms, adding force to calls for the regulation of online content. To date, the debates have centred on harms to individuals regarded as vulnerable, specifically individuals with protected characteristics such as children, women, LGBTQ+ people, or people from ethnic minority backgrounds – social demographics who are frequently subjected to misogynistic, homophobic, transphobic, or racist hate speech online, or to online sexual abuse or threats of violence.Footnote 30 An emerging consensus that ‘the online and offline worlds cannot neatly be separated’,Footnote 31 that what is prohibited offline should be prohibited online, underpins the deliberations about the regulation of online communications.
However, the format and precise modalities of regulation remain contested and have emerged as a key challenge. Commenting on broader attempts to come up with a suitable regulatory design, Lillian Edwards asks whether we should ‘regulate by law … refuse to regulate till a clear path can be seen, or … turn to soft law, self-regulation, “co-regulation”, codes of conduct, technical standards, trustmarks, ethical charters, user democracy, who knows?’.Footnote 32 While attempts to regulate the dissemination of illegal content have turned to criminal law,Footnote 33 there are no quick and easy fixes that can offer effective, meaningful, and lawful ways to deal with disinformation. Designing regulatory instruments that can address the threats and harms posed by online disinformation generates seemingly intractable problems, which converge on three aspects: first, the diffuse and opaque nature and extent of disinformation harms and their typically intangible effects on society and on societal values such as human rights and democracy complicate regulatory efforts.Footnote 34 Scholarship on the societal harms of new technologies is only beginning to emerge,Footnote 35 while, as highlighted earlier, the harms caused by disinformation, notwithstanding significant normative concerns, remain empirically unproven. Second, disinformation operates in a novel information landscape, which lacks traditional (editorial) gatekeepers, creates online filter bubbles, and facilitates the spread of online (dis-)information to previously unimaginable levels and across jurisdictional boundaries. National regulatory landscapes have been described as opaque and fragmented, with overlaps and gaps,Footnote 36 while the speed and global reach with which the disinformation ‘infodemic’ infects public discourse limit the effectiveness of national regulation and require instead collaboration,Footnote 37 and the difficult work of consensus building, at international, or at the very least regional, level.Footnote 38 There is also considerable unease about the role of private companies in the regulatory architecture, for example, whether they should be tasked with the sensitive role of regulating, and possibly censoring, online content. Third, tools such as algorithmic content moderation can be used to remove or block content, but these are blunt instruments that lack contextual understanding, such as the ability to distinguish satire from harmful disinformation or illegal content.Footnote 39 Moreover, different platforms and search engines operate differently and may require bespoke regulatory tools. For example, while X (formerly Twitter) is an open platform, others, such as WhatsApp, offer end-to-end encryption, making them less transparent but no less effective in the spread of disinformation.Footnote 40
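The bluntness of automated moderation can be illustrated with a deliberately naive sketch. The keyword list and example posts below are hypothetical assumptions introduced purely for illustration, not a description of any platform’s actual moderation system; real systems rely on far more sophisticated classifiers, yet face a structurally similar problem.

```python
# Illustrative sketch only: a naive keyword-based moderation filter.
# The blocklist terms and example posts are hypothetical.
BLOCKLIST = {"stolen election", "vaccine microchip"}

def flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "BREAKING: the vaccine microchip tracks your dreams",           # disinformation
    "No, there is no vaccine microchip; here is the evidence",      # factual rebuttal
    "Satire: my toaster insists the stolen election was its idea",  # satire
]
for post in posts:
    print(flag(post), "-", post)  # all three are flagged alike
```

Because the filter sees only surface strings, the rebuttal and the satirical post are flagged just as readily as the disinformation they respond to, which is precisely the lack of contextual understanding noted above.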
These are important considerations for any analysis of disinformation, but this chapter’s main concern, and the focus of Section 15.3, is the linkage between disinformation and human rights that crystallises around the right to freedom of expression. To preview my argument, the ill-considered regulation of online content, including disinformation, may lead to potentially unlawful, unnecessary, and disproportionate interferences with the right to freedom of expression.Footnote 41 Rather than addressing the threats posed by disinformation, such interferences may generate new harms: they may undermine the functioning and values of the democratic processes that critics of unregulated speech worry about. Therefore, online disinformation is not amenable to broad-brush regulation, a factor that adds substantially to the difficulty of responding effectively to its harmful impact.
15.3 Freedom of Expression, Democracy, and Online Disinformation
The challenges posed by the spread of online disinformation are significant, but they should not deter efforts to develop regulatory instruments. As one commentator quipped, ‘the time for simply admiring the problem is over’.Footnote 42 However, as already stated, regulating online content, including disinformation, faces major hurdles. One such hurdle stems from states’ obligations with respect to the right to freedom of expression: these obligations generate a complex terrain for regulatory interventions into disinformation and pose significant barriers to actions that could be construed as interfering with human rights. Drawing on the framework of the ECHR, this section plots how the right to freedom of expression intersects with the spread of online disinformation. The discussion begins with an exposition of the right to freedom of expression in Europe’s regional human rights regime before problematising the legal and conceptual limitations of such a focus. The discussion demonstrates, first, that the right to freedom of expression limits states’ scope to regulate disinformation. Second, it will be argued that reading disinformation harms exclusively through the lens of freedom of expression does not suffice. The section proposes instead a nuanced human rights analysis, which attends to the impact of disinformation on diverse groups and which considers how disinformation affects a range of other human rights beyond freedom of expression. Third, despite disinformation’s threat to human rights, I suggest that human rights law does not provide sufficient protection from the human rights harms caused by online disinformation.
The right to freedom of expression is enshrined in the system of international human rights law, which emerged in the aftermath of the Second World War. This right imposes negative and positive obligations on states to respect, protect and promote human rights.Footnote 43 Complementing international legal obligations, European human rights provisions for freedom of expression derive from Article 10 of the ECHR and from the jurisprudence of the ECtHR (or Strasbourg Court). Article 10 stipulates that:
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.Footnote 44
The protections provided by Article 10 are not limited to speech and cover instead a wide range of expressions. These include artistic and commercial expressions, the publication of photographs, forms of conduct, rules governing clothing, and the use of the ‘Like’ button or similar expressions on social media networks. Article 10 also protects media freedom and grants a limited margin of appreciation with respect to interferences with journalistic expressions.Footnote 45 Moreover, freedom of expression extends to different modes of receiving and imparting information and to different means of disseminating one’s expression.Footnote 46 However, not all forms of expression are granted equal protection. Two related aspects are noteworthy in this context: first, the ECtHR presumes a hierarchy of expressions, which accords political speech the highest form of protection, followed by artistic and commercial speech.Footnote 47 Second, freedom of expression has special significance within the context of a democratic society. This principle is enshrined in the ECHR Preamble, which confirms a ‘profound belief in those fundamental freedoms which are … best maintained on the one hand by an effective political democracy and on the other by a common understanding and observance of the human rights upon which they depend’.
The importance of the right to freedom of expression within the system of ECHR rights is reflected in the case law of the ECtHR.Footnote 48 The leading case of Handyside v. the UK (1976) established that freedom of expression is ‘[i]ndissociable from democracy’,Footnote 49 and ‘one of the essential foundations of … a [democratic] society, one of the basic conditions for its progress and for the development of every man’. This is stressed in an often-cited passage in Handyside, which holds that freedom of expression
is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’. … every ‘formality’, ‘condition’, ‘restriction’ or ‘penalty’ imposed in this sphere must be proportionate to the legitimate aim pursued.Footnote 50
Subsequent judgments have reinforced the view that ‘freedom of political debate is at the very core of the concept of a democratic society which prevails throughout the Convention’.Footnote 51 For example, in Lingens v. Austria (1986), the Strasbourg Court proclaims that:
freedom of expression … constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment. … it is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb.Footnote 52
That Article 10 ‘enjoys a very wide scope, whether with regard to the substance of the ideas and information expressed, or to the form in which they are conveyed’,Footnote 53 is further emphasised in Mathieu-Mohin and Clerfayt v. Belgium (1987), which reiterates the ECHR Preamble’s link between fundamental human rights and freedoms as ‘best maintained by “an effective political democracy”’,Footnote 54 and which enshrines in particular the ‘prime importance’ of free expression in free elections.Footnote 55 Bowman v. the United Kingdom (1998), which draws on Lingens v. Austria (1986), underlines the protection of freedom of expression, and of political speech in particular, described as ‘the bedrock of any democratic system’:
Free elections and freedom of expression, particularly freedom of political debate … are inter-related and operate to reinforce each other … freedom of expression is one of the ‘conditions’ necessary to ‘ensure the free expression of the opinion of the people in the choice of the legislature’ … For this reason, it is particularly important in the period preceding an election that opinions and information of all kinds are permitted to circulate freely.Footnote 56
The protection of the right to freedom of expression, its special role in democratic societies, and the extension of this right to different modes of imparting and receiving information have also been confirmed in a series of cases relating to digital communication, specifically the digital dissemination of political speech.Footnote 57 Flipping the claim that what is prohibited offline should be prohibited online, one may read the ECtHR jurisprudence as asserting that what should be protected offline should be protected online. In its ‘Guide to Article 10’, the Court confirms the ‘innovative character of the Internet’ and states that ‘user-generated expressive activity on the Internet provides an unprecedented platform for the exercise of freedom of expression’.Footnote 58 The wide scope of Article 10, together with the principles articulated in the ECHR Preamble and in subsequent ECtHR jurisprudence, suggests that online disinformation may not be inherently unlawful and may, in fact, be protected in accordance with Article 10. Therefore, attempts to regulate online disinformation may constitute an interference with the right to freedom of expression. As is well known, interferences with Article 10, for example in the interest of national security, public safety, or other matters, as specified in Article 10(2), must be assessed against the ECtHR’s tripartite test of legality, proportionality, and necessity; they must also consider the additional import bestowed on political speech. These protections impose limits on interferences with the right to freedom of expression, and it is reasonable to surmise that these limits extend to attempts at interference with online disinformation.
Human rights law, specifically Article 10, creates a knotty problem for tackling online disinformation: as argued earlier, there are compelling normative concerns about the harmful impact of disinformation on human rights, yet we may plausibly conclude that disinformation can avail itself of the protections offered by Article 10. Addressing this problem requires a shift in perspective. Two aspects merit particular attention. First, although the regulation of disinformation risks disproportionate and unlawful interference with the right to freedom of expression, unfettered speech, whether offline or online, can also harm the wider communication ecosystem by silencing minoritised voices.Footnote 59 Judit Bayer argues that ‘paradoxically from the perspective of Article 10 of ECHR, freedom of speech was to be restricted with the objective to preserve a sound informational environment; because pluralism of views, and ultimately the democratic process would otherwise have been distorted by the speech in question’.Footnote 60 Acknowledging these broader effects of disinformation calls for more granular analyses, which study how disinformation impacts the right to freedom of expression for a diverse range of individuals and groups. Online disinformation can generate structural conditions, which silence and exclude marginalised voices, and thus restrict their enjoyment of the right to freedom of expression. Second, conjoining critical analyses of disinformation exclusively with the right to freedom of expression risks losing sight of the effects of disinformation on the wider communication and information ecosystem and on a wider range of human rights, including the right to privacy (Article 8 ECHR), freedom of assembly and association (Article 11), the prohibition of discrimination (Article 14), or the prohibition of an abuse of rights (Article 17). This calls for a broader engagement with human rights law, beyond Article 10, and, as will be discussed in the remainder of the chapter, with legal regimes that offer new or additional protections for human rights.
15.4 Microtargeting: Benefits and Harms to Human Rights and Democracy
What compounds concerns about the impact of online disinformation on human rights and democracy and complicates regulatory efforts is the practice of microtargeting. The term ‘microtargeting’ describes the surgical, selective, and frequently opaque dissemination of tailored political or commercial communication to pre-identified, typically homogeneous audiences.Footnote 61 Through the use of data analytics, microtargeting can generate audience profiles based on social demographics such as gender, age, or ethnicity, but also philosophical beliefs or political opinions. These segmented audiences, for example, of consumers or voters, can be strategically targeted with bespoke messages.Footnote 62 The data typically required for microtargeting is personal data, defined in Article 4(1) of the GDPR as:
any information relating to an identified or identifiable natural person (‘data subject’) … who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
How such personal data is acquired can vary. It can be provided by the data subject (‘provided data’), such as a person’s political opinions, which he or she may share on social media platforms. It can also be collected through the use of cookies or tracking pixels (‘observed data’). Or it can involve inferred data, which is based on probabilities emerging from the analysis of provided or observed data by finding ‘correlations between datasets and using these to categorise or profile people, e.g., calculating credit scores or predicting future health outcomes’.Footnote 63
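To make the distinction between provided, observed, and inferred data more concrete, the following minimal sketch shows how a simple statistical model could infer a sensitive attribute, here a probable political leaning, from observed behavioural signals. The dataset, feature names, and model choice are hypothetical assumptions made for illustration; they do not reproduce any actual campaign’s or platform’s methodology.

```python
# Illustrative sketch only: hypothetical data, features, and labels.
# Shows how "inferred data" (a probable political leaning) can be derived
# from "observed data" (behavioural signals) via a simple classifier.
from sklearn.linear_model import LogisticRegression

# Observed data: per-user interaction counts with hypothetical content
# categories [immigration_posts, climate_posts, tabloid_shares, ngo_follows].
observed = [
    [12, 1, 9, 0],
    [0, 8, 1, 6],
    [7, 2, 5, 1],
    [1, 9, 0, 7],
]
# Provided data used as training labels: leanings (0 or 1) that a small
# seed group of users declared publicly, e.g. in social media posts.
declared_leaning = [1, 0, 1, 0]

model = LogisticRegression().fit(observed, declared_leaning)

# Inferred data: a probability assigned to a user who never disclosed
# any political opinion at all.
new_user = [[10, 0, 7, 1]]
print(model.predict_proba(new_user)[0][1])
```

The model’s output is itself personal data that the data subject never supplied, which is why inferred data sits uneasily with consent-based safeguards and why, as discussed in Section 15.5.1, the EDPB urges a narrow reading of the derogations that might otherwise legitimate it.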
Microtargeting has attracted significant public interest since the mid-2010s, when its deployment became associated with the practices of the data analytics company Cambridge Analytica.Footnote 64 Cambridge Analytica created data profiles of typically undecided voters, which aligned with their social media profiles, and which were used to target comparatively small cohorts of voters with selective and often inaccurate messages, especially on highly emotive issues such as immigration. However, it is important to stress that microtargeting is not inherently political in either content or objectives, and that it can be deployed for a range of purposes, including commercial goals.Footnote 65 This is because microtargeting techniques are equally amenable to commercial or political ends, or a combination of both.Footnote 66 Moreover, there is no inherent reason to associate microtargeting with disinformation: microtargeting is not intrinsically wedded to the spread of disinformation, while disinformation can proliferate without resorting to microtargeting practices. However, the conjoining of online disinformation with the practice of microtargeting adds to the significant concerns about harms to elections, to democratic practices and democratic political cultures, and to human rights. This also extends the remit of analysis, beyond a focus on freedom of expression, to include data protection and privacy issues.
Despite extensive scholarly and public political interest in microtargeting and its respective benefits and harms, its effectiveness remains contested. For example, Borgesius et al. remind us that voters do not live in digital bubbles and may not be receptive to microtargeted messages.Footnote 67 This may diminish the capacity of microtargeted messages to influence the outcome of elections. We may also surmise that companies that offer microtargeting services may exaggerate their value.Footnote 68 This lack of empirical evidence on the effectiveness of microtargeting could, potentially, alleviate concerns over its harms. However, suggestions that microtargeting, whether deployed in political or commercial marketing campaigns, may offer benefits to the sender and receiver of targeted messages indicate a continued confidence in its usefulness.Footnote 69 One of its alleged benefits derives from the shift from ‘broadcasting’, a term that depicts the wide-ranging spread of political or commercial messages to general audiences, to ‘narrowcasting’, which provides receivers, such as voters or consumers, with information on issues that selected audiences regard as relevant, and that speak directly to their interests, needs, or concerns.Footnote 70 For example, in the field of commercial advertising, microtargeting can support the marketing of goods and services to consumers in search of specific products, and reduce advertisement overload for those who are not interested in these products. Microtargeting can also support the transmission of public policy messages in fields such as health or welfare.Footnote 71 For example, microtargeting has been deployed to communicate tailored skin cancer prevention messages to young women using sunbeds,Footnote 72 or to disseminate information about welfare programmes to people living in deprived areas.Footnote 73
Targeting segmented audiences with bespoke messages is also said to offer distinct benefits in the democratic process. For example, microtargeting may engage hard-to-reach individuals and communities, typically those who are ‘switched off’ from the political process. (Re-)engaging sections of the electorate with political issues and electoral processes may offer non-partisan benefits that, rather than undermining democratic politics, may in fact strengthen democratic systems. However, its main advantage is said to lie in the partisan benefits to political campaigns. Although there is no consensus on whether microtargeting can mobilise undecided voters, there is a view that it can ‘activate the base … and improve partisan turnout’.Footnote 74 This explains why microtargeting has been deployed so widely in election campaigns and referenda.
Critical perspectives on microtargeting contend that a focus on empirical evidence about its effectiveness with respect to elections is blind to the substantial threats it poses, including the weaponisation of personal data and its effect on democratic infrastructures.Footnote 75 These analyses ground their arguments in normative claims about the harmful effects of microtargeting that are bound up with three interrelated issues: first, the capacity of microtargeting to disseminate disinformation to selected audiences; second, its effects on the information ecosystem and on the right to freedom of expression; and third, the consequences for privacy and data protection. For example, there are concerns that microtargeted disinformation may suppress voter turnout, and that the use of microtargeting to mobilise small groups of voters, typically in swing states in US elections, can focus on so-called wedge issues. These are polarised issues that can frame and dominate public political discourse and undermine the coherence of the wider polity by diverting attention from issues that concern voters across the party-political spectrum.Footnote 76 These concerns intersect with worries over the impact of microtargeting on privacy, specifically with the harvesting of personal data, and attendant concerns over data protection, data security, and voter surveillance, based on the use of predictive analytics and the processing of inferred data.Footnote 77 There are additional normative concerns that the impact of microtargeting extends beyond individual privacy. For example, Bennett and Lyon contend that it generates collective and societal effects, which are ‘not just about privacy, but even more so about data collection and governance, freedom of expression, disinformation, and democracy itself’.Footnote 78 Zittrain asserts that what he calls ‘digital gerrymandering’ is ‘not a wrong to a given individual user, but rather to everyone, even non-users’.Footnote 79
Such worries about the collective and systemic effects of microtargeting also inform the work of Judit Bayer. Addressing the impact of microtargeting on democracy, she calls for a restriction of microtargeting, not because of its alleged manipulation of voters and interferences with the right to privacy, but because it fragments public discourse and threatens the democratic process. According to Bayer, microtargeting presents a double harm: it comprises the harm of being targeted, but also the harm of not being targeted, which, according to her, constitutes a violation of informational rights, a potential ‘mass violation of human rights’ of all those who are not targeted.Footnote 80 She declares that the right to receive information and the right to freedom of expression are complementary: ‘[w]hen the right to receive information is violated, it is freedom of expression in its broader sense, which is violated.’Footnote 81 Conjoining the analyses offered by Bennett and Lyon, Zittrain, and Bayer, we may conclude that the profiling and microtargeting of online users, based on the ‘analysis of their data profile’, creates an information asymmetry,Footnote 82 which violates informational rights,Footnote 83 and shatters the ‘shared world’ of political deliberation.Footnote 84 A fragmented and opaque (dis-)information basis of public political discourse, which targets selected social media users with personalised (dis-)information based on data analytics techniques, poses harm to the democratic polity, culminating in the ‘polarisation and fragmentation of the public sphere’.Footnote 85 It leaves in its wake citizens who are informationally isolated and atomised.
Despite these normative concerns about MOD’s harms to human rights and democracy, a consensus on how it should be regulated remains elusive. As highlighted in Section 15.3, human rights law creates significant barriers to interferences with freedom of expression; these barriers extend, potentially, to disinformation practices. There is as yet no ECtHR jurisprudence on the issue of disinformation,Footnote 86 and we may conclude that online content, including disinformation, and the mode of delivery – microtargeting – are protected under the broad umbrella of Article 10. Attention has turned instead to legal regimes beyond human rights law. Section 15.5 illustrates this shift in focus with respect to selected EU legal instruments.
15.5 Regulating MOD: The EU’s Legislative Actions
The notion of a ‘marketplace of ideas’ popularised by Justice Holmes’s dissenting judgment in Abrams v. United States implies that a public realm constituted by uncoerced dialogue possesses sufficient self-regulatory capacity to inoculate it against the effects of harmful speech.Footnote 87 More than a century later, there are well-founded concerns that Holmes’s vision of self-regulation may not suffice to counter the harms generated by unfettered online content. Therefore, despite the affordances of online platforms and search engines to expand opportunities for freedom of expression and other human rights, there are persistent worries that online disinformation compounds harms to individuals and communities and exerts chilling effects on democracy and human rights.Footnote 88 It is against this backdrop that the regulation of online expression and of platforms and search engines has garnered attention.Footnote 89 However, as discussed earlier in the chapter, balancing the right to freedom of expression, which includes the right to ‘offend, shock or disturb’,Footnote 90 with protection from the individual, collective, and societal harms generated by MOD poses seemingly intractable challenges. Regulating online expressions may not provide a sufficiently granular response to the specific threats posed by MOD. In fact, one could plausibly argue that content regulation constitutes a regulatory scattergun, which risks missing its intended target – MOD – because it is blind to the diverse types of online communication and their attendant challenges.
Recent EU policy and legislative initiatives promise more granular approaches to deal with this problem. They are anchored in EU primary law, including Article 16 of the Treaty on the Functioning of the EU and the Charter of Fundamental Rights (hereinafter the Charter), the EU’s premier human rights instrument. The latter includes the right to privacy (Article 7 of the Charter), the right to protection of personal data (Article 8 of the Charter), and the right to freedom of expression and information (Article 11 of the Charter). The EU’s legislative programme is supported by a series of agenda-setting policy recommendations, Communications, and soft law instruments, which offer additional, albeit ultimately non-enforceable, ways to regulate the dissemination of disinformation via the practices of microtargeting. They include the European Commission’s ‘2030 Digital Compass: the European way for the Digital Decade’,Footnote 91 the European Democracy Action Plan,Footnote 92 the Strengthened Code of Practice on Disinformation,Footnote 93 and the European Declaration on Digital Rights and Principles for the Digital Decade.Footnote 94 Also noteworthy are statements and opinions issued by the Article 29 Data Protection Working Party, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).Footnote 95 Of key interest is an emerging suite of EU secondary law, which centres on the main elements involved in the dissemination of MOD, such as the regulation of data processing, the regulation of online platforms and search engines, the regulation of AI systems, and the regulation of political advertising. Despite differences in material scope, these laws share three features: first, they have direct horizontal effect, which imposes rights obligations not just on states or an emanation of a state, but also on non-state actors, including private companies. Second, they share loopholes, wide-ranging exemptions, and monitoring and enforcement gaps, which are compounded by the significant regulatory roles accorded to private actors. Third, they manifest an unresolved tension between the EU’s twin objectives of protecting the internal market and promoting innovation on the one hand, and its obligation to protect fundamental rights on the other.
15.5.1 Regulating Data Processing: The GDPR
The use of data analytics in microtargeting underscores the imperative to protect personal data, as enshrined in Article 8 of the Charter, and highlights the importance of data protection laws in the regulation of online content. The GDPR, the EU’s most established and perhaps most widely known legislative instrument, centres on the processing of personal data (Article 4 of the GDPR), and its provisions pursue two goals: first, respect for fundamental rights and freedoms, including the right to protection of personal data; and second, the creation of harmonised rules within the EU internal market (Recital 2).Footnote 96 To deliver on these objectives, the GDPR sets out a series of data protection principles, including transparency, purpose limitation, data minimisation, accuracy, and integrity and confidentiality (Article 5),Footnote 97 which are intended to guide the processing of personal data. As stated in Recital 4, the GDPR ‘respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties’, including the right to freedom of expression (Article 11 of the Charter). Thus, data protection principles, and the right to the protection of personal data, are not absolute but must be balanced against other Charter rights.
The GDPR is ‘technologically neutral’ (Recital 15); it is not designed to regulate the dissemination of disinformation in general or the use of microtargeting techniques in particular. However, its provisions for the processing of personal data provide tools that can be used to regulate the microtargeting of voters via the use of inferred data. Four aspects are particularly noteworthy. First, the GDPR’s extraterritorial scope (Article 3) extends the application of the Regulation beyond the EU and may affect the dissemination of microtargeted online content that originates outside the EU. Second, it connects the right to the protection of personal data with the right to freedom of expression and information, including processing for journalistic purposes (Article 85(1)). Third, it prohibits the processing of ‘special categories of data’, including data revealing racial or ethnic origin, political opinions or philosophical beliefs, and health data (Article 9(1)). And fourth, it includes the right not to be subject to a decision based solely on automated processing, including profiling (Article 22(1)). Articles 9(1) and 22(1), separately as well as combined, appear to offer robust protection against the harmful effects of data analytics practices, including those that underpin microtargeting. However, each of the two articles is diluted by a range of exemptions. For example, Article 9(2)(d) provides exemptions for the processing of special category data for associations with political aims or where data subjects have made their personal data public (Article 9(2)(e)).
Analysing the GDPR’s provisions, Blasi Casagran and Vermeulen have also identified compliance issues with respect to political microtargeting, specifically with the data protection principles as outlined in Article 5 of the GDPR, including lawful processing, purpose limitation, data minimisation, data accuracy, and data accountability.Footnote 98 Their concerns align with those of the EDPB, which highlights the risks of microtargeting not only to individuals’ rights to privacy and data protection, but also to wider trust in the integrity of democratic processes themselves.Footnote 99 The EDPB has counselled that derogations from Article 9 ‘should be interpreted narrowly, as it cannot be used to legitimate inferred data’.Footnote 100 There are additional concerns over the enforcement of GDPR provisions, and tensions in the way that human rights protection is balanced against other interests, including the development of harmonised rules across the EU’s internal market.
15.5.2 Regulating Online Platforms: The DSA
While the GDPR focuses on the processing of personal data, the DSA, which entered into force on 16 November 2022, is the EU’s bespoke tool for dealing with the business models and practices of online service providers.Footnote 101 Like other EU legislative actions, the DSA speaks to the EU’s twin concerns of protecting the internal market, via the harmonisation of ‘uniform, effective and proportionate mandatory rules’ (Recital 4), and protecting fundamental rights enshrined in the EU Charter. Its main objectives are ‘to prevent illegal and harmful activities online and the spread of disinformation’.Footnote 102 Recognising the ‘inherently cross-border nature of the internet’ (Recital 2), the DSA establishes a graduated and tailored approach, which imposes increasingly stringent due diligence obligations on intermediary service providers, depending on their size and reach. The most stringent obligations are imposed on very large online platforms and very large online search engines, which Article 33(1) defines as those with an average of 45 million or more monthly active recipients of the service in the EU.
The regulatory guidelines presented in the DSA are technology neutral. Its primary concern is to secure a ‘safe, predictable and trusted online environment’ (Recital 9) through a graded, risk-based approach. Three categories of risk are identified: first, the dissemination of illegal content and conduct of illegal activities, such as child sexual abuse or terrorist content; second, the impact of digital services on the exercise of fundamental human rights; and third, the manipulation of platform services, which has an impact, among other things, on civic discourse and electoral processes. The DSA requires intermediary service providers to have due regard to relevant international standards for the protection of human rights, including freedom of expression, media freedom, and pluralism (Recital 47). It acknowledges the wider impact of disinformation on fundamental rights (Recitals 9 and 86) and on democratic processes (Recital 82), and it recognises the systemic risks of disinformation and its impact on society and democracy (Recital 104), including the risk of online services to ‘civic discourse and electoral processes, and public security’ (Article 34(1)(c)). More specifically, the DSA acknowledges risks stemming from the use of online advertisements and associated targeting techniques. It stipulates that ‘providers of online platforms should not present advertisements based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679, using special categories of personal data referred to in Article 9(1) of that Regulation, including by using profiling categories based on those special categories’ (Recital 69). Despite the DSA’s recognition of the harm from disinformation, references to disinformation are relegated to the non-binding recitals. With its focus on illegal activities, the DSA offers few effective interventions into disinformation practices.
15.5.3 Regulating Techniques: The AIA
Broadly mirroring the approach of the DSA, the AIA focuses on the regulation of AI systems.Footnote 103 An AI system is defined as a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)). The Act’s purpose, as outlined in Recital 1, is ‘to improve the functioning of the internal market by laying down a uniform legal framework’ that strikes a balance between fostering the development of a single market for lawful, safe, and trustworthy AI applications and ensuring the protection of fundamental rights and societal values, including democracy and the rule of law. To deliver on this goal, the AIA adopts a risk-based approach, which categorises AI systems into four risk levels, ranging from unacceptable and thus prohibited risks through high risk and limited risk to minimal or no risk. Its legally binding provisions delineate obligations and responsibilities for AI developers and users, seeking to foster an AI landscape that promotes human-centric and trustworthy AI systems and ensures a high level of protection of health, safety, and fundamental rights (Article 1(1)).
Although the Act has not been designed as a bespoke human rights instrument – the AIA refers to fundamental rights – it acknowledges concerns about AI’s impact on rights at various stages of the AI lifecycle, from design and development to the placing of AI systems on the market. Its provisions for human rights protection are part of a broader mission, which seeks to harmonise the legal regulation of AI across the EU, and foster innovation aimed at establishing the EU as a ‘global leader in the development of secure, trustworthy and ethical AI’ (Recital 8). Concerns over AI’s impact on human rights feature prominently in the (non-binding) recitals and in the articles. The Act promotes alignment with EU values of ‘respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights’ (Recital 28). To operationalise these concerns, the Act introduces a series of regulatory governance techniques. These include an obligation to conduct fundamental rights impact assessments for high-risk AI systems,Footnote 104 which applies to AI deployers governed by public law or to private entities providing public services (Article 27), transparency obligations (Article 50), and the reporting of serious incidents (Article 73). The Act also stipulates the voluntary application of codes of conduct, including adherence to EU guidelines on ethical AI (Article 95(2)(a)), facilitating inclusive and diverse AI design through inclusive and diverse development teams and stakeholder participation (Article 95(2)(d)), assessing and preventing the negative impact of AI on vulnerable groups and on gender equality (Article 95(2)(e)), and establishing an advisory forum to the AI Board with civil society representation (Article 67(2)).
There is a recognition of the potentially maleficent impact of high-risk AI systems on democracy and democratic processes, primarily in the recitals. Recital 110 addresses the capabilities of general-purpose AI models, specifically their capacity to facilitate the spread of disinformation and pose threats to democratic values and human rights. Developing this theme, Recital 120 considers how AI systems deployed by very large online platforms and very large online search engines can disseminate artificially generated or manipulated content, with ‘actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation’ (see also Recital 136). The AIA makes specific reference to deepfakes, defined as ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful’ (Article 3(60)). It imposes a specific obligation on the deployers of deepfakes to disclose that the content has been artificially generated or manipulated, alongside a parallel disclosure obligation for AI-generated or manipulated text that is published ‘with the purpose of informing the public on matters of public interest’ (Article 50(4)). Overall, though, there is limited engagement with the issue of microtargeted disinformation, and it is fair to conclude that the AIA’s provisions reveal a substantial gap between its normative commitments to protect human rights and the exemptions and regulatory loopholes in the legally binding provisions.
15.5.4 Regulating Political Advertising: The Regulation on the Transparency and Targeting of Political Advertising
Despite the huge public and scholarly interest in the DSA and the AIA, neither of these two instruments offers wide-ranging or bespoke regulatory tools to address the problem of MOD. The new Regulation on the Transparency and Targeting of Political Advertising (TTPA), which was adopted on 13 March 2024 and applies from 10 October 2025, promises to fill this gap as the EU’s most tailored legislative response to date to the challenge of the microtargeting of political advertising. A relatively short regulation – it consists of thirty articles – the TTPA seeks to create synergies with the GDPR and the DSA. For example, the TTPA utilises the GDPR’s provisions for the processing of personal data. Building on the DSA, the TTPA foregrounds the principle of transparency and presents a risk-based approach. Its two objectives are to contribute to ‘the proper functioning of the internal market for political advertising and related services’ (Article 1(4)(a)) and ‘to protect the fundamental rights and freedoms’ enshrined in the Charter, ‘in particular the right to privacy and the protection of personal data’ (Article 1(4)(b)). It seeks to achieve these objectives by creating ‘harmonised rules, including transparency and related due diligence obligations, for the provision of political advertising and related services’ (Article 1(1)(a)) and ‘harmonised rules on the use of targeting techniques and ad-delivery techniques that involve the processing of personal data in the context of the provision of online political advertising’ (Article 1(1)(b)).
Underpinning the TTPA’s provisions is a recognition that ‘[p]olitical advertising can be a vector of disinformation’ (Recital 4) and that ‘the misuse of personal data through targeting, including microtargeting … may present particular threats to legitimate public interests’ (Recital 6), including threats to fundamental rights, such as freedom of expression, privacy, and equality. Three aspects are significant in this context: first, the TTPA establishes a direct link between microtargeting, disinformation, and political advertising, which is defined as ‘the preparation, placement, promotion, publication or dissemination, by any means, of a message’ (Article 3(1)). Second, the TTPA recognises that the targeting and amplification techniques, which disseminate political advertising, can negatively impact the democratic process and exploit the vulnerabilities of data subjects, with ‘specific and detrimental effects on citizens’ fundamental rights and freedoms with regard to the processing of their personal data and their freedom to receive objective information’ (Recital 74). Targeting or amplification techniques are ‘techniques that are used either to address a tailored political advertisement only to a specific person or group of persons or to increase the circulation, reach or visibility of a political advertisement’ (Article 2(8)). These techniques, which are used to disseminate political advertising, should be prohibited unless explicit consent by the data subject has been given and appropriate safeguards are in place. In this respect, the TTPA seeks to close some of the derogations established in Article 9(2) of the GDPR, but it does not stipulate a prohibition of targeting techniques where the data subject has given explicit consent and the targeting does not involve profiling (Article 18(1)(c)). Political parties, foundations, associations, or other non-profit bodies are exempt from these requirements provided that their communications are based on subscription data (Article 18(3)). Article 19 specifies additional transparency requirements, including publication of internal policies, record keeping, and internal annual risk assessments. Third, the TTPA rejects interferences with the substantive content of political messages and seeks to protect the content of political advertising from unlawful interference.
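The conditional logic that the chapter attributes to Article 18 can be sketched in a few lines: targeted dissemination is permissible only where the data subject has given explicit consent and no profiling is involved, with the subscription-data carve-out for parties and other non-profit bodies under Article 18(3). The field names below are hypothetical, and the sketch deliberately omits the TTPA’s further safeguards and the transparency duties of Article 19; it is a schematic reading of the provisions as summarised above, not an official compliance tool.

```python
# Illustrative sketch only: hypothetical field names, not an official schema.
# Encodes the targeting conditions of the TTPA as summarised in this chapter:
# explicit consent and no profiling, with a carve-out for party/non-profit
# communications based on subscription data (Article 18(3)).
from dataclasses import dataclass

@dataclass
class TargetingRequest:
    explicit_consent: bool            # data subject gave explicit consent
    uses_profiling: bool              # targeting involves profiling
    sender_is_nonprofit_body: bool    # party, foundation, association, etc.
    based_on_subscription_data: bool  # communication relies on subscription data

def targeting_permitted(req: TargetingRequest) -> bool:
    """Return True if targeted political advertising would be permissible
    under the simplified reading of Article 18 sketched here."""
    if req.sender_is_nonprofit_body and req.based_on_subscription_data:
        return True  # Article 18(3) exemption
    return req.explicit_consent and not req.uses_profiling

print(targeting_permitted(TargetingRequest(True, False, False, False)))  # True
print(targeting_permitted(TargetingRequest(True, True, False, False)))   # False: profiling involved
print(targeting_permitted(TargetingRequest(False, False, True, True)))   # True: subscription-based exemption
```

Even in this simplified form, the sketch makes visible the point developed in the next paragraph: a consent-based gate presumes a cooperative, identifiable advertiser, and offers little purchase on foreign states or bad actors who never enter the compliance process.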
The TTPA is designed to go further than other EU instruments with respect to the practice of microtargeting. As distinct from the DSA, it is not focused on the regulation of illegal online content. Moreover, it expounds the wider impact of targeting and amplification techniques on a broader range of fundamental rights and on democracy itself. However, by exempting the editorial freedom of the media (Recital 29) and ‘the sharing of information through electronic communication services, such as electronic message services … provided that no political advertising service is involved’ (Recital 48), the TTPA’s scope excludes significant avenues for the dissemination of targeted disinformation, for example through platforms such as WhatsApp.Footnote 105 Moreover, its requirement for informed consent with respect to targeting and amplification is ill-suited to the practice of microtargeting, especially if it involves interferences by foreign states or bad actors. This will require further thought and new regulatory tools.
15.6 Conclusion
This chapter has surveyed the phenomenon of MOD and critically assessed whether human rights law is equipped to respond to this complex challenge. Through a focus on selected EU legislative instruments, and being cognisant of the broader legal-normative order of the ECHR and the jurisprudence of the ECtHR, the discussion has identified considerable hurdles with respect to the regulation of MOD. Given the wide-ranging protections for the right to freedom of expression, the chapter has contended that the scope for human rights law to address the harms generated by MOD is limited. The chapter began by expounding the concept of disinformation before discussing the right to freedom of expression and the limitations it poses on regulating disinformation – a form of communication not normally regarded as unlawful. Building on this discussion, the chapter then proceeded to examine the practice of microtargeting and attendant data analytics and their effects on the right to privacy. A survey and analysis of recent legislative initiatives in the EU illustrated how human rights protection has extended beyond human rights law, for example by regulating the practices of very large online platforms, or by harnessing the tools of data protection law. While this ‘European approach’ can offer important lessons for the protection of human rights,Footnote 106 the chapter also highlighted important shortcomings related to issues such as enforcement and the wide-ranging scope for derogations and exemptions.
The regulation of MOD remains an evolving area. As emphasised in scholarly work and in policy debates, legal regulation is no panacea for the harms to human rights and democracy caused by fake news. The chapter therefore echoes the recommendation of Meyer and Marsden, who call for a holistic approach to disinformation that includes the development of digital literacy across all age groups.Footnote 107 This focus on digital literacy should be aligned with a broader political literacy strategy, which supports the capacity of citizens to access and analyse information and to participate in political discourse, both online and offline. As the recent push against the regulation of tech companies suggests,Footnote 108 there are worrying changes in the international political climate, which propel an anti-regulatory turn. Whether EU law can withstand these powerful global actors remains to be seen.Footnote 109