
11 - Disinfodemic Threats. Real, False, and Fake News

A Contribution to Fight Disinformation without Affecting the Freedom of Expression

from Introduction to Part II

Published online by Cambridge University Press:  24 October 2025

Tiina Pajuste, Tallinn University

Summary

This chapter examines the phenomenon of disinformation in the digital era and its implications for freedom of expression. It explores how the rapid dissemination of false, manipulated, and misleading information – termed a ‘disinfodemic’ – poses threats to human rights, democracy, and public trust. The chapter outlines the historical roots of disinformation, the technological factors that enable it, and the responses by public and private actors to mitigate its harmful effects. The chapter differentiates between disinformation (intentional), misinformation (unintentional), and malinformation (genuine information used to harm), while highlighting their diverse forms, such as fake news, deepfakes, and conspiracy theories. Disinformation erodes public trust, affects electoral integrity, threatens public health, and harms individuals’ rights to information and privacy. The chapter emphasises the necessity of finding a balance between combating disinformation and preserving freedom of expression.

Information

Type: Chapter
Book: Human Rights in the Digital Domain: Core Questions, pp. 199-240
Publisher: Cambridge University Press
Print publication year: 2025

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC 4.0 (https://creativecommons.org/cclicenses/).

11 Disinfodemic Threats. Real, False, and Fake News: A Contribution to Fight Disinformation without Affecting the Freedom of Expression

11.1 Introduction

The dissemination of false, incomplete, erroneous, distorted, or intrusive information is a phenomenon as old as humanity, but since the development of information and communication technologies (ICTs) it has re-emerged with a vigour never seen before. Its causes and effects have varied considerably, to the point that it has become a very dangerous threat not only to human rights but also to the stability of the democratic system.

As the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, explains:

More than 2,000 years ago, Octavian spun a vicious disinformation campaign to destroy his rival Mark Anthony and eventually become the first Roman emperor Augustus Caesar. Since those ancient times, information has been fabricated and manipulated to win wars, advance political ambitions, avenge grievances, hurt the vulnerable and make financial profit.

What is new is that digital technology has enabled pathways for false or manipulated information to be created, disseminated and amplified by various actors for political, ideological or commercial motives at a scale, speed and reach never known before. Interacting with political, social and economic grievances in the real world, disinformation online can have serious consequences for democracy and human rights.Footnote 1

Among the many examples of the use of false information to manipulate the population (or certain sectors of it), the alleged impact of fake news (selectively or widely disseminated through social media) on recent elections is usually cited: the 2016 referendum in the UK on whether it should remain part of the European Union (EU), so-called Brexit; the presidential election held the same year in the US, in which Donald Trump was elected; the resulting Facebook/Cambridge Analytica scandal; and the Italian elections of 2018. Even though there is no consensus that the impact of fake news on these processes was significant, the narrow margins of the results suggested that it may have been decisive.Footnote 2

The use of fake news for political and electoral purposes was not the only worrying reality. Other uses soon appeared and turned the phenomenon of disinformation into a much more troubling matter, as can be recognised in the context of the COVID-19 pandemic, during which a large amount of false information circulated – even without the objective of manipulating the population – with grievous results.

In this regard, Khan held:

In recent years, in a number of countries, State-led disinformation campaigns have sought to influence elections and other political processes, control the narrative of public debates or curb protests against and criticisms of Governments. In the context of the COVID-19 pandemic, there have been various instances of State actors disseminating unverified claims about the origins of the virus responsible for COVID-19, denying the spread of the disease or providing false information on infection rates, fatality figures and health-care advice. Such disinformation has been detrimental to efforts to control the pandemic, endangering the rights to health and life, as well as people’s trust in public information and State institutions.Footnote 3

Such misconduct by high-ranking public officials, deputies, and even presidents was also documented in many cases in Latin America.Footnote 4 This worrying reality has been favoured particularly by the appearance, within the framework of Web 2.0, of social media (websites and computer programs that allow people to communicate and share information on the internet using a computer or a mobile phone), where the use of social networks and of deepfakes emerged as the preferred and most effective way of communicating fake news to people likely to believe them.Footnote 5 But the phenomenon of fake news is due not only to the technological factor but also to other epistemological, economic, affective, and political factors.

As Melo points out when considering those factors:

rumors and falsehoods spread in two different ways: social cascades and group polarization […]. A cascade is born when a group of influential people say or do something and others follow in their footsteps. Group polarization occurs when people with intellectual affinities come together, and thus end up defending a more extremist version than the one they held before they began to communicate with each other […].

Cascades are generated because all people tend to depend on others: if most of the people we know believe a rumor, we are also inclined to believe it. In the absence of our own information, we accept the opinions of others […]. In the economy, rumors can give rise to speculative bubbles… in public health they can generate anti-vaccination movements […]. Rumors and falsehoods are often responsible for sowing panic […]. [F]ear spreads quickly from one person to another, and if rumors provoke strong emotions, such as fear or indignation, it is much easier for them to spread.

How to manage the risk that cascades and polarisations lead to people believing false rumours? The most intuitive answer is related to freedom of expression: people should be offered objective information and the necessary corrections from those who know the truth, but this does not always work […]. Many people trust the market of ideas as a way to guarantee the truth.Footnote 6

In this context, owing to the importance of the public and private interests at stake, neither states and the international community nor the mass media and other social media actors (websites and applications that enable users to create and share content or to participate in social networking) have remained static; they have gradually come to face the need to set limits on freedom of expression in order to prevent, or at least reduce, the pernicious effects and spread of false information. As Warren and Brandeis pointed out when new inventions converged at the end of the nineteenth century (the telegraph, the telephone, photography, and the rotary press) and competition between newspapers became more acute (causing the appearance of ‘yellow journalism’, or ‘infotainment’, as it is called today):

That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection. Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society.Footnote 7

In the most relevant declarations and constitutions of the late nineteenth century, freedom of expression was recognised as a preferred freedom, and as a consequence even false or erroneous speech was protected. However, following the ideas of Warren and Brandeis,Footnote 8 this freedom to inform – originally conceived in almost absolute terms – began to be subject to restrictions. Mechanisms such as the right of reply or the right to be forgotten in its pre-internet version (conceived not as a right to de-index content but as a right to be compensated for the consequences of publishing old information without current relevance that unjustifiably harms a person) are two clear examples. More recently, new factual configurations have demanded new responses from governments, the international community, and the private sector, especially from information society services, which bear a special responsibility when providing them. This need became much clearer in the last decade, when information was disseminated in the knowledge that it was false in order to alter electoral processes, to exert influence in the context of the war between Russia and Ukraine, and even in the COVID-19 pandemic scenario. As was clearly stated:

COVID-19 became the perfect storm for platforms. The increasing pressure they had been facing to remove problematic content became a public health issue. Disinformation surrounding the pandemic spread like the virus – at the hands of political leaders and influencers – community social media guidelines were insufficient and inconsistent, and those responsible for enforcing them had to confine themselves to their homes. Amid the confusion, platforms quickly began announcing new rules and measures to deal with misinformation. Regarding community rules, the actions reported during the pandemic have focused more on the rules on the content of the publications, than on rules about inauthentic activities.Footnote 9

The main problem in adopting those concrete and multiple measures to combat disinformation in general, and fake news in particular, in this more recent and complex context is how to evaluate the veracity of information and to determine which kinds of measures, adopted by the public and private sectors alike, could be compatible with democracy and human rights.

The purpose of this chapter is precisely to analyse in general terms the phenomenon of disinformation in the new technological context (which has added to the already wide field of action of traditional media the vaster field of the internet and social networks), and more particularly fake news, which has become even more popular in recent years as a weapon for achieving political or economic gains, leading to the ‘disinfodemic’ and to interesting and relevant conflicts between freedom of expression and various individual and collective rights (e.g., access to information, privacy, data protection).

There are, of course, other serious issues that have been and will continue to be generated by the new information technologies and that are closely linked to the problem of false news. We will not discuss them here for reasons of space, but they must at least be mentioned: dangerous speech (including hate speech, which can be combated with the same tools described here); the effects on a specific person’s rights caused by the permanence on the internet of negative information that is true and that, although lawfully published at the time, has lost relevance with the passage of time (which finds a partial remedy in the right to be forgotten); and the appearance of cancel or call-out culture, which condemns those deemed to have acted or spoken in an unacceptable manner with the aim of ostracising, boycotting, or shunning them (situations for which the right to be forgotten is clearly insufficient and which require other forms of mitigation).Footnote 10

11.2 Real and False News: Information, Disinformation, Misinformation, Malinformation, Propaganda, Fake, and Fabricated News

According to the Cambridge dictionary, information means ‘facts about a situation, person, event, etc.’ (UK), ‘news, facts, or knowledge’ (USA) and, in the business lexicon, ‘facts or details about a person, company, product, etc.’.Footnote 11

In the same dictionary, disinformation is ‘false information spread in order to deceive people’;Footnote 12 misinformation is ‘wrong information, or the fact that people are misinformed’;Footnote 13 fake news is ‘false stories that appear to be news, spread on the Internet or using other media, usually created to influence political views or as a joke’;Footnote 14 and propaganda is defined as ‘information, ideas, opinions, or images, often only giving one part of an argument, that are broadcast, published, or in some other way spread with the intention of influencing people’s opinions’.Footnote 15

Malinformation and real, false, and fabricated news are not defined in the dictionary. Beyond its association with the concept of news, the meaning of ‘information’ is not exempt from debate because some authors think that ‘information encapsulates truth, and hence that false information fails to qualify as information at all’, and explain that the distinction between ‘information as true, and misinformation and disinformation as false, collapses due to the possibility of true misinformation and true disinformation’.Footnote 16

In short, the defined terms are few and, furthermore, their definitions do not command unanimous agreement in the academic field, so we will address them here by drawing on other sources.

11.2.1 Disinformation, Misinformation, Malinformation

Within the wide range of modalities that disinformation encompasses, for academic purposes it is worth distinguishing between the following information disorders:

(a) Disinformation (false information shared intentionally to cause harm, for example, fabricated or deliberately manipulated content; intentionally created conspiracy theories or rumours),

(b) Misinformation (false information shared with no intention of causing harm, caused, for example, by unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or when satire is taken seriously), and

(c) Malinformation (‘genuine information shared with the intention to cause harm’Footnote 17; for example, the publication of private information for personal or corporate interest, such as revenge porn).

Regarding the use of the term ‘disinformation’, Khan explains in a recent report:

There is no universally accepted definition of disinformation. While the lack of agreement makes a global response challenging, the lack of consensus underlines the complex, intrinsically political and contested nature of the concept […]. Part of the problem lies in the impossibility of drawing clear lines between fact and falsehood and between the absence and presence of intent to cause harm […].

The European Commission has described disinformation as verifiably false or misleading information that, cumulatively, is created, presented and disseminated for economic gain or to intentionally deceive the public and that may cause public harm […] The Broadband Commission for Sustainable Development, on the other hand, has approached disinformation as false or misleading content with potential consequences, irrespective of the underlying intention or behaviours producing and circulating messages. National laws and regulations dealing with disinformation cover a varied combination of false or misleading information, the intention to cause harm or not and the nature of the harm caused or intended. Disinformation is often described in broad, ill-defined terms not in line with international legal standards […].

Academics have developed a taxonomy of an information disorder in which ‘disinformation’ is described as false information that is knowingly shared with the intention to cause harm, ‘misinformation’ as the unintentional dissemination of false information and ‘malinformation’ as genuine information shared with the intention to cause harm […]. By setting out a holistic and interconnected picture of the problem, the information disorder framework encourages a multidimensional, varied and contextualized approach to disinformation […].

Some academics have framed the phenomenon of disinformation as ‘viral deception’ consisting of three vectors: manipulative actors, deceptive behaviour and harmful content (Camille François, ‘Actors, behaviors, content: a disinformation ABC’ (Transatlantic Working Group, September 2019)). The focus is on online behaviour rather than on the veracity of content. Some large social media platforms, including Facebook, refer to these vectors to inform their policies on responding to coordinated inauthentic behaviour […].

Ultimately, the lack of clarity and agreement on what constitutes disinformation, including the frequent and interchangeable use of the term misinformation, reduces the effectiveness of responses (Submission from the United Nations Educational, Scientific and Cultural Organization, A/HRC/47/25). It also leads to approaches that endanger the right to freedom of opinion and expression. It is vital to clarify the concepts of disinformation and misinformation within the framework of international human rights law […].

For the purposes of the present report, disinformation is understood as false information that is disseminated intentionally to cause serious social harm and misinformation as the dissemination of false information unknowingly. The terms are not used interchangeably […].Footnote 18

11.2.2 Real and False News: Fake or Fabricated News

Even at the risk of oversimplification, false news can be understood, by exclusion, as information about something that has happened recently that is totally or partially untrue. To fully understand the scope of this definition – which obviously presents an information disorder – we must first determine what is meant by real news.

11.2.2.1 Real News

The definition of ‘real news’ is complex and can vary based on geographical and personal factors; it also involves many other difficulties because:

(a) the notion of truth depends on the subjective interpretation of reality,

(b) deciding whether something is ‘real’ or not requires a grounding of truth (e.g., the ‘collective consensus’), and

(c) most definitions take into account the editorial decision-making process followed to determine the credibility and accuracy of the information, considering that journalistic labour is governed by the principles of verification, independence, and the obligation to report the truth (this is so because false news typically imitates real information in its shape, but not in organisational process or intent).

Real news is created through journalism, and includes ‘hard news’ – serious important news that is considered to be of interest to many people, either in a particular area or country, or in the world;Footnote 19 ‘breaking news’ – information that is received and broadcast about an event that has just happened or just begun;Footnote 20 and ‘soft news’ – news that is a mixture of information and entertainment, often relating to people’s private lives.Footnote 21

In order to distinguish real news from what is not, Molina et al. indicate that real news has the following specific differential features:

I. Message and linguistic:

(a) factuality: fact-checked, impartial reporting, uses last names to cite,

(b) evidence: statistical data, research-based,

(c) message quality: journalistic style, edited and proofread,

(d) lexical and syntactic: frequent use of ‘today’, past tense,

(e) topical interest: conflict, human interest, prominence.

II. Sources and intentions:

(a) sources of the content: verified sources, quotes and/or attributions, heterogeneity of sources,

(b) pedigree: originated from a well-known site/organisation, written by actual news staff,

(c) independence: organisation associated with the journalist.

III. Structural:

(a) URL: reputable ending, URL has normal registration,

(b) About Us section: a clear About Us section, authors and editors who can be verified, a Contact Us section, emails from the professional organisation.

IV. Network:

(a) metadata: metadata indicators of authenticity.Footnote 22

Complementing this idea, Verma et al. point out that any fake news detection system used to decide whether an article is fake should exploit not only the textual features of tweets (e.g., writing style and emotions) but also the characteristics of the users who propagate fake news (e.g., follower count and verified profile). In this direction, various computational techniques, including long short-term memory networks, hierarchical attention networks, and natural language processing, are utilised to design fake news detection systems with improved accuracy.Footnote 23
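By way of illustration only – the following is a minimal sketch under stated assumptions, not the architecture described by Verma et al., and the example texts, labels, and feature choices are hypothetical – a detector that combines textual features with user characteristics might be wired together as follows:

```python
# Minimal sketch: a classifier combining textual features (a crude
# TF-IDF proxy for writing style) with user characteristics (follower
# count, verified profile). Hypothetical data; real systems of the kind
# surveyed by Verma et al. use LSTMs or hierarchical attention networks.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled tweets: (text, follower count, verified, label).
data = [
    ("BREAKING!!! Miracle cure hidden by doctors", 120, 0, 1),        # 1 = fake
    ("Ministry of Health reports 312 new cases today", 95000, 1, 0),  # 0 = real
    ("SHOCKING: share this before they delete it", 40, 0, 1),
    ("Parliament passed the budget bill on Tuesday", 51000, 1, 0),
]
texts = [d[0] for d in data]
# log1p tames the scale of follower counts; 'verified' stays 0/1.
user_feats = np.array([[np.log1p(d[1]), d[2]] for d in data])
labels = [d[3] for d in data]

# Vectorise the text, then append the user columns to the same matrix.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = hstack([vectorizer.fit_transform(texts), csr_matrix(user_feats)])
clf = LogisticRegression().fit(X, labels)

# Score a new post from an unverified, low-follower account.
new = hstack([vectorizer.transform(["INSANE!!! They don't want you to know"]),
              csr_matrix(np.array([[np.log1p(55), 0]]))])
print(clf.predict_proba(new))  # [P(real), P(fake)]
```

The design point is simply that both signal families enter one model: the text is vectorised, the user metadata is appended as extra columns, and the classifier weighs the two jointly when scoring a post.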

11.2.2.2 False Information: False News

It was established above that false news should be understood as information that, for different reasons, does not conform to the truth and for that reason constitutes an information disorder that can lead to disinformation, misinformation, or malinformation – even though those who create or disseminate the flawed content are not always aware of its falsity and do not always intend to harm. According to Verma, Rohilla, Sharma, and Gupta, the false information that, with a greater or lesser degree of fraudulence, circulates on social media exhibits various modalities and can be classified according to its content and the intentionality of the action, as follows:

(a) satire or parody (one that, although it does not seek to cause damage, has misleading potential),

(b) false connections (titles, images, and quotes are not faithful to the content),

(c) misleading content (information is distorted to create a different reality),

(d) false context (genuine information located in a factual or temporal context different from the real one),

(e) imposter content (the identity of genuine sources is impersonated),

(f) manipulated content (information or images manipulated to deceive), and

(g) fabricated content (totally false content, designed to deceive and cause damage).Footnote 24

A recent paper by D’Amorim and Fernandes de Oliveira Miranda lists the following mis-, dis-, and malinformation disorders:

(a) Fake news (information that is completely fabricated for the purpose of either making money or advancing a particular political or social agenda, typically by discrediting others; for example, exposed fabrications, hoaxes, and news satire).

(b) Hoaxes (another type of deliberate fabrication or falsification in the mainstream or social media, such as rumours, fake graphics or tables, false attribution of authorship, and dramatic images).

(c) News satire or parody (usually found on humorous news websites based on irony, often in a mainstream format).

(d) Fake reviews (especially used on e-commerce platforms to influence the purchase of products and services).

(e) Bias (belief bias, confirmation bias, and anchoring).

(f) Propaganda (commonly used as a dangerous persuasive political tool to shape large-scale opinion, influencing people through Pavlov’s theory of conditioned response, which pairs a stimulus with a conditioned response).

(g) Retracted papers (e.g., about 500 papers were backed by the false discovery of the ‘Piltdown man’, a supposed hominid fossil forged about 100 years ago that was supposed to reveal facts about the evolution of man, but was a fraud that took about forty years to detect).

(h) Conspiracy theories popularised on internet forums and social media (e.g., the COVID-19 anti-vaccine and flat-earth movements, undermining efforts to end a pandemic and challenging even consolidated scientific discoveries).

(i) The incorrect use of maps, charts, and graphics (misleading representations used in an attempt to support arguments).

(j) Phishing, as a malinformation mechanism that misuses personal and/or confidential information (identity theft, attempts to tarnish a reputation, profile cloning, denial of access to email, and financial loss are, for example, results of phishing).

(k) Filter bubbles (algorithm-based; they can amplify and at the same time isolate viewpoints and narratives, spreading misinformation by creating a personal ecosystem of information).

(l) Echo chambers (selective exposure of information or messages to users who have a more emotional relationship with information, favouring the circulation of information that reinforces their pre-existing views, maximising ideological polarisation, and reinforcing different types of intolerance).

(m) Political use of sensitive information (exaggerations, or purposely inflating or deflating numbers to make a point).

(n) Misuse of personal/confidential information (e.g., the Facebook/Cambridge Analytica scandal, an unprecedented data breach involving the harvesting of private information from over 50 million Facebook profiles).Footnote 25

11.2.2.3 Fake News and Propaganda

The broad dissemination of false information is commonly known as false or fake news, but these terms do not have the same meaning around the globe. Although some authors treat them as synonyms, others, as noted, view false news as a genre that includes fake news – a concept that was coined more precisely, first to refer to false statements emanating from high-profile public officials and later to include the expressions of the press and the criticisms disseminated through social networks.

The power of fake news lies in its ability to offer information in accordance with the existing convictions of like-minded people, and for this reason the cosmos of social media in general, and social networks in particular, is a fertile field for its gestation, since the business model of these applications leads to the generation of content and information that can be seen as credible and attractive. In this sense:

what makes fabricated news unique is the information environment we currently live in, where social media are key to dissemination of information and we no longer receive information solely from traditional gatekeepers. Nowadays, it is not necessary to be a journalist and work for a publication to create and disseminate content online. Laypersons write, curate, and disseminate information via online media. Studies show that they may even be preferred over traditional professional sources. This is particularly troublesome given that individuals find information that agrees with prior beliefs as more credible and reliable, creating an environment that exacerbates misinformation because credible information appears alongside personal opinions.Footnote 26

We stated before that the fabrication and dissemination of false news for political reasons is an ancient phenomenon, but this phenomenon unexpectedly re-emerged with great vigour in the last decade, and clear evidence of this is that the Oxford dictionary selected ‘post-truth’ as the word of the year in 2016 and the Collins dictionary did the same with the term ‘fake news’ in 2017.

As recently stated:

the concept known as ‘disinformation’ during the World Wars and as ‘freak journalism’ or ‘yellow journalism’ during the Spanish war, can be traced back to 1896 (Campbell, 2001; Crain, 2017). Yellow journalism was also known for publishing content with no evidence and therefore factually incorrect, often for business purposes (Samuel, 2016). In Yarros’ (1922) critique, yellow journalism is characterized as ‘brazen and vicious “faking,” and reckless disregard of decency, proportion and taste for the sake of increased profits.’ As if history were repeating itself, the phenomenon regained attention during the 2016 U.S. Presidential elections.Footnote 27

The very recent resurgence of fake news has brought new uses and a narrowing of the concept (most exclude from the term the publication of false stories used as a joke). As Wardle points out, the term has recently been used by politicians (e.g., the former president of the US, Donald Trump) ‘as a weapon to attack a free and independent press’.Footnote 28 In line with this, Barclay narrowly defines fake news as ‘information that is completely fabricated for the purpose of either making money or advancing a particular political or social agenda, typically by discrediting others’.Footnote 29

As Botero Marino points out, if we understand fake news as the publication or massive dissemination of false information of public interest, knowing its falsehood and with the intention of deceiving or confusing the public or a fraction of it, the concept is based on three elements: a material element (the massive dissemination of false information), a cognitive element (the effective knowledge of the falsity of the information that is manufactured and/or divulged), and a volitional element (the intention to deceive or confuse the public or a fraction of it). The volitional element is particularly useful in order to distinguish fake news from satire. Satire can consist of the publication of false information, knowing that it is false, without – or with – the intention of misleading or confusing the public. Satire, in accordance with the jurisprudence of the European Court of Human Rights (Vereinigung Bildender Künstler v. Austria),Footnote 30 enjoys special and reinforced protection from the right to freedom of expression.Footnote 31

As a consequence of this new and restricted concept, it is more obvious that the dissemination of fake news is a core component of propaganda, as both constitute forms of distorting reality in order to influence the opinions and actions of a given audience and obtain beneficial results.Footnote 32

Molina et al. refer to the different possible contents included in the concept of fake news:Footnote 33

(a) false news/hoaxes,Footnote 34

(b) commentary, opinion and feature writing,Footnote 35

(c) misreporting,Footnote 36

(d) polarised and sensationalist content,Footnote 37

(e) satirical news,Footnote 38

(f) persuasive information,Footnote 39 and

(g) citizen journalism.Footnote 40

In addition, they conclude that identifying fabricated information online before it goes viral is imperative for maintaining an informed citizenry able to make the decisions required in a healthy democracy, and that machine learning drawing on several types of indicators could provide a solution for identifying fabricated information despite its overlapping boundaries with other types of content.Footnote 41 With this in mind, they propose an original taxonomy of online content as a precursor to identifying the signature features of fabricated news, including a table in which seven indicators (fact checked, emotionally charged, source verification, registration inconsistency, site pedigree, narrative writing, and humour), converted into yes/no questions for a decision-tree algorithm, are provided to distinguish fake news from other types of content. The aim is precisely to develop algorithms for detecting fabricated news – not to impinge on users’ right to express their opinions or on journalists’ endeavours, but to stop the dissemination of false information.Footnote 42
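Purely as an illustration – the ordering, rules, and outcome labels below are hypothetical and do not reproduce Molina et al.’s published table – a decision over seven such yes/no indicators could be sketched like this:

```python
# Hypothetical rule-based sketch over the seven yes/no indicators named
# by Molina et al. (fact checked, emotionally charged, source
# verification, registration inconsistency, site pedigree, narrative
# writing, humour). Rule order and outcomes are illustrative only.
from dataclasses import dataclass

@dataclass
class Indicators:
    fact_checked: bool
    emotionally_charged: bool
    source_verified: bool
    registration_inconsistent: bool
    reputable_pedigree: bool
    narrative_writing: bool
    humour: bool

def classify(x: Indicators) -> str:
    if x.humour:
        return "satire"                       # humorous intent, not deception
    if x.fact_checked and x.source_verified:
        return "real news"
    if x.registration_inconsistent or not x.reputable_pedigree:
        return "likely fabricated"
    if x.emotionally_charged and x.narrative_writing:
        return "polarised/sensationalist content"
    return "needs human review"

print(classify(Indicators(
    fact_checked=False, emotionally_charged=True, source_verified=False,
    registration_inconsistent=True, reputable_pedigree=False,
    narrative_writing=True, humour=False,
)))  # -> likely fabricated
```

A trained decision tree would induce the ordering of these questions from labelled examples; the sketch only shows how seven boolean answers can partition content into the categories discussed above.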

11.2.2.4 Ten Keys Suggested by the Private Sector and Academia to Detect and Stop the Spread of Fake News

A recent study found that 86 per cent of Spaniards have difficulty distinguishing between real and false information; its researchers provided the following ten tips for detecting when information is false (a minimal sketch automating some of these checks follows the list):

(a) Be wary of headlines. Fake news often has eye-catching headlines in all caps, with exclamation marks and often shocking and unheard-of information.

(b) Always examine the URL. A fake web address, or one that copies a real one, can indicate fake news. Check the characters of the URL carefully, because there are always small details.

(c) Investigate the source of the news, particularly before sharing or disseminating it. Some platforms such as Facebook (until 2025) or Google have enabled fact-checking buttons so that users can certify the veracity of the information.

(d) Pay attention to the format. Many fake news sites have misspellings or odd layouts.

(e) Take a close look at the photos and do an image search. Fake news often contains manipulated images or videos, sometimes based on authentic photos taken out of context.

(f) Check the dates. Fake news can have a nonsensical timeline or include altered dates.

(g) Check the facts and the author’s sources to confirm that they are accurate. If the identity of supposed experts is not mentioned, the news may be false.

(h) Check other news. If no other news source is reporting the same story, it may be false.

(i) Consider that the story may be a joke, especially when the source is known for its parodies. If the details and the tone suggest that it has been written humorously, it is not fake news.

(j) Be critical. Some stories are false on purpose, for hidden or malicious purposes.Footnote 43
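Several of these manual checks – eye-catching headlines (a), look-alike URLs (b), and sloppy formatting (d) – lend themselves to simple automation. The following is a minimal, hypothetical sketch of such heuristics, not a tool used or endorsed by the study:

```python
# Hypothetical heuristics automating three of the ten tips: (a) all-caps,
# exclamatory headlines; (b) URLs that mimic a known domain; (d) sloppy
# formatting. A real checker would combine many more signals.
import re
from urllib.parse import urlparse

KNOWN_DOMAINS = {"bbc.co.uk", "lemonde.fr", "nytimes.com"}  # illustrative list

def suspicious_headline(title: str) -> bool:
    caps_ratio = sum(c.isupper() for c in title) / max(len(title), 1)
    return caps_ratio > 0.5 or title.count("!") >= 2

def one_edit_apart(a: str, b: str) -> bool:
    # Crude check for a single insertion, deletion, or substitution.
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    for i in range(min(len(a), len(b))):
        if a[i] != b[i]:
            return a[i + 1:] == b[i + 1:] or a[i:] == b[i + 1:] or a[i + 1:] == b[i:]
    return True  # same prefix, one trailing character extra

def lookalike_url(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # e.g., "nytime.com" is one deletion away from "nytimes.com".
    return any(one_edit_apart(host, known) for known in KNOWN_DOMAINS)

def sloppy_formatting(text: str) -> bool:
    return bool(re.search(r"\s{3,}|!{3,}|\?{3,}", text))

print(suspicious_headline("SHOCKING!! YOU WONT BELIEVE THIS!"))  # True
print(lookalike_url("http://nytime.com/breaking"))               # True
```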

11.2.3 The Four Top-Level Disinformation Responses

The crisis generated by the disinfodemic phenomenon has led both public and private sectors (particularly the international community, states, digital service providers, and non-governmental organisations) to adopt measures to mitigate it.

In particular, state responses seek to punish the authors or broadcasters administratively or criminally, following the recommendations of the UN Special Rapporteur for Freedom of Opinion and Expression, who recently reported that state responses to the growing phenomenon of disinformation consist fundamentally of internet shutdowns and different kinds of regulation – criminal laws on defamation, consumer protection, financial fraud, social media, and social networks – and who recommends that, ‘[i]n consonance with their obligation to respect human rights, States should not make, sponsor, encourage or disseminate claims that they know or should reasonably know to be false […]’.Footnote 44

Private stakeholders, for their part, have mainly responded by adopting self-regulation (spurred by the pressure of governments on large internet platforms, perceived as ‘facilitators’ of the phenomenon), deployed via:

(a) mass media (in the USA, the Washington Post’s Fact Checker; in France, Libération’s Désintox and Le Monde’s Les Décodeurs; in the UK, Channel 4 News’ Fact Check and The Guardian’s Reality Check blog, among others),

(b) social networks (e.g., various tools used by Facebook to facilitate detection and reporting: a search initiative, a news literacy campaign, and monitoring of the work done in News Feed),

(c) other internet service providers (e.g., Google’s Fact Check and some browser extensions, such as Pinocchio alerts, FiB Stop Living a Lie, This is Fake, B.S. Detector, and Fake News Alert), and

(d) non-governmental organisations (e.g., Fast Check CL in Chile, Newtral in Spain, FactCheckEU.org in Europe, FactCheck.org and PolitiFact.com in the USA, Africa Check in Africa, Chequeado.com in Argentina, and AltNews in India; they check the quality of high-social-impact content and warn about possible falsehoods). In this direction, the Poynter Institute for Media Studies has been developing, with several civil society and media organisations from around the world, the International Fact-Checking Network, which recently approved a code of principles on fact-checking.

We will focus infra on these efforts, following the ‘four top-level disinformation responses’ classified by the Broadband Commission in the recent report on Freedom of Expression and Addressing Disinformation on the Internet.Footnote 45

11.2.3.1 Identification Responses

These kinds of responses involve the monitoring and analysis of information channels in order to detect the presence of disinformation through two kinds of activities (called subtypes in the report):

(a) Monitoring and fact-checking, carried out by internet communications companies, academia, and news, civil society, and independent fact-checking organisations and their partnerships.Footnote 46

(b) Investigative responses, aimed at determining whether a given message or content is totally or partially false and at providing insights into disinformation campaigns, including the originating actors, degree of spread, and affected communities.Footnote 47

11.2.3.2 Law and Policy Responses Aimed at Producers and Distributors

These kinds of responses aim to alter the environment that governs and shapes the behaviour of the producers and distributors of disinformation, specifically:

(a) Legislative, pre-legislative, and policy responses, encompassing regulatory intervention.Footnote 48

(b) National and international counter-disinformation campaigns, tending to focus on the construction of counter-narratives.Footnote 49

(c) Specific responses aimed at combating election-related disinformation and designed to detect, track, and counter disinformation that is spread during elections, owing to its impact on democratic processes and citizen rights. This involves a combination of monitoring and fact-checking, legal, curatorial, technical, and other responses, which will be cross-referenced as appropriate.Footnote 50

11.2.3.3 Responses within the Processes of Production and Distribution

These include:

(a) Curatorial responses, primarily editorial and content policy and ‘community standards’.Footnote 51

(b) Technical and algorithmic responses, implemented by social media platforms, video-sharing platforms, and search engines, but which can also be third-party tools (e.g., browser plug-ins) or experimental methods from academic research, using algorithms and/or artificial intelligence to detect and limit the spread of disinformation, or to provide context or additional information on individual items and posts.Footnote 52

(c) De-monetisation responses, designed to stop profit and thus discourage the creation of clickbait, counterfeit news sites, and other kinds of for-profit disinformation.Footnote 53

11.2.3.4 Responses Supporting the Target Audiences (Victims) of Disinformation Campaigns

These responses include guidelines, recommendations, resolutions, media and data literacy, content credibility labelling initiatives, and other tools to influence curation in terms of the prominence and amplification of certain content. They are sub-classified in the report into:

(a) Ethical and normative responses, carried out on international, regional, and local levels, involving the public condemnation of acts of disinformation or recommendations and resolutions aimed at thwarting these acts and sensitising the public to these issues.Footnote 54

(b) Educational responses, aimed at promoting media and information literacy, critical thinking, and verification in the context of online information consumption, as well as journalist training.Footnote 55

(c) Empowerment and credibility labelling efforts around building content verification tools and web content indicators, in order to empower citizens and journalists to avoid falling prey to online disinformation.Footnote 56

11.2.4 International and Inter-American Human Rights System Responses

The worrying current reality of the fake news phenomenon has led the international community to propose the adoption of effective measures and tools to combat fake news, while making clear both the scope of freedom of expression and the limits that measures aimed at combating disinformation must respect.

In this direction the UN Special Rapporteur for Freedom of Opinion and Expression, the Representative for Freedom of the Media of the Organization for Security and Cooperation in Europe (OSCE), the Organization of American States (OAS) Special Rapporteur for Freedom of Expression, and the Special Rapporteur on Freedom of Expression and Access to Information of the African Commission on Human and Peoples’ Rights adopted several documents that, with greater or lesser specificity, refer to fake news and disinformation issues: in 2017, the Joint Declaration on Freedom of Expression and Fake News, Disinformation and Propaganda; in 2018, the Joint Declaration on Media Independence and Diversity in the Digital Age; in 2019, the Twentieth Anniversary of the Joint Declaration: Challenges to Freedom of Expression in the Next Decade; in 2020, the Joint Declaration on Freedom of Expression and Elections in the Digital Age; in 2021, the Joint Declaration on Politicians and Public Officials and Freedom of Expression, and in 2022 the Joint Declaration on Freedom of Expression and Gender Justice.Footnote 57 Also in 2022, and clearly facing the consequences of fake news during the Russia and Ukraine war, the UN Special Rapporteur for Freedom of Opinion and Expression also published a report on Disinformation and Freedom of Opinion and Expression during Armed Conflicts.Footnote 58

11.2.4.1 The ‘Standards on Disinformation and Propaganda’ Set Out in the Joint Declaration on Freedom of Expression and Fake News, Disinformation and Propaganda

This document states that general prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, and that criminal defamation laws and civil liability are legitimate only where defendants have had a full opportunity to prove the truth of their statements and have failed to do so, and cannot rely on other defences, such as fair comment.Footnote 59

It also states that state actors should not make, sponsor, encourage, or further disseminate statements that they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda), and should take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security, and the environment.

The principles set out in that document are as follows:

(a) States may only impose restrictions on the right to freedom of expression in accordance with the test for such restrictions under international law, namely that they be provided for by law, serve one of the legitimate interests recognised under international law, and be necessary and proportionate to protect that interest.

(b) Restrictions on freedom of expression may also be imposed, as long as they are consistent with the requirements noted in paragraph 1(a), to prohibit advocacy of hatred on protected grounds that constitutes incitement to violence, discrimination, or hostility (Article 20(2), International Covenant on Civil and Political Rights).

(c) Intermediaries should never be liable for any third-party content relating to those services unless they specifically intervene in that content or refuse to obey an order adopted in accordance with due process guarantees by an independent, impartial, authoritative oversight body (such as a court) to remove it and they have the technical capacity to do so.

(d) Consideration should be given to protecting individuals against liability for merely redistributing or promoting, through intermediaries, content of which they are not the author and which they have not modified.

(e) State-mandated blocking of entire websites, IP addresses, ports, or network protocols is an extreme measure that can only be justified where it is provided by law and is necessary to protect a human right or other legitimate public interest, including in the sense that it is proportionate, there are no less intrusive alternative measures that would protect the interest, and it respects minimum due process guarantees.

(f) Content filtering systems that are imposed by a government and are not end-user controlled are not justifiable as a restriction on freedom of expression.

11.2.4.2 The Joint Declaration on Media Independence and Diversity in the Digital Age

This document states that restrictions on what may be disseminated through the media must be provided by law, serve one of the legitimate interests recognised under international law, and be necessary and proportionate to protect that interest.Footnote 60 It rejects the use of vague notions, such as ‘information security’ and ‘cultural security’, as a basis for restricting freedom of expression.

It also states that media outlets and online platforms should enhance their professionalism and social responsibility, including potentially by adopting codes of conduct and fact-checking systems, and by putting in place self-regulatory systems, or participating in existing ones, to enforce them.

11.2.4.3 The Twentieth Anniversary Joint Declaration: Challenges to Freedom of Expression in the Next Decade

This document, considering private control a threat to freedom of expression, states that there is an urgent need to adopt measures addressing the ways in which the advertising-dependent business models of some digital technology companies create an environment that can also be used for the viral dissemination of, inter alia, deception, disinformation, and hateful expression.Footnote 61 It also urges human rights-sensitive solutions to the challenges caused by disinformation, including the growing possibility of deepfakes, in publicly accountable and targeted ways, using approaches that meet the international law standards of legality, legitimacy of objective, and necessity and proportionality.

11.2.4.4 The Joint Declaration on Freedom of Expression and Elections in the Digital Age

This document states that Member States should ensure that any restrictions on freedom of expression that apply during election periods comply with the international law three-part test requirements of legality, legitimacy of aim, and necessity, which implies the following:Footnote 62

(a) There should be no prior censorship of the media, including through means such as the administrative blocking of media websites or internet shutdowns.

(b) Any limits on the right to disseminate electoral statements should conform to international standards, including the principle that public figures should be required to tolerate a higher degree of criticism and scrutiny than ordinary citizens.

(c) There should be no general or ambiguous laws on disinformation, such as prohibitions on spreading falsehoods or non-objective information.

The declaration also states that the media, both legacy and digital, should be exempted from liability during election periods for disseminating statements made directly by parties or candidates, unless the statements have specifically been held to be unlawful by an independent and impartial court or regulatory body, or the statements constitute incitement to violence and the media outlet had a genuine opportunity to prevent their dissemination.

Regarding restrictions during elections, it recommends that Member States consider supporting positive measures to address online disinformation, such as the promotion of independent fact-checking mechanisms and public education campaigns, while avoiding rules criminalising disinformation. It also states that online intermediaries should not be held liable for dis-, mis-, or malinformation disseminated over their platforms unless they specifically intervene in that content or fail to implement a legally binding order to remove it.

It also states that digital media and online intermediaries should make a reasonable effort to address dis-, mis-, and malinformation and election related spam, including through independent fact-checking and other measures, such as advertisement archives, appropriate content moderation, and public alerts.

11.2.4.5 The Joint Declaration on Politicians and Public Officials and Freedom of Expression

This document recommends that Member States repeal or refrain from adopting general prohibitions on the dissemination of inaccurate information, such as false news or fake news laws, and that they respect the following standards in relation to disinformation and false news:Footnote 63

(a) adopt policies that provide for disciplinary measures to be imposed on public officials who, when acting or perceived to be acting in an official capacity, make, sponsor, encourage, or further disseminate statements that they know or should reasonably know to be false;

(b) ensure that public authorities make every effort to disseminate accurate and reliable information, including about their activities and matters of public interest.

It also states that, given the harm done by hate speech, including to its targets’ ability to exercise their right to freedom of expression and to participate in political activities, Member States should:

(a) prohibit by law any advocacy of hatred that constitutes incitement to discrimination, hostility, or violence, in accordance with international law;

(b) undertake a range of activities – including education and counter-messaging – to combat intolerance and promote social inclusion and intercultural understanding.

In this regard, several international organisations have begun to focus their attention on public servants’ obligations to make truthful statements, highlighting those made by public officials and candidates for elected positions.

Knowledge of falsity, or reckless disregard for the truth, is the basis of the doctrine of ‘actual malice’ enshrined by the US Supreme Court in New York Times v. Sullivan for freedom of expression and of the press.Footnote 64 This criterion has been taken up by the Inter-American human rights system, where due diligence is required for – and not only for – journalistic activity,Footnote 65 since officials must affirm true facts and, before making a judgement, must apply verification criteria superior to those expected of any other person; failure to do so can, in certain circumstances, generate consequences ranging from criminal liability to ethical sanction. Therefore, within the framework of Article 13 of the American Convention on Human Rights, establishing specific responsibilities and obligations for certain persons, owing to their functions, to tell the truth or not to lie, intentionally or negligently, does not necessarily constitute an illegitimate restriction on the right to freedom of expression; expression made in the knowledge of its falsehood – the actual malice standard – falls outside the protection that false expression a priori enjoys.

In this direction, in the jurisprudence of the Inter-American Court, officials are charged – owing to their function and position in society – with a duty to verify the facts (higher than that of an ordinary person) and to satisfy both the publicity of government acts and access to public information.

As the Inter-American Court stated in two resounding cases ruled against Venezuela,Footnote 66 the exercise of freedom of expression is not the same for a private individual as it is for public officials: in a democratic society it is not merely legitimate but sometimes a duty of state authorities to pronounce on matters of public interest. In doing so, however, they are subject to certain limitations insofar as they must verify, reasonably although not necessarily exhaustively, the facts on which they base their opinions, and they should do so with even greater diligence than that required of individuals, by reason of their high office, the wide scope and possible effects their statements may have on certain sectors of the population, and the need to prevent citizens and other interested persons from receiving a manipulated version of certain events.

The Court adds that these limitations are clearly based on the damage that false statements can cause, and that, given that public officials perform the role of guarantors of the fundamental rights of the people, their statements cannot ignore those rights or constitute forms of direct or indirect interference with, or harmful pressure on, the rights of those who seek to contribute to public deliberation through the expression and dissemination of their thoughts. This duty of special care is particularly accentuated in situations of greater social conflict, disturbance of public order, or social or political polarisation, precisely because of the risks that such statements may imply for certain people or groups at a given time.Footnote 67

More recently, in the context of the COVID-19 pandemic, the Inter-American Court adopted a resolution in which it refers to the obligations of Member States, recommending they:

Observe special care in the pronouncements and statements of public officials with high responsibilities regarding the evolution of the pandemic. In the current circumstances, it is a duty for state authorities to inform the population, and when pronouncing in this regard, they must act diligently and have a reasonable scientific basis. Also, they must remember that they are exposed to greater scrutiny and public criticism, even in special periods. Governments and Internet companies must deal with and transparently combat the disinformation that circulates regarding the pandemic.Footnote 68

Projecting these premises, criminal and administrative provisions can be found in Latin American domestic laws, such as codes and laws of ethics for public office and even health regulations, from which public officials’ obligations to tell the truth arise.

Consequently, in criminal matters, to protect public faith, mendacious actions by public officials are punished as crimes of ideological falsification of public instruments,Footnote 69 while in the field of administrative law what is protected is the trust of the governed in acts of public power (‘legitimate confidence’); when this is violated, the administrative act is nullified.Footnote 70 Among the codes and laws of ethics in the public function, honesty and integrity stand out as express demands on the work of public officials.Footnote 71 These rules have also been incorporated into both regional and global soft law norms.Footnote 72

There are also specific regulations aimed at health officials, especially regarding the obligation not to produce false or misleading messages, which mainly refer to advertising pharmaceuticals and public health issues.Footnote 73

Legal obligations are also established for candidates for elected public office, particularly to prevent ‘dirty’ campaigns (those in which offences are committed, information is invented, and slander intrudes on a candidate’s private life), which have been denounced recently in Latin AmericaFootnote 74 and have given rise to regulation, especially in electoral and political party laws.Footnote 75

In this sense, the Broadband Commission for Sustainable Development, co-founded by UNESCO and the International Telecommunication Union (ITU), recommends in a recent report that:

Political parties and other political actors could: 1) Speak out about the dangers of political actors as sources and amplifiers of disinformation and work to improve the quality of the information ecosystem and increase trust in democratic institutions. 2) Refrain from using disinformation tactics in political campaigning, including the use of covert tools of public opinion manipulation and ‘dark propaganda’ public relations firms.Footnote 76

In any case, when weighing the rights involved and when giving concrete solutions, it appears advisable to consider what Sunstein has said:

In life and in politics, truth matters. In the end, it might matter more than anything else. It is a precondition for trust and hence for cooperation. But what, exactly, can governments do to restrict the dissemination of falsehoods in systems committed to freedom of speech? In brief: Much less than some of them want, but much more than some of them are now doing. I have argued in favor of a general principle: False statements are constitutionally protected unless the government can show that they threaten to cause serious harm that cannot be avoided through a more speech-protective route. I have also suggested that when lies are involved, the government may impose regulation on the basis of a weaker demonstration of harm than is ordinarily required for unintentional falsehoods. Reasonable people can disagree about how to apply these ideas in concrete cases. In general, however, this general principle, and the accompanying suggestion, give a great deal of constitutional protection to falsehoods and even lies.

[…] public officials have considerable power to regulate deepfakes and doctored videos. They are also entitled to act to protect public health and safety, certainly in the context of lies, and if falsehoods create sufficiently serious risks, to control such falsehoods as well. In all of these contexts, some of the most promising tools do not involve censorship or punishment; they involve more speech-protective approaches, such as labels and warnings.Footnote 77

11.2.4.6 The Joint Declaration on Freedom of Expression and Gender Justice

This document promotes the adoption of ‘education programs, social policies, cultural practices, and laws and policies that prohibit discrimination and sexual and gender-based violence and to promote equality and inclusion’, considering specifically that ‘online gender-based violence, gendered hate speech and disinformation, which cause serious psychological harm and can lead to physical violence, are proliferating with the aim of intimidating and silencing women, including female politicians, journalists and human rights defenders’.Footnote 78

11.2.4.7 The UN Special Rapporteur for Freedom of Opinion and Expression Report on Disinformation and Freedom of Opinion and Expression during Armed Conflicts

In this report, Khan examines the challenges that information manipulation poses to freedom of opinion and expression during armed conflict. She notes that the information environment in the digital age has become a dangerous theatre of war in which state and non-state actors, enabled by digital technology and social media, weaponise information to sow confusion, feed hate, incite violence, and prolong conflict.Footnote 79

Emphasising the vital importance of the right to information as a 'survival right' on which people's lives, health, and safety depend, she recommends that human rights standards be reinforced alongside international humanitarian law during armed conflicts; urges states to reaffirm their commitment to upholding freedom of opinion and expression and to ensure that action to counter disinformation, propaganda, and incitement is well grounded in human rights; recommends that social media companies align their policies and practices with human rights standards and apply them consistently across the world; and concludes by reiterating the need to build social resilience against disinformation and to promote multi-stakeholder approaches that engage civil society as well as states, companies, and international organisations.

11.2.4.8 The European Commission Communication on ‘Tackling Online Disinformation: A European Approach’ and the Report on the Implementation of This Communication

In March 2015, the European Council invited the High Representative to develop an action plan to address Russia's ongoing disinformation campaigns, which resulted in the creation of the East StratCom Task Force, operational as planned since September 2015.

In a June 2017 resolution, the European Parliament called upon the Commission ‘to analyse in depth the current situation and legal framework with regard to fake news and to verify the possibility of legislative intervention to limit the dissemination and spreading of fake content’.Footnote 80

In March 2018, the European Council stated: ‘social networks and digital platforms need to guarantee transparent practices and full protection of citizens’ privacy and personal data’,Footnote 81 and subsequently the European Commission adopted the Communication ‘Tackling online disinformation: a European Approach’,Footnote 82 focused on combating the effects of online disinformation through: (a) a more transparent, trustworthy, and accountable online ecosystem, (b) secure and resilient election processes, (c) fostering education and media literacy, (d) support for quality journalism as an essential element of a democratic society, and (e) countering internal and external disinformation threats through strategic communication.

Later in the same year, the European Commission adopted a Report on the implementation of that Communication,Footnote 83 which delineates the challenges online disinformation presents to our democracies and outlines five clusters of actions for private and public stakeholders that respond to these challenges.

The five sets of actions suggested by the Communication on tackling online disinformation are analysed below.

11.2.4.8.1 A More Transparent, Trustworthy, and Accountable Online Ecosystem.

The first set of actions involves four objectives: (a) online platforms to act swiftly and effectively to protect users from disinformation, (b) strengthening fact checking, collective knowledge, and monitoring capacity on disinformation, (c) fostering online accountability, and (d) harnessing new technologies.

11.2.4.8.1.1 Online Platforms to Act Swiftly and Effectively to Protect Users from Disinformation.

This objective is pursued through:

  1. (a) the adoption of a code of practice to be used by online platforms and the advertising industry in order to increase transparency,

  2. (b) the creation of an independent European network of fact-checkers to establish common working methods, exchange best practices, and achieve the broadest possible coverage across the EU,

  3. (c) the promotion of voluntary online identification systems for the traceability and identification of suppliers of information, and

  4. (d) the use of the EU research and innovation programme (Horizon 2020) to mobilise new technologies, such as artificial intelligence, blockchain, and cognitive algorithms.

The commitments, monitored by the Commission, are organised under five fields in the self-regulatory Code of Practice on Disinformation, drafted in 2018 by a Multistakeholder Forum on Disinformation:

  1. (a) Scrutiny of ad placements (deploy policies and processes to disrupt advertising and monetisation incentives for relevant behaviours).

  2. (b) Political advertising and issue-based advertising (all advertisements should be clearly distinguishable from editorial content; enable public disclosure of political advertising; devise approaches to publicly disclose ‘issue-based advertising’).

  3. (c) Integrity of services (put in place clear policies regarding identity and the misuse of automated bots on their services; put in place policies on what constitutes impermissible use of automated systems).

  4. (d) Empowering consumers (invest in products, technologies, and programmes to help people make informed decisions when they encounter online news that may be false; invest in technological means to prioritise relevant, authentic and authoritative information where appropriate in searches, feeds, or other automatically ranked distribution channels; invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest; partner with civil society, governments, educational institutions, and other stakeholders to support efforts aimed at improving critical thinking and digital media literacy; encourage market uptake of tools that help consumers understand why they are seeing particular advertisements).

  5. (e) Empowering the research community (support good faith independent efforts to track disinformation and understand its impact; not prohibit or discourage good faith research into disinformation and political advertising on platforms; encourage research into disinformation and political advertising).Footnote 84

11.2.4.8.1.2 Strengthening Fact Checking, Collective Knowledge, and Monitoring Capacity on Disinformation.

The Commission committed, as a first step, to supporting the creation of an independent European network of fact-checkers.

As a second step, it committed to launching a secure European online platform on disinformation, offering analytical tools and cross-border data collection, including Union-wide open data and platform usage data, to support the detection and analysis of disinformation sources and dissemination patterns.

The Commission organised a series of technical workshops with representatives of the fact-checking community in 2018. It selected relevant projects under the research and innovation programme Horizon 2020. Furthermore, in cooperation with the European Parliament, it organised a fact-checking conference in view of the European elections.

These actions have contributed to: (a) mapping and networking independent fact-checking organisations in the Member States, (b) ascertaining which tools and services are essential and facilitating the improvement of fact-checking activities and their impact (e.g., access to EUROSTAT data, translation tools, automated stream of fact-checks produced by the relevant fact-checking organisations), (c) identifying professional and ethical standards for independent fact-checking, and (d) providing tools and infrastructural support to fact-checking organisations.

To prepare the second step, the Commission proposed the creation of a new digital service infrastructure under the Connecting Europe Facility work programme 2019 for the establishment of a European Platform on Disinformation. In this direction, in June 2020, the Commission launched the European Digital Media Observatory (EDMO). Some civil society actors fear, however, that the inclusion of a former Facebook executive and several organisations sponsored by Google could impact EDMO's missions, especially those that involve monitoring how online platforms follow the EU's Code of Practice on Disinformation, which was signed in 2018 by the online platforms Facebook, Google, Twitter, and Mozilla, and by advertisers and parts of the advertising industry (who presented their roadmaps to implementation), then by Microsoft in 2019 and by TikTok in 2020. The Code of Practice, strengthened in 2021, is set to evolve towards a co-regulatory instrument as outlined in the Digital Services Act (DSA).Footnote 85

11.2.4.8.1.3 Fostering Online Accountability.

With a view to increasing trust and accountability online, the Commission committed to promoting the use of voluntary online systems allowing the identification of suppliers of information based on trustworthy electronic identification and authentication means.Footnote 86

11.2.4.8.1.4 Harnessing New Technologies.

The Commission committed to making full use of the Horizon 2020 framework programme to mobilise new technologies and explore the possibility of additional support for tools that combat disinformation, accelerating time-to-market for high-impact innovation activities, and encouraging the partnering of researchers and businesses.

Furthermore, in the proposal for the Horizon Europe programme, the Commission has proposed dedicating efforts to: (a) safeguard democratic and economic stability through the development of new tools to combat online disinformation, (b) better understand the role of journalistic standards and user-generated content in a hyper-connected society, and (c) support next generation internet applications and services including immersive and trustworthy media, social media, and social networking.Footnote 87

11.2.4.8.2 Secure and Resilient Election Processes.

The second set of actions addresses manipulative and disinformation tactics employed during electoral periods. To enable secure and resilient election processes, the Communication proposed to initiate a continuous dialogue to support Member States in managing the risks that cyber-attacks and disinformation pose to the democratic electoral process.Footnote 88

11.2.4.8.3 Fostering Education and Media Literacy.

The third set of actions focuses on fostering education and media literacy. The life-long development of critical and digital competences is crucial to reinforce the resilience of our societies to disinformation. The Communication proposed new actions to this end, including: (a) supporting the provision of educational materials by independent fact-checkers and civil society organisations to schools and educators, (b) organising a European Week of Media Literacy, (c) exploring the possibility of adding media literacy to the criteria used by the Organisation for Economic Co-operation and Development in its comparative reports on international student assessment, and (d) further encouraging the implementation of ongoing initiatives on digital skills, education, and traineeship.Footnote 89

11.2.4.8.4 Support for Quality Journalism as an Essential Element of a Democratic Society.

The fourth set of actions aims to support quality journalism as an essential element of a democratic society. Quality news media and journalism can uncover and dilute disinformation by providing citizens with high-quality and diverse information. The Communication proposed to enhance the transparency and predictability of state aid rules for the media sector by making an online repository of decisions available. It also proposed to launch a call in 2018 for the production and dissemination of quality news content on EU affairs through data-driven news media, and to explore increased funding opportunities to support initiatives promoting media freedom and pluralism and the modernisation of newsrooms.Footnote 90

11.2.4.8.5 Countering Internal and External Disinformation Threats through Strategic Communication.

In line with the April Communication, the European Commission worked to ensure the internal coordination of its communication activities aimed at tackling disinformation. In this context, it created an internal Network against Disinformation, the primary purpose of which is to enable its services to better detect harmful narratives, support a culture of fact-checking, provide fast responses, and strengthen effective positive messaging.

The Commission reinforced cooperation with the European Parliament and the East StratCom Task Force through a tripartite forum aimed at operationalising the institutions' respective efforts to counter disinformation ahead of the 2019 European elections. In May 2021, the Commission presented Guidance to strengthen the Code of Practice on Disinformation, which aims to address gaps and shortcomings, create a more transparent, safe, and trustworthy online environment, and lay out the cornerstones of a robust monitoring framework for its implementation. The Guidance also aims at evolving the existing Code of Practice towards the co-regulatory instrument foreseen under the DSA, offering an early opportunity to design appropriate measures to address systemic risks related to disinformation stemming from the functioning and use of the platforms' services, in view of the anticipated DSA risk assessment and mitigation framework.Footnote 91

More recently, the Assembly, grouping the signatories of the Code and new signatories willing to subscribe to and take on commitments under the 2021 Code, met on 8 July 2021 to start the process of strengthening the Code of Practice on Disinformation. Members of the Assembly approved a vade mecum on the organisation and functioning of the process to shape and draft the strengthened Code.Footnote 92

11.2.5 Rules Emerging Both from International Regulations and Recommendations regarding Freedom of Expression and Other Individual and Collective Rights Involved in the Creation and Dissemination of False Information

A proactive and efficient intervention to combat the disinfodemic phenomenon, respectful of conflicting rights and in particular freedom of information, needs first to determine what kind of information warrants intervention without unduly affecting freedom of expression, and second, which obligations and responsibilities the public and private sectors each bear.

So far, however, little has been done to distinguish when disinformation requires some kind of intervention, or to determine what measures must be taken in accordance with the specific obligations of each sector and the differing responsibilities of those who disseminate it. This is particularly the case for information issued by public officials and candidates for elected office, on whom, depending on the legal system, ethical and legal obligations weigh that become even more intense when the dissemination of such statements can cause serious damage.

Referring to the emerging rules of the inter-American human rights system and based on what we have stated previously, Botero Marino indicates that there are two important standards applicable to any state effort aimed at prohibiting or regulating fake news:

  1. (a) the simple objective falsehood of the expression cannot be the object of prohibition or state sanction, and

  2. (b) it is only justified to restrict freedom of expression in order to protect the rights of third parties or public order understood from a narrowly defined democratic perspective.

She adds that, in order to attend precisely to the protection of the rights of third parties and of democratic public order without undue restrictions, a test must be applied that weighs various relevant factors to determine whether, and to what extent, to act on false expressions and, eventually, on those who generate and distribute them.Footnote 93

Complementing this perspective, Sunstein affirms that the relevant considerations include: (a) the intentionality of the speaker (whether he or she is knowingly lying, is negligent, doubts the statement and is not interested in corroborating its veracity, or believes that what he or she says is true); (b) the magnitude of the damage the expression can cause (serious, moderate, slight, non-existent); (c) the probability of that damage occurring (certain, probable, improbable, impossible); and (d) the moment at which it would occur (immediate, near future, not near future, distant future).Footnote 94

To measure the probability of damage, it is also relevant to consider the social position of the speaker, since it clearly affects his or her credibility (a minister of health referring to treatments against COVID-19 is not the same as any person lacking the support of specific studies). And in order to measure the personal consequences for the speaker, it will be necessary to assess whether specific legal obligations to tell the truth, or to take special care before issuing a statement, weigh on him or her, whether because of his or her function or because of contractually assumed obligations (e.g., acceptance of a social network's terms and conditions, which establish guidelines on how to express oneself and the consequences that may lead the network's administrator to restrict or delete an account, as happened with Facebook's sanction of former president Donald Trump, whose account was suspended while he was in office, and for two years, because the expressions posted on the network incited the assault on the Capitol).

Based on the combination of these parameters, it is clear that actions by both states and the private sector must rest on a correct weighing of the conflicting interests at stake and, on that basis, decide whether and to what extent it is feasible to intervene. In this direction, mathematical formulas similar to the one Susi proposes for the application of the right to be forgotten online (RTBF 2.0) should help.Footnote 95
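
To make the weighing exercise concrete, the following toy sketch combines Sunstein's four factors into a single intervention score, in the spirit of the formula-based approach alluded to above. It is a minimal illustration under stated assumptions: the ordinal scales, weights, and the official-speaker multiplier are hypothetical and do not reproduce Susi's RTBF 2.0 formula or any court's or platform's actual test.

```python
# Illustrative sketch only: a toy weighing function inspired by the factors
# discussed above (speaker intent, harm magnitude, probability, imminence).
# The scales, weights, and multiplier are hypothetical assumptions.

from dataclasses import dataclass

# Ordinal scales; higher values favour intervention.
INTENT = {"believes_true": 0, "indifferent": 1, "negligent": 2, "knowing_lie": 3}
MAGNITUDE = {"non_existent": 0, "slight": 1, "moderate": 2, "serious": 3}
PROBABILITY = {"impossible": 0, "improbable": 1, "probable": 2, "certain": 3}
IMMINENCE = {"distant": 0, "not_near": 1, "near": 2, "immediate": 3}

@dataclass
class Statement:
    intent: str
    magnitude: str
    probability: str
    imminence: str
    speaker_is_official: bool  # officials carry stronger duties of veracity

def intervention_score(s: Statement) -> float:
    """Combine the four factors; no single factor alone should suffice."""
    score = (
        1.0 * INTENT[s.intent]
        + 1.5 * MAGNITUDE[s.magnitude]   # harm weighs more than intent alone
        + 1.0 * PROBABILITY[s.probability]
        + 0.5 * IMMINENCE[s.imminence]
    )
    if s.speaker_is_official:
        score *= 1.25  # higher credibility raises the probability of harm
    return score

# A health official knowingly spreading a harmful 'cure' scores high;
# an ordinary user sharing a dubious claim in good faith scores low.
official = Statement("knowing_lie", "serious", "probable", "immediate", True)
citizen = Statement("believes_true", "slight", "improbable", "distant", False)
print(intervention_score(official), intervention_score(citizen))
```

The value of writing the test down in this form is not to automate the decision, but to force each factor, and the weight assigned to it, to be made explicit and therefore reviewable.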

The crisis generated by disinformation has provoked both state responses, which seek to punish its authors or broadcasters administratively or criminally, and private responses based on self-regulation, deployed by social networks (e.g., the various tools used by Facebook to facilitate detection and reporting, such as ranking and flagging, a search initiative, a news literacy campaign, and monitoring of the work done in News Feed), by other internet service providers (e.g., Google's Fact Check tools), and by non-governmental organisations that check the quality of content that may have a high social impact and warn about its possible falsehood (e.g., at a global level, the International Fact-Checking Network).

Botero Marino warns that the use of algorithms as a means of combating the spread of fake news is only compatible with freedom of expression, as stated by the special rapporteurs for freedom of expression in their 2017 Joint Declaration, if it: (a) is based on transparent and objectively justifiable criteria, (b) fully guarantees the right to due process of the interested parties, and (c) includes the participation of citizen initiatives dedicated to fact-checking based on transparent codes of ethics.

She also recalls that prohibiting or regulating fake news at the state level is not only not the least restrictive remedy for freedom of expression against this particular type of speech, but is also structurally incompatible with the very functioning of democracy, since in a democracy the best remedy for lies is free democratic debate. As for fact-checking, she notes that, although it is a growing trend, there are still very few organisations dedicated to it, so a corresponding fact-check is available online for only some news items. There is also the problem that, even when a news item has been fact-checked, the fact-check is not always easy to find on the World Wide Web. For this reason, different organisations dedicated to fact-checking have designed practical guides so that (in theory) anyone can independently verify any news item, such as those of FactChecker.org, AfricaCheck, FullFact, and Les Décodeurs. PolitiFact.com even offers a blacklist of web portals dedicated to spreading deliberately false information. The common idea behind these guides is, to use the expression of the Supreme Court of the US, that 'each person is his own guardian of the truth'.Footnote 96

11.2.6 Actions Adopted by the Media and Civil Society Organisations to Detect and Report Fake News

As mentioned earlier, both the media and civil society organisations have set to work to help detect, and consequently combat, fake news. Among various initiatives, the creation of websites, browser extensions, and applications dedicated to this fight stands out. By way of example, the following initiatives are worth mentioning:

  1. (a) Maldito Bulo (Damn Hoax): a website, plus profiles on Twitter, Facebook, and Instagram, where false news and hoaxes are reported. Internet users can help by tagging posts with #MalditoBulo.

  2. (b) El Tragabulos (The Gobbler): a section of the Verne supplement of the newspaper El País aimed at debunking hoaxes and identifying false news and information that has gone viral on the internet and social networks.

  3. (c) WikiTribune: promoted by one of the co-founders of Wikipedia, it is a collaborative editing project that offers journalistic information verified by experts before publication.

  4. (d) The Trust Project Mirror: various important media outlets (among them, the newspapers El Mundo and El País) are responsible for alerting the major search engines that the information being disseminated is false.

  5. (e) Canales de Vost Spain (Vost Spain channels): Vost Spain is the platform for emergency volunteers, which becomes essential during events and emergency notifications.

  6. (f) Salud sin Bulos (Health without Hoaxes): an initiative of the Association of Researchers in eHealth offering reliable and responsible information on sensitive issues such as health alerts, medicines, and news affecting the health field.

  7. (g) Fake News Detector: extension for Firefox and Chrome browsers that facilitates marking and detecting fake news and reporting it on social network profiles.

  8. (h) Oficina de Seguridad del Internauta (Internet Security Office): warnings about and debunkings of hoaxes are constantly posted on its website, alongside threat reports and cybersecurity advice.

  9. (i) Facterbot: provides the user with a bot that periodically sends information about fake news through Facebook Messenger.

11.2.7 Actions Adopted by Social Media Platforms: Three Examples

The first response of social media companies to the aforementioned electoral crises of the last decade was very moderate, but the persistence of information disorders, exacerbated by the appearance of COVID-19, provoked important changes and led them to fight proactively against the pernicious effects of fake news. Before the pandemic, platforms had taken a rather passive role towards disinformation content, focusing their intervention on the authenticity of accounts and the visibility of publications. The leaders of these companies reasonably argued that they should avoid becoming judges of public debate; they also explained that, from a commercial and political perspective, arbitrating content was already a costly and time-consuming task, and that trying to evaluate the veracity of information would end up pitting social media platforms against political parties, governments, and civil society. In short, in the pre-pandemic world, social media approached disinformation delicately, with minimal and exceptional intervention in content.

But the COVID-19 emergency constituted the perfect storm and forced a change: the platforms began to focus on identifying inauthentic actions and judging content through moderation processes that rely heavily on human analysis, carried out by people trained to make complex decisions on a massive scale, often without sufficient context and under considerable emotional pressure.

Human intervention was inevitable because, although algorithms can detect suspicious behaviour, spam, and certain types of content – especially videos and photos – they do not know how to resolve dilemmas inherent in the context of an expression, much less decide on the veracity of information. This mode of damage control has been characterised by inconsistent rules and processes and slow progress on transparency.Footnote 97
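
The division of labour just described, algorithmic detection feeding human judgement, can be pictured with a short sketch. Everything in it (the marker list, the thresholds, the function names) is a hypothetical illustration of the triage logic, not any platform's real pipeline.

```python
# A minimal sketch of the hybrid moderation flow described above: automated
# detection routes suspect items to human review rather than deciding
# veracity itself. Markers and thresholds are hypothetical illustrations.

from typing import Literal

Decision = Literal["allow", "human_review", "auto_remove"]

def classifier_score(text: str) -> float:
    """Stand-in for an ML model that flags spam-like or suspicious content."""
    suspicious_markers = ("miracle cure", "share before deleted", "100% proven")
    hits = sum(marker in text.lower() for marker in suspicious_markers)
    return min(1.0, hits / len(suspicious_markers))

def triage(text: str) -> Decision:
    score = classifier_score(text)
    if score >= 0.9:   # only near-certain spam/abuse is handled automatically
        return "auto_remove"
    if score >= 0.3:   # contextual dilemmas go to trained human reviewers
        return "human_review"
    return "allow"

print(triage("This miracle cure is 100% proven, share before deleted!"))  # auto_remove
print(triage("New study on vaccine efficacy published today."))           # allow
```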

In this direction, as has been stated in recent studies, these kinds of stakeholders adopted four types of responses:

  1. (a) Awareness actions: political actions by the platforms, including partnerships with other actors, the promotion of educational campaigns, digital and media literacy, and so on. These actions seek to build a positive ecosystem around the problem of disinformation, empowering the actors and strategies expected to combat the phenomenon. By definition, they do not involve changes to the platforms' codes or policies.

  2. (b) Changes to the code of the platforms: including changes to the algorithms for recommendations, visibility, and scope of content.

  3. (c) Policy changes and moderation actions: including actions that implement internal policies or external mandates for the removal of content reported as illegal or contrary to community guidelines.

  4. (d) Transparency and public relations actions: revealing information about the operation of the platform, generated by the platforms and by independent researchers, and abstract or wishful statements from platforms on how to deal with the challenge of disinformation.Footnote 98

We will now discuss the evolution of the measures adopted by three of the most important social networks (Facebook, Twitter, and YouTube), with particular attention to the public interest exception contained in their community rules, following especially the two previously mentioned studies.Footnote 99

11.2.7.1 Facebook: Meta

Before the pandemic, the publication of fake news on this platform did not violate its community rules. Pressured by events, governments, civil society organisations, and public opinion, it therefore began to take action against fake news through specific policies, weighing the public interest at stake against the risk of harm, taking into account international human rights standards, and considering in each case several factors, such as the particularities of each country (e.g., whether it recognises freedom of the press, whether an election is in progress, or whether the country is at war), the content of what was said (e.g., posts that can lead to violence and harm), and whether it relates to governance or politics. However, the use of the public interest exception revealed some inconsistencies.Footnote 100

In the case of publications related to politicians, Mark Zuckerberg alluded to the newsworthiness exception when he stated:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.Footnote 101

In any case, he admitted that Facebook would begin to label the content it leaves online, in the application of the exception, and that it would allow people to share it in order to condemn it.

Until 2025, Facebook took the following approach. Generally, it did not remove content but rather reduced its distribution, relying on third-party verification (concrete fact-checking by civil society organisations), alerting users who saw the content or were about to share it, penalising it by reducing its visibility in the News Feed, and restricting the accounts that created or shared it repeatedly. Once a piece of content was flagged, proactive detection methods were activated to find possible duplicates.
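
As an illustration of how 'proactive detection methods' for possible duplicates of a flagged post might work for text, the following sketch uses word-shingle Jaccard similarity; this is an assumption made for exposition, not Facebook's actual matching technology, which also covers images and video.

```python
# Illustrative sketch: flag near-duplicates of an already fact-checked post
# by comparing overlapping word shingles. Threshold and shingle size are
# hypothetical choices, not any platform's real parameters.

import re

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def is_probable_duplicate(candidate: str, flagged: str,
                          threshold: float = 0.5) -> bool:
    return jaccard(shingles(candidate), shingles(flagged)) >= threshold

flagged_post = "Drinking bleach cures the virus, doctors hide this fact"
reworded = "Doctors hide this fact: drinking bleach cures the virus!"
print(is_probable_duplicate(reworded, flagged_post))  # True
```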

The company also:

  1. (a) banned the spreading of content that shows, confesses, or encourages acts of physical harm to human beings, false calls to emergency services, or participation in high-risk viral challenges, under the community rules' section on 'coordinating harm and publicizing crime',

  2. (b) as reported in blog posts, deleted disinformation about health that could contribute to imminent physical harm, and

  3. (c) focused significant efforts on identifying 'information or influence operations', such as coordinated actions that use automated and human accounts to amplify content, intimidate other users, and capture conversations and trends.Footnote 102

Since January 2020, when COVID-19 was not yet a global pandemic, the company announced that it was controlling disinformation through labelling, filtering, and content removal; it disabled the option to search for virus-related augmented reality effects on Instagram; excluded content and organic accounts related to COVID-19 from recommendations; and alerted people who had interacted (with 'likes', reactions, or comments) with content that had been debunked, including recommendations for reliable information, practices that were extended to Instagram.

The company also adopted various measures concerning paid advertising: content to be advertised was submitted in advance by the contracting parties and approved or rejected by the platform (rejected where there were possible violations of the community rules), so rather than removing content, the platform simply declined to authorise it.

Along these lines, even before the pandemic was declared, the platform banned ads that sought to create panic or a sense of urgency regarding supplies and products linked to COVID-19, or that guaranteed the cure or prevention of the virus. In March 2020 it banned the sale of COVID-19 masks, hand sanitisers, disinfecting wipes, and test kits, but from August it allowed the promotion of non-medical masks (subject to compliance with certain requirements), hand sanitisers, and disinfecting wipes.

With the approval of COVID-19 vaccines, the company began to remove false claims (such as those affirming that the vaccines are a cure for the virus) and conspiracy theories about them that, in the opinion of public health experts and authorities, could cause harm. As the vaccines neared availability, it also banned advertised content selling vaccination kits or promising accelerated access to the vaccine. Similar actions were taken regarding advertised content that could affect the availability of biosafety items, even where the advertisements were not misinformative.

11.2.7.2 Twitter [X]

Twitter traditionally maintained that, as an open platform, it should not become an arbiter of the truth. Before 2020, however, Twitter changed its community policies and confronted misinformation by focusing more on the activity of actors (accounts) than on content, concentrating its efforts on preventing automated use of the platform for manipulation purposes. The company began to include some prohibitions related to disinformation in the context of electoral processes, banning the posting of misleading information about how to participate, voter suppression and intimidation content, and false or misleading information about political affiliations. The platform then introduced the possibility for users to report tweets that violate this Election Integrity policy (later renamed the Civic Integrity policy ahead of the 2020 US census and presidential election campaign).

Twitter went from being the platform that intervened least in user content to the most active, while applying a public interest test: a piece of content is of public interest, to be weighed against the possible risk and severity of harm, 'if it constitutes a direct contribution to the understanding or debate of a matter that concerns the whole public'. In this regard, the community rules consider that the exception is:

  1. (a) more likely to apply in some cases (e.g., when the tweet is directed at government officials or when it provides important context for ongoing geopolitical events), and

  2. (b) less likely to apply in others (e.g., in cases of terrorism, violence, or electoral integrity, or where the tweet constitutes hate speech or harassment or includes a call to action).

No exceptions are made for multimedia content related to child sexual exploitation, non-consensual nudity, and violent sexual assault on victims.Footnote 103

In October 2019, Twitter addressed the use of the exception for world leaders in order to ensure people’s right to know about their leaders and demand accountability. When using the exception, a filter can be put in place.Footnote 104

In March 2020, the platform warned that it would expand its definition of harm to include content that goes directly against the instructions of authoritative sources in global and local public health.Footnote 105 It then prohibited tweets that deny the recommendations of the authorities with the intention of prompting people to act against them; encourage breaking social distancing; recommend ineffective treatments (even if they are not harmful or are shared humorously); recommend harmful treatments; deny scientific data on the transmission of the disease; make statements that incite action or cause panic, discomfort, or disorder on a large scale; include false or misleading information about diagnosis; or claim that specific groups or nationalities are not susceptible, or are more susceptible, to the virus. In this direction, the platform acted on tweets from presidents Bolsonaro and Trump and even temporarily suspended the account of Trump's lawyer, Rudy Giuliani, for similar reasons.Footnote 106

The exceptions only apply to tweets from elected and government officials and candidates for political office. If a tweet covered by this exception is kept online, Twitter adds a warning or filter that provides context about the breach of the rules (an interstitial warning appears before someone can view the content). This also prevents the tweet from being recommended by Twitter's algorithm and limits users' ability to interact with it. Twitter applied the exception to a tweet by Donald Trump for violation of its rules on disinformation and COVID-19.Footnote 107 However, the implementation of these measures was inconsistent and raised interpretative issues.Footnote 108

In May 2020, Twitter updated its approach to COVID-19 information, explaining that it would act on misleading information, controversial statements, and unverified claims, but would remove content only in cases of misleading information with a propensity for severe harm; in all other cases it would use labels (phrases below the tweet, accompanied by an exclamation mark, referring to reliable information) and filters or notices (hiding the questioned tweet and notifying the user that the content differs from the guidance of public health experts).Footnote 109
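
The tiered approach just summarised can be expressed as a simple decision rule. The category names and the mapping below are a reading of the policy as described in this chapter, not Twitter's verbatim enforcement matrix.

```python
# A simplified encoding of the tiered approach described above: only
# misleading information with a propensity for severe harm is removed;
# other cases get a label or a hiding notice. Categories and mapping are
# assumptions drawn from the summary in the text.

from enum import Enum

class Claim(Enum):
    MISLEADING = "misleading information"
    CONTROVERSIAL = "controversial statement"
    UNVERIFIED = "unverified claim"

def covid_action(claim: Claim, severe_harm_propensity: bool) -> str:
    if claim is Claim.MISLEADING and severe_harm_propensity:
        return "remove"  # the only removal case in this summary
    if severe_harm_propensity:
        return "notice"  # hide the tweet behind a warning before viewing
    return "label"       # phrase below the tweet linking to reliable sources

print(covid_action(Claim.MISLEADING, True))    # remove
print(covid_action(Claim.UNVERIFIED, False))   # label
print(covid_action(Claim.CONTROVERSIAL, True)) # notice
```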

Regarding ads, from April 2020 Twitter banned sensationalist or panic-inducing content and advertisements with inflated prices. Likewise, it prohibited the sale of masks and sanitisers and did not allow mention of vaccines, treatments, or test kits, except in information published by media outlets that the platform exempted under its policy on political advertisements (later, in August 2020, these bans were limited to medical masks).

Since the acquisition of the company by Elon Musk, freedom of expression has been given greater weight in its policies of use, as its new owner expressed in a tweet published in November 2022: 'New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max deboosted & demonetised, so no ads or other revenue to Twitter. You won't find the tweet unless you specifically seek it out, which is no different from rest of Internet.'Footnote 110

11.2.7.3 YouTube

YouTube’s general strategy against disinformation rests on three principles:

  1. (a) the preservation of content on the platform unless it violates its community guidelines,

  2. (b) the possibility of monetising publications is a privilege, and

  3. (c) the videos must meet exacting standards for the platform to recommend them.Footnote 111

The Community Guidelines do not expressly regulate the public interest exception, but the platform's chief executive officer has argued that content posted by politicians that violates its rules could be kept up because it is important for people to know what politicians think. Other exceptions are granted to certain kinds of speech with educational, documentary, scientific, or artistic content, but they are not applicable in cases of harmful or dangerous content; violent or graphic content; incitement to hatred or violence; publications that promote, recommend, or assert that the use of harmful substances or treatments may have health benefits; or content manipulated or modified to deceive the user in a way that may involve a serious risk of egregious harm.Footnote 112

In an effort to ensure authentic behaviour, YouTube prohibits publications that deceptively seek to redirect users to other sites or artificially inflate engagement metrics (views, comments, 'likes'), as well as playlists with misleading titles or descriptions that make users believe they will watch different videos from those in the list. Furthermore, only quality videos can be monetised, and in each intervention concerning published content, YouTube analyses the context to understand the intent of a video.Footnote 113

The eruption of the COVID-19 pandemic led to the creation of a new 'policy on medical misinformation related to Covid-19': owing to its 'serious risk of egregious harm', the platform removes content that spreads erroneous medical information contradicting the guidelines of the World Health Organization or local health authorities regarding the treatment, prevention, diagnosis, or transmission of the virus. On a first violation, the user receives a warning; thereafter, each violation adds a strike, and three strikes result in the permanent removal of the channel.Footnote 114
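
The escalation just described, a warning on the first violation and then strikes, with three strikes ending the channel, amounts to a small state machine. The sketch below mirrors only what the text states; details such as strike expiry windows are omitted.

```python
# A minimal sketch of the warning-then-strikes escalation described above.
# It reflects only the rule as summarised in this chapter, not YouTube's
# full enforcement policy.

class Channel:
    def __init__(self, name: str):
        self.name = name
        self.warned = False
        self.strikes = 0
        self.removed = False

    def register_violation(self) -> str:
        if self.removed:
            return "channel already removed"
        if not self.warned:            # first violation: warning only
            self.warned = True
            return "warning issued"
        self.strikes += 1              # later violations add strikes
        if self.strikes >= 3:
            self.removed = True
            return "channel permanently removed"
        return f"strike {self.strikes} of 3"

ch = Channel("example")
for _ in range(4):
    print(ch.register_violation())
# warning issued, strike 1 of 3, strike 2 of 3, channel permanently removed
```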

11.3 Brief Conclusions

As initially stated, fake news is not a new phenomenon, but in the heat of the extraordinary technological changes of recent decades (especially the last two), it has become a huge problem for societies in general and for the functioning of democracies in particular, constituting an enormous challenge for the international community as well as for governments, companies, and civil society.

The regulation of fake news became essential for various political and public health reasons, both of which emerged with great force in the second half of the past decade, but regulation remains incomplete and in some cases confusing.

What is not in doubt, beyond the conflicts with freedom of expression, is the need for the regulation that has existed up to now, whether international, national, or from the private sector, to be deepened through clear and homogeneous guidelines across the globe, good practices, and intelligent initiatives; for this, international cooperation is essential.

The task is not easy, but it is not impossible to carry out either. Let us keep working on it.

Footnotes

1 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, ‘Disinformation and freedom of opinion and expression’, UN doc. A/HRC/47/25, 13 April 2021, 2.

2 On Trump’s election, see Y. Benkler, R. Faris, and H. Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford: Oxford University Press, 2018), and about the Italian elections and their impact on the populist vote in Trentino and Tirol del Sur during the 2018 general elections, see M. Cantarella, N. Fraccaroli, and R. Volpe, ‘Does fake news affect voting behavior?’, Dipartimento di Economia Marco Biagi, Università Degli Studi di Modena e Reggio Emilia, Working Paper Series No. 146, June 2019, http://155.185.68.2/wpdemb/0146.pdf.

3 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, ‘Disinformation and freedom of opinion and expression’, UN doc. A/HRC/47/25, 13 April 2021.

4 Among various examples: two Argentine national deputies recommended the use of chlorine dioxide to combat the virus despite the contraindications published by the World Health Organization; a Brazilian ministry published a video stating that the use of masks was not effective against the virus and was even harmful to health, and former President Jair Bolsonaro maintained that hydroxychloroquine, azithromycin, ivermectin, Annita, zinc, and vitamin D were successful in treating COVID-19, despite the lack of scientific evidence in this regard and disregarding the existence of side effects. The president of Guatemala, Alejandro Giammattei, did the same; he assured an international event that ivermectin is a substitute for the vaccine against COVID-19. In the same sense, in Chile, the Mayor of Santiago falsely argued that there is no scientific evidence showing that public transport is a source of COVID-19 contagion (Center for Studies on Freedom of Expression and Access to Information, 'Are public officials' lies unsustainable or do they have far reaching effects? A study on the obligations of the government and its officials to prevent the proliferation of disinformation', August 2021, www.palermo.edu/Archivos_content/2021/cele/papers/Disinformation-and-public-officials.pdf).

5 According to the Collins dictionary, a deepfake is ‘a way of adding a digital image or video over another image or video, so that it appears to be part of the original […] an image or video that has been changed in this way’ – Collins, ‘Definition of ‘deepfake’, www.collinsdictionary.com/dictionary/english/deepfake#:~:text=uncountable%20noun,a%20real%20threat%20to%20democracy.

6 V. E. Melo, Fake News (Buenos Aires: La Ley, 2022), pp. 6–8.

7 S. Warren and L. Brandeis, ‘The right to privacy’ (1890), 4 Harvard Law Review 5, 193–220, at 193.

9 C. Cortés and L. F. Isaza, ‘The new normal? Disinformation and content control on social media during COVID-19’, CELE, Palermo University, April 2021, www.palermo.edu/Archivos_content/2021/cele/papers/Disinformation-and-Content-Control.pdf, 11.

10 Cancelling, calling out, calling in, and boycotting are all distinct actions. Calling in is speaking to an individual privately about their perceived harmful or problematic actions or opinions. Calling out is criticising an individual or organisation publicly, usually on social media. This kind of action can be a useful tactic when calling in fails, or when the problematic individual or company is too powerful or distant from someone to be called in (e.g., a celebrity). Calling out is seen by some activists as an effective way to create change and reduce harm while allowing an offending individual to learn, grow, and change. Boycotting is withholding financial support from a company in order to force change within that company's policies or practices. Once demands have been met, support is resumed. Cancelling is a collective attempt to ruin the reputation and livelihood of an individual or organisation in response to a problematic or harmful action or opinion. All these different actions share a commonality: their overarching goal is to reduce perceived harm to an individual or class, thereby creating social change. In other words, these four different terms draw attention to actions or opinions that may be hurting people in order to ultimately phase those behaviours or opinions out of the culture. Cancel culture is taking away support for an individual, their career, popularity, and/or fame because of something they have said or done that is considered unacceptable. To be cancelled is effectively to be boycotted, with the intent that the person will be ostracised and no longer benefit financially, personally, or professionally from their elevated position; in short, culturally blocked from having a prominent public platform or career. Cancel culture affects public figures more than politicians and public officials (ideological positions and partisan alignments play a huge role here) and, obviously, more than non-public persons. Most of the time, a person is cancelled because he or she is a public figure with influence over a huge audience and something he or she has done or said is alleged to have caused harm to a particular person, group of people, or community. Some see participating in cancel culture as the most effective way to hold public figures to account, especially if no other lawful way appears to be working. By bringing the grievance public, it forces the accused's employers and others to confront the situation and distance themselves from the perpetrator. In other words, it rebalances the power gap between those with huge audiences and the people or communities who could be negatively affected. However, others believe cancel culture is more of a 'mob mentality' that has gone out of control. (E. Bunch, 'The cancel-culture glossary for cancelling, boycotting, calling out, and calling in', 23 July 2020, www.wellandgood.com/cancel-culture-examples/#:~:text=Canceling%2C%20calling%20out%2C%20calling%20in,demographic%2C%20thereby%20creating%20social%20change).

11 Cambridge dictionary, ‘information’, https://dictionary.cambridge.org/dictionary/english/information.

12 Cambridge dictionary, ‘disinformation’, https://dictionary.cambridge.org/dictionary/english/disinformation.

13 Cambridge dictionary, ‘misinformation’, https://dictionary.cambridge.org/dictionary/english/misinformation.

14 Cambridge dictionary, ‘fake news’, https://dictionary.cambridge.org/dictionary/english/fake-news.

15 Cambridge dictionary, ‘propaganda’, https://dictionary.cambridge.org/dictionary/english/propaganda.

16 K. Santos-D’Amorim and M. K. Fernandes de Oliveira Miranda, ‘Misinformation, disinformation, and malinformation: clarifying the definitions and examples in disinfodemic times’ (2021) 26 Revista eletrônica de biblioteconomia e ciência da informação e76900, 7, www.redalyc.org/journal/147/14768130011/movil/.

17 C. Wardle and H. Derakhshan, Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking (Strasbourg: Council of Europe, 2017), p. 5.

18 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, ‘Disinformation and freedom of opinion and expression’, UN Doc. A/HRC/47/25, 13 April 2021, 3.

19 Cambridge dictionary, ‘hard news’, https://dictionary.cambridge.org/dictionary/english/hard-news.

20 Cambridge dictionary, ‘breaking news’, https://dictionary.cambridge.org/dictionary/english/breaking-news.

21 Cambridge dictionary, ‘soft news’, https://dictionary.cambridge.org/dictionary/english/soft-news.

22 M. D. Molina, S. S. Sundar, T. Le, and D. Lee, '"Fake news" is not simply false information: a concept explication and taxonomy of online content' (2021) 65 American Behavioral Scientist 2, 180–212.

23 V. Verma et al., ‘Fake news detection on Twitter’, in S. Tiwari et al. (eds.), Advances in Data and Information Sciences: Proceedings of ICDIS 2022 (New York: Springer, 2022), pp. 141–9.

24 AECOC Innovation hub, ‘Facebook lucha contra las “fake news”’ (Facebook fights against fake news), www.aecoc.es/innovation-hub-noticias/facebook-lucha-contra-las-fake-news/.

25 Santos-D'Amorim and Fernandes de Oliveira Miranda, 'Misinformation, disinformation, and malinformation'.

26 Molina et al., ‘“Fake news” is not simply false information’.

28 C. Wardle, ‘How you can help transform the internet into a place of trust’, TED talk, 2019, www.youtube.com/watch?v=3iD4HwJ-67Q&t=89s.

29 D. A. Barclay, 'Confronting the wicked problem of fake news: a role for education?', Cicero Foundation Great Debate Paper no. 18/03, May 2018, www.cicerofoundation.org/lectures/Donald_Barclay_Confronting_Fake_News.pdf, 6.

30 Vereinigung Bildender Künstler v. Austria, Application no. 68354/01, Judgment of 25 January 2007, para. 33.

31 C. Botero Marino, ‘La regulación estatal de las llamadas “noticias falsas” desde la perspectiva del derecho a la libertad de expresión’, in OAS, Office of the Special Rapporteur for Freedom of Expression, ‘Libertad de expresión: A 30 años de la Opinión Consultiva sobre la colegiación obligatoria de periodistas’ (Freedom of expression: 30 years after the Advisory Opinion on the compulsory registration of journalists) (2017), www.oas.org/es/cidh/expresion/docs/publicaciones/OC5_ESP.PDF, 69.

32 Benkler, Faris, and Roberts propose the following definitions: (a) 'propaganda' and 'disinformation': manipulating and misleading people intentionally to achieve political ends; (b) 'network propaganda': the ways in which the architecture of a media ecosystem makes it more or less susceptible to disseminating these kinds of manipulations and lies; and (c) 'misinformation': publishing wrong information without meaning to be wrong or having a political purpose in communicating false information (Benkler, Faris, and Roberts, Network Propaganda, p. 24).

33 Molina et al., ‘“Fake news” is not simply false information’.

34 Information that is intentionally false, often in the form of malicious stories propagating conspiracy theories. Although this type of content shares characteristics with polarised and sensationalist content, where information can be characterised as highly emotional and highly partisan, it differs in important features.

35 Commentary and other similar editorial pieces are different from real news in that the journalist does not abide by principles of opinion-free reporting typically seen in hard news stories.

36 Misinformation, or unintentional false reporting from professional news media organisations.

37 This content is not completely false but is characterised by its ‘goodness-of-fit with a particular ideology’. A common strategy utilised by polarised content is the use of highly emotional and inflammatory content. These assertions typically lack evidence and are based on appeals to emotion and pre-existing attitudes.

38 An intentionally false story meant to be perceived as unrealistic, which uses journalistic style as a parody of the style, to mock issues and individuals in the news, or to disseminate a prank.

39 Persuasive information is decomposed into native advertising (defined as promotional material of persuasive intent, often masked as a news article) and promotional (political or non-political) content.

40 Citizen journalism includes two categories: blogs and websites from citizens, civil societies, and organisations with content originally created by such users, on one side, and on the other, subsites of professional journalism sites providing a forum for citizen reporting (e.g., CNN’s iReport). Features to distinguish this content from other online information include message aspects such as not adhering to journalistic style of reporting and verification. Additionally, its contents are more emotionally driven and subjectively reported. Last, there are essential structural components. For instance, the URL and ‘about us’ section are giveaways of the site being a blog, personal site, or a site specifically meant for citizen reporting.

41 Molina et al., ‘“Fake news” is not simply false information’.

43 ‘Study on the impact of fake news in Spain’, carried out by market research company ‘Simple Lógica’ and the investigation group in psychology of testimony of the Complutense University of Madrid, www.amic.media/media/files/file_352_3350.pdf.

44 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, ‘Disinformation and freedom of opinion and expression’, UN Doc. A/HRC/47/25, 13 April 2021.

45 UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015.

46 D. Teyssou, J. Posetti, and S. Gregory, ‘Monitoring and fact-checking responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 66–86.

47 D. Teyssou, J. Posetti, and S. Gregory, ‘Investigative responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 87–95.

48 T. Meyer et al., ‘Legislative, pre-legislative and policy responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 97–112.

49 T. Meyer, C. Hanot, and J. Posetti, ‘National and international counter-disinformation campaigns’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 113–22.

50 D. Teyssou, J. Posetti, and K. Bontcheva, ‘Electoral-specific responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 123–39.

51 T. Meyer, C. Hanot, and J. Posetti, ‘Curatorial responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 141–68.

52 S. Gregory et al., ‘Technical/algorithmic responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 169–89.

53 K. Bontcheva, ‘Demonetisation and advertising-linked responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 190–201.

54 J. Posetti, ‘Normative and ethical responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 203–17.

55 K. Bontcheva, J. Posetti, and D. Teyssou, 'Educational responses', in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 218–30.

56 D. Maynard, D. Teyssou, and S. Gregory, ‘Empowerment & credibility labelling responses’, in UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, pp. 231–47.

58 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, ‘Disinformation and freedom of opinion and expression during armed conflicts’, UN Doc. A/77/288, 12 August 2022.

59 OAS, ‘Joint declaration on freedom of expression and fake news, disinformation and propaganda’ (2017), www.oas.org/en/iachr/expression/showarticle.asp?artID=1056&lID=1.

60 OAS, ‘Joint declaration on media independence and diversity in the digital age’ (2018), www.oas.org/en/iachr/expression/showarticle.asp?artID=1100&lID=1.

61 OAS, ‘Twentieth anniversary joint declaration: challenges to freedom of expression in the next decade’ (2019), www.oas.org/en/iachr/expression/showarticle.asp?artID=1146&lID=1.

62 OAS, ‘Joint declaration on freedom of expression and elections in the digital age’ (2020), www.oas.org/en/iachr/expression/showarticle.asp?artID=1174&lID=1.

63 OAS, ‘Joint declaration on politicians and public officials and freedom of expression’ (2021), www.oas.org/en/iachr/expression/showarticle.asp?artID=1214&lID=1.

64 New York Times Co. v. Sullivan, 376 US 254 (1964).

65 In the same way, the Joint Declaration on Freedom of Expression and Fake News, Disinformation and Propaganda states: ‘5. Journalists and Media Outlets: a. The media and journalists should, as appropriate, support effective systems of self-regulation whether at the level of specific media sectors (such as press complaints bodies) or at the level of individual media outlets (ombudsmen or public editors) which include standards on striving for accuracy in the news, including by offering a right of correction and/or reply to address inaccurate statements in the media. b. Media outlets should consider including critical coverage of disinformation and propaganda as part of their news services in line with their watchdog role in society, particularly during elections and regarding debates on matters of public interest.’

66 I/A Court HR, Ríos et al. v. Venezuela, ‘Preliminary objections, merits, reparations and costs, judgment of 28 January 2009’, Series C No. 194, www.corteidh.or.cr/docs/casos/articulos/seriec_194_esp.pdf, and Perozo et al. v. Venezuela, ‘Preliminary objections, merits, reparations and costs, judgment of 28 January 2009’, Series C No. 195, www.corteidh.or.cr/docs/casos/articulos/seriec_195_esp.pdf.

68 IACHR, Resolution 1/2020, Pandemic and Human Rights in the Americas, 10 April 2020, www.oas.org/es/cidh/decisiones/pdf/Resolucion-1-20-es.pdf, para. 34.

69 Article 286 of the Colombian Penal Code punishes whoever ‘consigns a falsehood or totally or partially conceals the truth’. Also protecting public faith, the Argentine Penal Code contemplates, in article 293, the crime of ideological falsification of a public instrument by ‘whoever inserts or causes to be inserted in a public instrument false statements concerning a fact that the document must prove, such that damage may result’. Brazil: article 299 of the Penal Code. Chile: article 193. Costa Rica: article 360. Guatemala: article 322. Mexico: article 243 of the Penal Code, and article 63 of the General Law of Administrative Responsibilities of 2016, which creates the offence of contempt for anyone who answers requests with false information, fails to respond, or deliberately delays a response. Panama: article 366 of the Penal Code. Paraguay: articles 246 and 250 of the Penal Code. Peru: article 428 of the Penal Code. Dominican Republic: articles 145 to 147 of the Penal Code.

70 Argentina: Law 19,549 on administrative procedure, articles 7 and 14. Colombia: article 88 of the Code of Administrative Procedure and Administrative Litigation, on the presumption of legality and veracity. Costa Rica: article 132 of the General Law of Public Administration. Peru: article 51 of the Law of General Administrative Procedure. Dominican Republic: articles 9, 10 and 14 of Law 107 of 2013, which set out the presumption of validity of administrative acts.

71 Argentina: article 23 of the Framework Law for the Regulation of National Public Employment (No. 25,164) and article 2 of the Law of Ethics in the Exercise of Public Function. Uruguay: article 13 of Law No. 19,823. Chile: Law 20,880 on Probity in the Civil Service and Prevention of Conflicts of Interest regulates the principle of probity. Colombia regulates ‘honesty’ in the Integrity Code of the Colombian Public Service. Mexico: article 12 of the Code of Ethics of Public Servants of the Federal Government and article 7 of the General Law of Administrative Responsibilities of 18 July 2016. Panama: article 8.4 of the Code of Conduct for Public Servants of the Comptroller General. Peru: articles 6.2 and 6.5 of Law 27,815 require probity and truthfulness.

72 Thus, the Ibero-American Charter of Ethics and Integrity in the Public Function, approved in July 2018 by the XVIII Ibero-American Conference of Ministers of Public Administration and State Reform, expressly indicates in its Preamble that the Charter seeks to promote a solid system of integrity that strengthens what the signatories understood to be common practice in Ibero-American administrations: the honest behaviour of public servants. One of the objectives detailed in the document is to ‘Promote the integrity of those responsible and public servants’ (https://clad.org/wp-content/uploads/2020/10/1-Carta-Iberoamericana-de-%C3%89tica-e-Integridad-en-la-Funci%C3%B3n-P%C3%BAblica-2018-CLAD.pdf). For its part, the UN Office on Drugs and Crime, under its Education for Justice initiative and in line with its Global Programme for the Implementation of the Doha Declaration, prepared the document ‘Public Integrity and Ethics’, which states: ‘The integrity of the public sector – or public integrity – refers to the use of powers and resources entrusted to the public sector effectively, honestly and for public purposes.’ Additional related ethical standards that the public sector is expected to uphold include transparency, accountability, efficiency, and competence. UN staff members, for example, must ‘maintain the highest standards of efficiency, competence and integrity’, where integrity is defined in the United Nations Staff Regulations and Rules to include, but not be limited to, ‘probity, impartiality, fairness, honesty and truthfulness in all matters that affect their work and status’ (Staff Regulations of the United Nations, UN Doc. ST/SGB/2023/1, 1 January 2023, Regulation 1.2(b)).

73 Argentina: Resolution 20/2005 of the Ministry of Health and Environment and Provision 4980/2005 of the ANMAT. Colombia: Consumer Statute, Law 1480 of 2011. Costa Rica: article 113 of Law No. 7472 on the Promotion of Competition and Effective Defense of the Consumer and its respective regulations. Guatemala: articles 15 to 17 of the Advertising Ethics Code, a self-regulation instrument of the private sector. Panama: articles 161 and 171 of Executive Decree 189 of 1999 for advertising broadcast on radio and television. Peru: articles 3 and 4 of Law No. 27808, Law of Transparency and Access to Public Information, and Title V (Penalty Regime) and article 121 of Law No. 26842, General Health Law. Dominican Republic: Resolution 016-2014 and Technical Regulations of the Ministry of Public Health.

74 There were complaints about manoeuvres of this type in the presidential elections in Mexico (2000 and 2006), Colombia (2014), and the Dominican Republic (2015), and in the referendum to modify the Bolivian Constitution regarding presidential re-election (2016), where recourse was had to unsubstantiated allegations of illegal campaign financing, corruption in the award of public works contracts, or matters relating to the candidates’ private lives.

75 Mexico: article 247.2 of the General Law of Electoral Institutions and Procedures. Argentina: article 140 of the National Electoral Code, in addition to a 2018 extraordinary agreement of the National Electoral Chamber creating a ‘Register of social media accounts and official websites of candidates, political groups and highest authorities’. Honduras: articles 146 and 148 of the Electoral and Political Organizations Law. Brazil: article 9 of Resolution 23.610/2019 of the Superior Electoral Court and article 323 of the Electoral Code. Peru: article 42 of Law No. 28094, Law of Political Organizations.

76 UNESCO, ITU, and Broadband Commission for Sustainable Development, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression (Geneva: ITU, 2020), https://unesdoc.unesco.org/ark:/48223/pf0000379015, p. 263.

77 C. Sunstein, Liars: Falsehoods and Free Speech in an Age of Deception (Oxford: Oxford University Press, 2021), pp. 131, 133.

78 OAS, ‘Joint declaration on freedom of expression and gender justice’ (2022), www.oas.org/en/iachr/expression/showarticle.asp?artID=1233&lID=1.

79 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, ‘Disinformation and freedom of opinion and expression during armed conflicts’, UN Doc. A/77/288, 12 August 2022.

80 European Parliament resolution of 15 June 2017 on online platforms and the digital single market (2016/2276(INI)), EU Doc. P8_TA(2017)0272, para. 36.

81 European Council, Conclusions of the European Council meeting of 22 March 2018, EU Doc. EUCO 1/18, 23 March 2018, para. 7.

82 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Tackling online disinformation: a European approach’, EU Doc. COM/2018/236 final (CELEX 52018DC0236), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018DC0236.

83 Report from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the implementation of the Communication ‘Tackling online disinformation: a European approach’, EU Doc. COM/2018/794 final.

84 Ibid., section 2.1.1.

85 Ibid., section 2.1.2.

86 Ibid., section 2.1.3.

87 Ibid., section 2.1.4.

88 Ibid., section 2.2.

89 Ibid., section 2.3.

90 Ibid., section 2.4.

91 Ibid., section 2.5.

92 European Commission, ‘Shaping Europe’s digital future. The 2022 Code of Practice on Disinformation’, https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.

93 C. Botero Marino, ‘La regulación estatal de las llamadas “noticias falsas” desde la perspectiva del derecho a la libertad de expresión’ (State regulation of so-called ‘fake news’ from the perspective of the right to freedom of expression), in OAS, Office of the Special Rapporteur for Freedom of Expression, ‘Libertad de expresión: A 30 años de la Opinión Consultiva sobre la colegiación obligatoria de periodistas’ (Freedom of expression: 30 years after the Advisory Opinion on the compulsory registration of journalists) (2017), www.oas.org/es/cidh/expresion/docs/publicaciones/OC5_ESP.PDF, pp. 65–83.

94 Sunstein, Liars, pp. 131, 133.

95 The Internet Balancing Formula is a mathematical instrument designed to make the balancing of conflicting human rights online more rational and transparent. It is based on the relative weight and intensity of the conflicting rights, whose numerical values are arrived at by applying mathematical scales to various input elements. The formula is easy to use and could be applied globally by private online stakeholders (M. Susi, ‘The internet balancing formula’ (2019) 25(2) European Law Journal 198–212).
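By way of illustration only, the mechanics of a weight-formula-style calculation of this kind can be sketched in a few lines of code. The sketch below is an assumption-laden toy, not Susi’s published formula: the variable names, the 1–9 and 0–1 scales, and the simple product-and-ratio structure are all invented for the example, while the actual input elements and scales are those set out in the cited article.

```python
# Illustrative sketch of a weight-formula-style balancing calculation in the
# spirit of the Internet Balancing Formula. All names, scales, and the
# product/ratio structure are assumptions for this example; the real input
# elements and scales are those defined in Susi (2019).

from dataclasses import dataclass


@dataclass
class RightAtStake:
    """One side of a conflict between two rights (e.g., expression v. privacy)."""
    abstract_weight: int   # assumed 1-9 scale: abstract importance of the right
    interference: int      # assumed 1-9 scale: intensity of the interference
    reliability: float     # assumed 0-1 scale: reliability of factual premises

    def concrete_weight(self) -> float:
        # Assumed aggregation: a simple product of the three inputs.
        return self.abstract_weight * self.interference * self.reliability


def balance(claim: RightAtStake, counter: RightAtStake) -> str:
    """A ratio above 1 favours the claimed right; near 1 signals a stalemate."""
    ratio = claim.concrete_weight() / counter.concrete_weight()
    if ratio > 1:
        return f'claimed right prevails (ratio {ratio:.2f})'
    if ratio < 1:
        return f'countervailing right prevails (ratio {ratio:.2f})'
    return 'stalemate: left to the discretion of the decision-maker'


# Example: a platform moderator weighing free expression against privacy.
expression = RightAtStake(abstract_weight=7, interference=6, reliability=0.8)
privacy = RightAtStake(abstract_weight=7, interference=4, reliability=0.9)
print(balance(expression, privacy))  # claimed right prevails (ratio 1.33)
```

Whatever the exact inputs, the attraction of such a structure is the one Susi identifies: the factors are explicit and the arithmetic reproducible, so the same facts yield the same outcome regardless of which private stakeholder runs the calculation.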

96 Botero Marino, ‘La regulación estatal de las llamadas “noticias falsas” desde la perspectiva del derecho a la libertad de expresión’, pp. 81–2.

97 C. Cortés and L. F. Isaza, ‘The new normal? Disinformation and content control on social media during COVID-19’, CELE, Palermo University, April 2021, www.palermo.edu/Archivos_content/2021/cele/papers/Disinformation-and-Content-Control.pdf.

98 R. Álvarez Ugarte and A. Del Campo, ‘Fake news on the Internet: actions and reactions of three platforms’, CELE, Palermo University, February 2021, www.palermo.edu/Archivos_content/2021/cele/papers/Fake-news-on-the-Internet-2021.pdf.

99 Ibid. and Cortés and Isaza, ‘The new normal?’.

100 In September 2020, Facebook deleted, for violation of its policy against incitement to violence, a picture in which a candidate for the United States Congress from the state of Georgia (who was later elected) posed with a rifle alongside photographs of three Democratic politicians. That same month, Facebook deleted a post by a Louisiana congressman promising the use of deadly force against protesters. In contrast, the platform kept online a message from President Trump, also in the context of a public demonstration, in which he argued, almost threateningly, that ‘when the looting starts, the shooting starts’. COVID-19 likewise shed light on inconsistencies in the application of these policies: a video in which Jair Bolsonaro spoke in favour of the use of hydroxychloroquine to treat the virus was removed from Facebook in March 2020, but a Trump post with the same message was kept up. However, another message in which Trump compared COVID-19 to the flu was removed from Facebook.

101 BBC, ‘Facebook to tag “harmful” posts as boycott widens’, 27 June 2020, www.bbc.com/news/business-53196487.

104 J. C. Wong, ‘Twitter lays out rules for world leaders amid pressure to rein in Trump’, 16 October 2019, The Guardian, www.theguardian.com/us-news/2019/oct/15/twitter-explains-how-it-handles-world-leaders-amid-pressure-to-rein-in-trump.

105 A. Hern, ‘Twitter to remove harmful fake news about coronavirus’, 19 March 2020, The Guardian, www.theguardian.com/world/2020/mar/19/twitter-to-remove-harmful-fake-news-about-coronavirus.

106 T. Porter, ‘Twitter deleted a tweet by Rudy Giuliani for spreading coronavirus misinformation’, 29 March 2020, Business Insider, www.businessinsider.com/coronavirus-twitter-deletes-giuliani-tweet-for-spreading-misinformation-2020-3.

107 G. Kolata and R. C. Rabin, ‘“Don’t be afraid of Covid”, Trump says, undermining public health messages’, 5 October 2020, New York Times, www.nytimes.com/2020/10/05/health/trump-covid-public-health.html.

108 While a couple of tweets from the then president of the US, Donald Trump, did not warrant any response from Twitter – a decision that would be explained by reasons of public interest – the platform did act on a tweet from the former president of Brazil, Jair Bolsonaro, and even temporarily suspended the account of Trump’s lawyer, Rudy Giuliani, for similar conduct. A thread published by President Trump on 21 March 2020 promoting unproven medical treatments against COVID-19 was left up, whereas a tweet by Giuliani promoting the use of hydroxychloroquine, published on 27 March 2020, was deleted by Twitter and earned him the temporary suspension of his account. In November 2020, the company’s vice-president argued, in a hearing before the US Congress, that Donald Trump would no longer enjoy these protections once his term ended, which is questionable.

109 Y. Roth and N. Pickles, ‘Updating our approach to misleading information’, 11 May 2020, X blog, https://blog.x.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.

111 YouTube, ‘Community guidelines’, www.youtube.com/howyoutubeworks/policies/community-guidelines/.
