To manage the diversity of regulatory visions, States may, to some extent, harmonize substantive regulation—eliminating diversity. This is less likely than States determining, unilaterally or multilaterally, to develop manageable rules of jurisdiction, so that their regulation applies only in limited circumstances. The fullest realization of this “choice of law” solution would involve geoblocking or other technology that divides up regulatory authority according to a specified, and perhaps agreed, principle. Geoblocking may be costly and ultimately porous, but it would allow different communities to effectuate their different visions of the good in the platform context. To the extent that the principles of jurisdiction are agreed, and are structured to be exclusive, platforms would have the certainty of knowing the requirements under which they must operate in each market. Of course, different communities may remain territorial states, but given the a-territorial nature of the internet, it may be possible for other divisions of authority and responsibility to develop. Cultural affinity, or political perspective, may be more compelling as an organizational principle to some than territorial co-location.
Global platforms present novel challenges. They serve as powerful conduits of commerce and global community, yet their power to influence political and consumer behavior is enormous. Their responsibility for the use of this power – for their content – is statutorily limited by national laws such as Section 230 of the Communications Decency Act in the US. National efforts to demand and guide appropriate content moderation, and to avoid private abuse of this power, are in tension with the concern in liberal states to avoid excessive government regulation, especially of speech. Diverse and sometimes contradictory national rules responding to these tensions threaten to splinter platforms and reduce their utility to both wealthy and poor countries. This edited volume sets out to respond to the question of whether a global approach can be developed to address these tensions while maintaining or even enhancing the social contribution of platforms.
The study of dis/misinformation is currently in vogue, but there is much ambiguity about what the problem precisely is, and much confusion about the key concepts brought to bear on it. My aim in this paper is twofold. First, I will attempt to precisify the (dis/mis)information problem, roughly construing it as anything that undermines the “epistemic aim of information.” Second, I will use this precisification to provide a new, grounded account of dis/misinformation. To achieve the latter, I will critically engage with three of the more popular accounts of dis/misinformation: (a) harm-based, (b) misleading-based, and (c) ignorance-based accounts. Each engagement will lead to further refinement of these key concepts, ultimately paving the way for my own account. Finally, I offer my own information hazard-based account, which distinguishes between misinformation as content, misinformation as activity, and disinformation as activity. By introducing this distinction between content and activity, it will be shown that my account is erected on firmer conceptual/ontological grounds, overcoming many of the difficulties that have plagued previous accounts, especially the problem of the proper place of intentionality in understanding dis/misinformation. This promises to add clarity to dis/misinformation research and to prove more useful in practice.
Based on declassified documents from the archives of the Czechoslovak intelligence agency (StB) and the contemporary press, this article delves into the working mechanisms of the Communist secret services in Latin America in the 1960s. Specifically, focusing on the case of the newspaper Época, it deals with the production of articles aimed at discrediting the capitalist states and their publication in the press through local collaborators. The link between the StB and the Uruguayan newspaper, which claimed to be politically and economically independent, was pragmatic and, for a time, helped both parties to achieve their political ends. While the StB managed to obtain a space where it could carry out its operations, Época's motivations were not only ideological but also economic and related to the urgent desire of the non-Communist Left to get funding for its political activities.
Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health actively blocked disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or misinterpreted data) spread with harmful intent. In this study, that question is answered by reflecting on what is needed for us to honor public reason: reasonableness, the willingness to engage properly in public discourse, and trust in the institutions of liberal democracy.
This chapter reviews the regulation of disinformation from an African human rights law perspective, focusing on the right to freedom of expression and the right to vote. It provides an overview of the African regional law framework, specifically the African Charter on Human and Peoples’ Rights of 1981 (the African Charter) and corresponding jurisprudence. The chapter also analyses the way in which freedom of expression and disinformation laws have been applied in African countries, the aim being to contextualize and illustrate how African regional law plays out at the domestic level, with an emphasis on the position in South Africa.
Despite Kenya’s transformative and progressive 2010 Constitution, the country is still grappling with a hybrid democracy, displaying both authoritarian and democratic traits. Scholars attribute this status to several factors, a prominent one being the domination of the political order, and the wielding of political power, by a few individuals and families with historical ties to patronage networks and informal power structures. Persistent electoral fraud, widespread corruption, media harassment, a weak rule of law and governance challenges further contribute to the hybrid democracy status. While the 2010 Constitution aims to restructure the state and enhance democratic institutions, the transition process is considered incomplete, especially since the judiciary, in exercising judicial review, is often faced with the difficult task of countering democratic regression. Moreover, critical institutions such as the Independent Electoral and Boundaries Commission (IEBC) have faced criticism due to corruption scandals and perceptions of partisanship, eroding public trust in their ability to oversee fair elections effectively.
A broad consensus has emerged in recent years that although rumours, conspiracy theories and fabricated information are far from new, the changed structure and operating mechanisms of today’s public sphere confront us with something much more challenging than anything to date, and that disinformation on this massive scale can even pose a threat to the foundations of democracy. However, the consensus extends only to this statement; opinions differ considerably about the causes of the increased threat of disinformation, whom to blame for it, and the most effective means to counter it. From the perspective of freedom of speech, the picture is not uniform either, and there has been much debate about the most appropriate remedies. It is commonly argued, for example, that the free speech doctrine of the United States does not allow for effective legal action against disinformation, while in Europe the legislator has much more room for manoeuvre.
When analysing disinformation, commentators often focus on major platforms and their influence on content circulation. Some also examine institutional media, especially broadcasting. Both are relevant: platforms and media alike form part of the communicative infrastructure underlying public speech. Whatever the focus, there is an almost endless examination of issues and suggestions regarding what to do about disinformation. Commentary defines false or misleading information in different ways, compares it with historic practices of propaganda and persuasion, considers the emergence of large language models and the content they could generate, documents varied legal responses, and considers what should be done. Here, I examine something that is relevant to that work but often not considered directly.
In the digital age, the landscape of information dissemination has undergone a profound transformation. The traditional boundaries between information and news have become increasingly blurred as technology allows anyone to create and share content online. The once-exclusive realm of authoritative media outlets and professional journalists has given way to a decentralized public square, where individuals can voice their opinions and reach vast audiences regardless of mainstream coverage. The evolution of the digital age has dismantled conventional notions of journalism and reshaped how news is obtained and interpreted. This shift has paved the way for the proliferation of fake news and online disinformation. The ease with which false information can be fabricated, packaged convincingly and rapidly disseminated to a wide audience has contributed to the rise of fake news. This phenomenon gained global attention during the 2016 US presidential election, prompting nations worldwide to seek strategies for tackling the issue.
State responses to the recent ‘crisis’ caused by misinformation on social media have mainly aimed to impose liability on those who facilitate its dissemination. Internet companies, especially large platforms, have deployed numerous techniques, measures and instruments to address the phenomenon. However, little has been done to assess the importance of who originates disinformation and, in particular, whether some originators of misinformation are acting contrary to their preexisting obligations to the public. My view is that it would be wrong to attribute a central or exclusive role in the new disinformation crisis affecting the information ecosystem to social media alone. I also believe that disinformation has different effects depending on who promotes it – particularly whether it is promoted by a person with a public role. Importantly, the law of many countries already reflects this distinction – across a variety of contexts, public officials are obligated both to affirmatively provide certain types of information and to take steps to ensure that information is true. In contrast, private individuals rarely bear analogous obligations; instead, the law often protects their misstatements, in order to prevent censorship and promote public discourse.
The 2024 presidential election in the USA demonstrates, with unmistakable clarity, that disinformation (intentionally false information) and misinformation (unintentionally false information disseminated in good faith) pose a real and growing existential threat to democratic self-government in the United States – and elsewhere too. Powered by social media outlets like Facebook (Meta) and Twitter (X), it is now possible to propagate empirically false information to a vast potential audience at virtually no cost. Coupled with the use of highly sophisticated algorithms that carefully target the recipients of disinformation and misinformation, voter manipulation is easier to accomplish than ever before – and frighteningly effective to boot.
Many legal and political commentators dubbed Donald Trump’s false claim that he was the actual victor of the 2020 American presidential election, ‘the Big Lie’. No matter how he complained and dissembled, he lost. After losing the 2020 election, Trump went on a fundraising binge, asking his supporters to give to his legal defense fund so that he could litigate the results of the 2020 election, which he fraudulently claimed he had won. According to the House of Representatives’ January 6 Select Committee, this fund did not exist. As Select Committee member Congresswoman Zoe Lofgren put it, ‘the Big Lie was also a big rip-off’. Because the 2020 presidential election was not stolen, and the legal defense fund he touted was nonexistent, Trump’s post-2020 election fundraising was a fraud within a fraud – giving rise to a reasonable argument that it violated the federal wire fraud statute and also constituted common law fraud.
The issue of mass disinformation on the Internet is a long-standing concern for policymakers, legislators, academics and the wider public. Disinformation is believed to have had a significant impact on the outcome of the 2016 US presidential election. Concern about the threat of foreign – mainly Russian – interference in the democratic process is also growing. The COVID-19 pandemic, which reached global proportions in 2020, gave new impetus to the spread of disinformation, which even put lives at risk. The problem is real and serious enough to force all parties concerned to reassess the previous European understanding of the proper regulation of freedom of expression.
The ‘marketplace of ideas’ metaphor tends to dominate US discourse about the First Amendment and free speech more generally. The metaphor is often deployed to argue that the remedy for harmful speech ought to be counterspeech, not censorship; listeners are to be trusted to sort the wheat from the chaff. This deep skepticism about the regulation of even harmful speech in the USA raises several follow-on questions, including: How will trustworthy sources of information fare in the marketplace of ideas? And how will participants know whom to trust? Both questions implicate non-regulatory, civil-society responses to mis- and disinformation. This chapter takes on these questions, considering groups and institutions that deal with information and misinformation. Civil society groups cannot stop the creation of misinformation – but they can decrease its potential to proliferate and to do harm. For example, advocacy groups might be directly involved with fact-checking and debunking misinformation, or with advancing truthful or properly contextualized counter-narratives. And civil society groups can also help strengthen social solidarity and reduce the social divisions that often serve as fodder for and drivers of misinformation.
In April 2023, the Government of India amended a set of regulations called the Information Technology Rules, which primarily dealt with issues around online intermediary liability and safe harbour. Until 2023, these rules required online intermediaries to take all reasonable efforts to ensure that ‘fake, false or misleading’ information was not published on their platforms. Previous iterations of these rules had already been challenged before the Indian courts for imposing a disproportionate burden on intermediaries, and having the effect of chilling online speech. Now, the 2023 Amendment went even further: it introduced an entity called a ‘Fact Check Unit’, to be created by the government. This government-created unit would flag information that – in its view – was ‘fake, false or misleading’ with respect to ‘the business of the central government’. Online intermediaries were then obligated to make reasonable efforts to ensure that any such flagged information would not be on their platforms. In practical terms, what this meant was that if intermediaries did not take down flagged speech, they risked losing their safe harbour (guaranteed under the Information Technology Act).
Chile’s regulation of fake news dates back nearly a century. The initial instance occurred in 1925 during a constitutional crisis that resulted in the drafting of a new constitution. At that time, a de facto government issued a decree making it illegal to publish and distribute fake news. The second regulatory milestone occurred during the dictatorship of General Augusto Pinochet with the inclusion of provisions related to defamation in the 1980 constitution. Defamation involved spreading false information through mass media to unjustly tarnish someone’s reputation. Upon the restoration of democracy in Chile in 1990, these stipulations were permanently abolished from the legal system. Since 2001, the judicial pursuit of disinformation in Chile has been limited to exceptional means such as the State Security Law or, indirectly, through the right to rectification.
In today's digital age, the spread of dis- and misinformation across traditional and social media poses a significant threat to democracy. Yet repressing political speech in the name of truth can also undermine democratic values. This volume brings together prominent legal scholars from democracies worldwide to explore and evaluate different regulatory approaches for addressing this complex problem – all taking into account that the cure must not be worse than the disease. Using a comparative lens, the book offers important and novel insights into methods ranging from national regulation of politicians' speech to empowering civil-society groups that are well-positioned to blunt the effects of disinformation and misinformation. The book also provides solutions-oriented recommendations for policymakers, judges, legal practitioners, and scholars seeking to promote democratic values by encouraging free political speech while combatting disinformation and misinformation. This title is also available as Open Access on Cambridge Core.
The digital revolution has transformed the dissemination of messages and the construction of public debate. This article examines the disintermediation and fragmentation of the public sphere by digital platforms. Disinformation campaigns, which aim to assume the power of determining a truth alternative to reality, highlight the need to enhance the traditional view of freedom of expression as a negative freedom with an institutional perspective. The article argues that freedom of expression should be seen as an institution of freedom, an organizational space leading to a normative theory of public discourse. This theory legitimizes democratic systems and requires proactive regulation to enforce its values.
Viewing freedom of expression as an institution changes the role of public power: it should not be limited to abstention but instead carries a positive obligation to regulate the spaces where communicative interactions occur. The article discusses how this regulatory need led to the European adoption of the Digital Services Act (DSA) to correct digital platforms through procedural constraints. Despite some criticisms, the DSA establishes a foundation for a transnational European public discourse aligned with the Charter of Fundamental Rights and member states’ constitutional traditions.
Despite the attention paid to the issue of misinformation in recent years, few studies have examined citizens’ support for measures designed to address it. Using data collected during the 2022 Quebec elections and block-recursive models, this article shows that support for interventions against misinformation is generally high, but that individuals with a right-wing ideology, those who support the Parti conservateur du Québec, and those who lack trust in the media and in scientists are more likely to oppose them. Those who are not concerned about the issue, who prioritize the protection of freedom of expression, or who believe false information are also less favourable to such measures. The results suggest that depoliticizing the issue of misinformation and working to strengthen trust in institutions could increase the perceived legitimacy and effectiveness of our response to misinformation.