This chapter examines conservative attacks on social media and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular those against medical disinformation and hate speech, are more likely to affect conservative content than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
With formal international organizations (IOs) facing gridlock and informal IOs proliferating, cooperation in the twenty-first century looks different than it did in previous eras. Global governance institutions today also face additional challenges, including a fragmented information environment where publics are increasingly vulnerable to misinformation and disinformation. What do these trends portend for international politics? One way to answer this question is to return to a core ingredient of a well-functioning IO—information provision—and ask how such changes affect efficiency. Viewed through this lens, we see decline in some arenas and adaptation in others. Formal IOs are struggling to retain relevance as their weak policy responses and ambiguous rules create space for competing signals. The proliferation of informal institutions, on the other hand, may represent global governance evolution, as these technocratic bodies are often well-insulated from many political challenges. Yet even if global governance retains functionality, the legitimacy implications of such trends are troubling. IO legitimacy depends in part on process, and from this standpoint, the informational gains of informal governance must be weighed against losses of accountability and transparency. Ultimately, evaluating the normative implications of these trends requires making judgments about the preferred legitimizing principles for global governance.
The first goal of this chapter is to argue that the press as an institution is entitled to special solicitude under the First Amendment, not only because it is textually specified in the Constitution or because it serves important roles such as checking public and private power, but because it can contribute to the marketplace of ideas in ways that a healthy democracy needs. In other words, the press as an institution can provide an important link between the First Amendment’s epistemic and democratic values. The chapter’s second goal is to provide a rough and preliminary sketch of the relationship between press freedom, violence, and public discourse. Some elements seem straightforward enough. Violence and harassment obstruct the press’s function, including its traditional role in constituting and shaping public discourse. Distrust, disinformation, violence, and press degradation exist in a mutually reinforcing ecosystem. And even as violence shapes the media, the media shapes the social conditions, understandings, and practice of violence in return. Journalism, albeit in different ways than legal interpretation, “takes place on a field of pain and death,” to repurpose Robert Cover’s famous phrase – not only in describing it but in making it real. This, it should go without saying, is no excuse for violence against media members. The point is, rather, that a healthy press can be a bulwark not only for knowledge and democracy but against the kinds of private and public violence that threaten both.
Disinformation is a growing epistemic threat, yet its connection to understanding remains underexplored. In this paper, I argue that understanding – specifically, understanding how things work and why they work the way they do – can, all else being equal, shield individuals from disinformation campaigns. Conversely, a lack of such understanding makes one particularly vulnerable. Drawing on Simion’s (2023) characterization of disinformation as content that has a disposition to generate or increase ignorance, I propose that disinformation frequently exploits a preexisting lack of understanding. I consider an important objection – that since understanding is typically difficult to acquire, we might rely on deferring to experts. However, I argue that in epistemically polluted environments, where expertise is systematically mimicked, deference alone provides no reliable safeguard. I conclude by briefly reflecting on strategies for addressing these challenges, emphasizing both the need for promoting understanding and for cleaning up the epistemic environment.
The spread of disinformation, such as false and fabricated content, as amplified by the expansion of artificial intelligence systems, has captured the attention of policymakers on a global scale. However, addressing disinformation leads constitutional democracies towards questions about the scope of freedom of expression as the living core of a democratic society. If, on the one hand, this constitutional right has been considered a barrier to public authorities’ interferences to limit the circulation of disinformation, on the other hand, the spread of fabricated content and manipulative techniques, including deepfakes, has increasingly called liberal views into question. This constitutional challenge is further enriched by the role of online platforms which, by mediating speech in their online spaces, are essential tiles of a mosaic picturing the potential regulatory strategies and the limits of public enforcement to tackle disinformation. Within this framework, this work argues that the European constitutional approach to tackling disinformation has defined a unique model on a global scale. The European Union has developed a strategy that combines procedural safeguards, risk regulation, and co-regulation, as demonstrated by initiatives such as the Digital Services Act, the Strengthened Code of Practice on Disinformation, and the Artificial Intelligence Act. Positioned between liberal and illiberal models, the European approach proposes an alternative constitutional vision to address disinformation based on risk mitigation and the collaboration between public and private actors.
Social media has a complicated relationship with democracy. Although social media is neither democratic nor undemocratic, it is an arena where different actors can promote or undermine democratization. Democracy is built on a foundation of norms and trust in institutions, where elections are the defining characteristic of the democratic process. This chapter outlines two ways disinformation campaigns can undermine democratic elections’ ability to ensure fair competition, representation, and accountability. First, disinformation narratives try to influence elections by spreading false information about the voting process, or by targeting voters, candidates, or parties to alter the outcome. Second, disinformation undermines trust in the integrity of the electoral process (from the ability to have free and fair elections, to expectations about the peaceful transfer of power), which can then erode trust in democracy. Prior work on social media has often focused on foreign election interference, but electoral disinformation increasingly originates from domestic, not foreign, political actors. An important threat to democracy thus comes from within: disinformation about democratic elections created and shared by political leaders and elites, whose involvement increases the reach and perceived credibility of such false narratives.
This commentary examines the dual role of artificial intelligence (AI) in shaping electoral integrity and combating misinformation, with a focus on the 2025 Philippine elections. It investigates how AI has been weaponised to manipulate narratives and suggests strategies to counteract disinformation. Drawing on case studies from the Philippines, Taiwan, and India—regions in the Indo-Pacific with vibrant democracies, high digital engagement, and recent experiences with election-related misinformation—it highlights the risks of AI-driven content and the innovative measures used to address its spread. The commentary advocates for a balanced approach that incorporates technological solutions, regulatory frameworks, and digital literacy to safeguard democratic processes and promote informed public participation. The rise of generative AI tools has significantly amplified the risks of disinformation, such as deepfakes, and algorithmic biases. These technologies have been exploited to influence voter perceptions and undermine democratic systems, creating a pressing need for protective measures. In the Philippines, social media platforms have been used to spread revisionist narratives, while Taiwan employs AI for real-time fact-checking. India’s proactive approach, including a public misinformation tipline, showcases effective countermeasures. These examples highlight the complex challenges and opportunities presented by AI in different electoral contexts. The commentary stresses the need for regulatory frameworks designed to address AI’s dual-use nature, advocating for transparency, real-time monitoring, and collaboration between governments, civil society, and the private sector. It also explores the criteria for effective AI solutions, including scalability, adaptability, and ethical considerations, to guide future interventions. Ultimately, it underscores the importance of digital literacy and resilient information ecosystems in supporting informed democratic participation.
Referendums trigger both enthusiasm and scepticism among constitutional theorists. The positive case for the referendum emphasises its ability to give the people a consequential voice on salient decisions, its capacity to break political deadlock and enrich the political agenda, its educational civic role, as well as its anti-establishment and even radically democratic potential. The negative case, conversely, focuses on the referendum’s divisiveness, propensity to be manipulated by elites, and tendency to produce ill-informed decisions. Between these two poles are various attempts to evaluate the referendum as a complement to rather than a replacement for representative institutions, and to stipulate conditions for its proper institutionalisation. The spread of sophisticated disinformation campaigns and the growing interest in deliberative innovations such as mini-publics also raise new questions about referendum design, safeguards, and legitimacy. This chapter takes seriously the democratic case for the use of referendums while revisiting three areas of concern: the ambiguous place of referendums within democratic theory, including their relationship to direct, representative, and deliberative democracy; the complex interplay between referendums as majoritarian tools and minority rights; and the novel opportunities and distinct challenges to informed voter consent in the digital era, not least disinformation and fake news.
Disinformation and the spread of false information online have become a defining feature of social media use. While this content can spread in many ways, recently there has been an increased focus on one aspect in particular: social media algorithms. These content recommender systems provide users with content deemed ‘relevant’ to them but can be manipulated to spread false and harmful content. This chapter explores three core components of algorithmic disinformation online: amplification, reception and correction. These elements contain both unique and overlapping issues, and by examining them individually we can gain a better understanding of how disinformation spreads and the potential interventions required to mitigate its effects. Given the real-world harms that disinformation can cause, it is equally important to ground our understanding in real-world discussions of the topic. In an analysis of Twitter discussions of the term ‘disinformation’ and associated concepts, results show that while disinformation is treated as a serious issue that needs to be stopped, discussions of algorithms are underrepresented. These findings have implications for how we respond to security threats such as disinformation and highlight the importance of aligning policy and interventions with the public’s understanding of disinformation.
While peacekeeping operations have always been heavily dependent on host-state support and international political backing, changes in the global geopolitical and technological landscapes have presented new forms of state interference intended to influence, undermine, and impair the activities of missions on the ground. Emerging parallel security actors, notably the Wagner Group, have cast themselves as directly or implicitly in competition with the security guarantee provided by peacekeepers, while the proliferation of mis- and disinformation and growing cybersecurity vulnerabilities present novel challenges for missions’ relationships with host states and populations, operational security, and the protection of staff and their local sources. Together, these trends undermine missions’ efforts to protect civilians, operate safely, and implement long-term political settlements. This essay analyzes these trends and the dilemmas they present for in-country UN officials attempting to induce respect for international norms and implement their mandates. It describes nascent strategies taken by missions to maintain their impartiality, communicate effectively, and maintain the trust of those they are charged with protecting, and highlights early good practices for monitoring and analyzing this new operational environment, for reporting on and promoting human rights, and for operating safely.
The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of what is known as the phenomenon of hidden virality, which characterizes how unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until causing real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp’s sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp’s users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp’s affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially when combined with the fact that users cannot report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy task, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics. We bring attention to the fact that if WhatsApp does not take action to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep promoting its market advantages, leaving the burden of moderating content to fall on minoritized users.
There are two main ways Russian propaganda reaches Japan: (a) the social media accounts of official institutions, such as the Russian Embassy, or Russian state-linked media outlets, such as Sputnik, and (b) pro-Russian Japanese political actors who willingly (or unwillingly) spread disinformation and display a clear pro-Kremlin bias. These actors justify the Russian invasion of Ukraine and repeat the Russian view of the war with various objectives in mind, primarily serving their own interests. By utilizing corpus analysis and qualitative examination of social media data, this article explores how Russian propaganda and a pro-Russian stance are effectively connected with and incorporated into the discursive strategies of political actors of the Japanese Far-Right.
Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing extremist networks in ways that present many challenges to traditional party gatekeeping, institutional integrity, and other democratic principles. We present a more integrated framework that explains how digitally networked publics interact with more traditional party surrogates and electoral processes to bring once-scattered extremist factions into conservative parties. When increasingly reactionary parties gain power, they may push both institutions and communication processes in illiberal directions. We develop a model of communication as networked organization to explain how Donald Trump and the Make America Great Again (MAGA) movement rapidly transformed the Republican Party in the United States, and we point to parallel developments in other nations.
In 2022, Russia invoked Articles V and VI of the Biological and Toxin Weapons Convention (BTWC), requesting a formal meeting to discuss, and subsequent investigation of, alleged U.S.-funded biological weapons laboratories in Ukraine. Such allegations have been dismissed as false by scholars and diplomats alike, many of whom have argued that Russia’s actions represented an abuse of BTWC provisions and risked undermining the Convention. However, few scholars have assessed the implications of Russia’s ongoing efforts to level false allegations in BTWC meetings following the Article V and VI procedures. Using mixed-methods analysis of BTWC meeting recordings, transcripts, and documents, we assessed the volume, consequences, and framing of Russian false allegations at the BTWC Ninth Review Conference. Analysis revealed that discussion of Russian allegations took over three hours and contributed to a stunted Final Document. Additional potential consequences are discussed, including increased division among states parties and the erosion of nonproliferation norms.
In order to manage the issue of diversity of regulatory vision, States may, to some extent, harmonize substantive regulation—eliminating diversity. A likelier outcome is that States will determine, unilaterally or multilaterally, to develop manageable rules of jurisdiction, so that their regulation applies only in limited circumstances. The fullest realization of this “choice of law” solution would involve geoblocking or other technology that divides up regulatory authority according to a specified, and perhaps agreed, principle. Geoblocking may be costly and ultimately porous, but it would allow different communities to effectuate their different visions of the good in the platform context. To the extent that the principles of jurisdiction are agreed, and are structured to be exclusive, platforms would have the certainty of knowing the requirements under which they must operate in each market. Of course, different communities may remain territorial states, but given the a-territorial nature of the internet, it may be possible for other divisions of authority and responsibility to develop. Cultural affinity, or political perspective, may be more compelling as an organizational principle to some than territorial co-location.
Global platforms present novel challenges. They serve as powerful conduits of commerce and global community. Yet their power to influence political and consumer behavior is enormous. Their responsibility for the use of this power – for their content – is statutorily limited by national laws such as Section 230 of the Communications Decency Act in the US. National efforts to demand and guide appropriate content moderation, and to avoid private abuse of this power, are in tension with the concern in liberal states to avoid excessive government regulation, especially of speech. Diverse and sometimes contradictory national rules responding to these tensions on a national basis threaten to splinter platforms and reduce their utility to both wealthy and poor countries. This edited volume sets out to respond to the question of whether a global approach can be developed to address these tensions while maintaining or even enhancing the social contribution of platforms.
The study of dis/misinformation is currently in vogue, yet there is much ambiguity about what the problem precisely is, and much confusion about the key concepts that are brought to bear on this problem. My aim in this paper is twofold. First, I will attempt to precisify the (dis/mis)information problem, roughly construing it as anything that undermines the “epistemic aim of information.” Second, I will use this precisification to provide a new grounded account of dis/misinformation. To achieve the latter, I will critically engage with three of the more popular accounts of dis/misinformation, which are (a) harm-based, (b) misleading-based, and (c) ignorance-based accounts. Each engagement will lead to further refinement of these key concepts, ultimately paving the way for my own account. Finally, I offer my own information hazard-based account, which distinguishes between misinformation as content, misinformation as activity, and disinformation as activity. By introducing this distinction between content and activity, it will be shown that my account is erected on firmer conceptual/ontological grounds, overcoming many of the difficulties that have plagued previous accounts, especially the problem of the proper place of intentionality in understanding dis/misinformation. This promises to add clarity to dis/misinformation research and to prove more useful in practice.
Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health was actively blocking disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or misinterpreted data) with harmful intent. In this study, the question is answered by reflecting on what is needed for us to honor public reason: reasonableness, the willingness to engage in public discourse properly, and trust in the institutions of liberal democracy.
This chapter reviews the regulation of disinformation from an African human rights law perspective, focusing on the right to freedom of expression and the right to vote. It provides an overview of the African regional law framework, specifically the African Charter on Human and Peoples’ Rights of 1981 (the African Charter) and corresponding jurisprudence. The chapter also analyses the way in which freedom of expression and disinformation laws have been applied in African countries, the aim being to contextualize and illustrate how African regional law plays out at the domestic level, with an emphasis on the position in South Africa.