This chapter examines the phenomenon of disinformation in the digital era and its implications for freedom of expression. It explores how the rapid dissemination of false, manipulated, and misleading information – termed a ‘disinfodemic’ – poses threats to human rights, democracy, and public trust. The chapter outlines the historical roots of disinformation, the technological factors that enable it, and the responses by public and private actors to mitigate its harmful effects. The chapter differentiates between disinformation (intentional), misinformation (unintentional), and malinformation (genuine information used to harm), while highlighting their diverse forms, such as fake news, deepfakes, and conspiracy theories. Disinformation erodes public trust, affects electoral integrity, threatens public health, and harms individuals’ rights to information and privacy. The chapter emphasises the necessity of finding a balance between combating disinformation and preserving freedom of expression.
Since the Internet’s mainstream inception in the mid-1990s, the global telecommunications network has transformed from one that offered egalitarian promise to a network that often compromises democratic norms. And, as its conceptual linchpin, Internet “openness” provided the potential for technological innovation that could revolutionize both communication and commerce. The regulatory schemes introduced in the early Internet era thus sought to advance openness and innovation in the fledgling online world. But, while the times and technologies have since changed, the regulatory frameworks have largely remained the same. Accordingly, this review essay examines the Internet’s regulatory and cultural history to explore how the open values of the information age gave way to our current era of online disinformation. To do so, I reflect upon two early studies of the digital realm that have advanced discourse and scholarship on Internet openness: Lawrence Lessig’s Code and Other Laws of Cyberspace, Version 2.0, and Christopher Kelty’s Two Bits: The Cultural Significance of Free Software. Informed by these works, along with related digital scholarship, this essay argues that remembering the history of Internet openness reveals how the free access ideals of the Internet’s foundational age have been transformed by the renewed proprietary conventions of our current disinformation era.
As digital technologies transform governance, communication, and public life, human rights frameworks must adapt to new challenges and opportunities. This book explores four fundamental questions: how digitalisation changes the application of human rights, how human rights law can respond to the challenges of digital technology, how freedom of expression applies online, and how vulnerable groups are affected by digitalisation. With contributions from leading scholars, the book combines legal analysis with insights from ethics, environmental education, and medical research. It examines critical topics such as AI regulation, platform accountability, privacy protections, and disinformation, offering an interdisciplinary and international perspective. By balancing different viewpoints, this book helps readers navigate the complexities of human rights in the digital age. It is an essential resource for anyone seeking to understand and shape the evolving landscape of digital rights and governance. This title is also available as open access on Cambridge Core.
This scoping review investigates the complex landscape of fake news research, focusing on its link with attitudinal polarization and identifying key themes in the literature. Our objectives included mapping the main themes in fake news literature, analyzing how these themes connect, examining how polarization is conceptualized across studies, and assessing how fake news and attitudinal polarization are related. Through an extensive theme analysis of fake news research sourced from SCOPUS and Web of Science databases, we identified four major thematic areas: (1) the influence of technologies and platforms on fake news, (2) user engagement and behavioral responses to fake news, (3) fake news characteristics and their social consequences, and (4) strategies for fake news detection and countermeasures. In-depth analysis of 20 selected peer-reviewed papers revealed significant inconsistencies in the operationalization of both fake news and polarization, as well as in the definitions of polarization. The evidence on fake news’ influence on polarization is mixed: some studies indicate attitude reinforcement, while others find negligible effects. This scoping review highlights the need for standardized methodologies to clarify fake news’ role in attitudinal polarization and societal division, calling for a unified framework in fake news and polarization research to advance understanding of fake news’ societal impact.
This article argues that the environmental contexts of memory are vulnerable to Artificial Intelligence (AI)-generated distortions. By addressing the broader ecological implications for AI’s integration into society, this article looks beyond a sociotechnical dimension to explore the potential for AI to complicate environmental memory and its role in shaping human–environment relations. First, I address how the manipulation and falsification of memory risks undermining intergenerational transmission of environmental knowledge. Second, I examine how AI-generated blurring of boundaries between real and unreal can lead to collective inaction on environmental challenges. By identifying memory’s central role in addressing environmental crisis, this article places emerging debates on memory in the AI era in direct conversation with environmental discourse and scholarship.
This chapter examines conservative attacks on social media and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular those against medical disinformation and hate speech, are more likely to affect conservative content than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
With formal international organizations (IOs) facing gridlock and informal IOs proliferating, cooperation in the twenty-first century looks different than it did in previous eras. Global governance institutions today also face additional challenges, including a fragmented information environment where publics are increasingly vulnerable to misinformation and disinformation. What do these trends portend for international politics? One way to answer this question is to return to a core ingredient of a well-functioning IO—information provision—and ask how such changes affect efficiency. Viewed through this lens, we see decline in some arenas and adaptation in others. Formal IOs are struggling to retain relevance as their weak policy responses and ambiguous rules create space for competing signals. The proliferation of informal institutions, on the other hand, may represent global governance evolution, as these technocratic bodies are often well-insulated from many political challenges. Yet even if global governance retains functionality, the legitimacy implications of such trends are troubling. IO legitimacy depends in part on process, and from this standpoint, the informational gains of informal governance must be weighed against losses of accountability and transparency. Ultimately, evaluating the normative implications of these trends requires making judgments about the preferred legitimizing principles for global governance.
The first goal of this chapter is to argue that the press as an institution is entitled to special solicitude under the First Amendment, not only because it is textually specified in the Constitution or because it serves important roles such as checking public and private power, but because it can contribute to the marketplace of ideas in ways that a healthy democracy needs. In other words, the press as an institution can provide an important link between the First Amendment’s epistemic and democratic values. The chapter’s second goal is to provide a rough and preliminary sketch of the relationship between press freedom, violence, and public discourse. Some elements seem straightforward enough. Violence and harassment obstruct the press’s function, including its traditional role in constituting and shaping public discourse. Distrust, disinformation, violence, and press degradation exist in a mutually reinforcing ecosystem. And even as violence shapes the media, the media shapes the social conditions, understandings, and practice of violence in return. Journalism, albeit in different ways than legal interpretation, “takes place on a field of pain and death,” to repurpose Robert Cover’s famous phrase – not only in describing it but in making it real. This, it should go without saying, is no excuse for violence against media members. The point is, rather, that a healthy press can be a bulwark not only for knowledge and democracy but against the kinds of private and public violence that threaten both.
The chapter explores the impact of new technologies on liberal democracy, highlighting both positive and negative dimensions. E-governance, facilitated by information and communication technologies, improves efficiency, transparency, and accountability, reducing the need for in-person visits to government offices. AI-supported online voting enhances participation and prevents fraud, while social media and online communities can foster social capital. However, challenges arise as well. Social media algorithms can manipulate information, affecting public opinion, and tech giants’ dominance may influence democratic participation. Increased reliance on digital systems exposes governments to cybersecurity threats, undermining public confidence. Inequality in internet access disenfranchises those without it, leading to voter suppression and declining trust. Algorithms contribute to polarization and filter bubbles, with deepfakes impacting political discourse. In totalitarian contexts, technology aids activism against authoritarian regimes through anonymous communication and encryption. The chapter concludes by advocating strategies for maximizing benefits and minimizing harm, emphasizing digital literacy, citizen education, data privacy regulations, responsible technology use, community empowerment, activism, awareness, and accountability for ethical use by governments and tech companies. Recognizing the importance of both physical and digital connections is crucial for thriving liberal democracies.
Disinformation is a growing epistemic threat, yet its connection to understanding remains underexplored. In this paper, I argue that understanding – specifically, understanding how things work and why they work the way they do – can, all else being equal, shield individuals from disinformation campaigns. Conversely, a lack of such understanding makes one particularly vulnerable. Drawing on Simion’s (2023) characterization of disinformation as content that has a disposition to generate or increase ignorance, I propose that disinformation frequently exploits a preexisting lack of understanding. I consider an important objection – that since understanding is typically difficult to acquire, we might rely on deferring to experts. However, I argue that in epistemically polluted environments, where expertise is systematically mimicked, deference alone provides no reliable safeguard. I conclude by briefly reflecting on strategies for addressing these challenges, emphasizing both the need for promoting understanding and for cleaning up the epistemic environment.
The spread of disinformation, such as false and fabricated content, as amplified by the expansion of artificial intelligence systems, has captured the attention of policymakers on a global scale. However, addressing disinformation leads constitutional democracies towards questions about the scope of freedom of expression as the living core of a democratic society. If, on the one hand, this constitutional right has been considered a barrier to public authorities’ interference to limit the circulation of disinformation, on the other hand, the spread of fabricated content and manipulative techniques, including deepfakes, has increasingly questioned liberal views. This constitutional challenge is further enriched by the role of online platforms which, by mediating speech in their online spaces, are essential tiles in a mosaic of potential regulatory strategies and the limits of public enforcement in tackling disinformation. Within this framework, this work argues that the European constitutional approach to tackling disinformation has defined a unique model on a global scale. The European Union has developed a strategy that combines procedural safeguards, risk regulation, and co-regulation, as demonstrated by initiatives such as the Digital Services Act, the Strengthened Code of Practice on Disinformation, and the Artificial Intelligence Act. Positioned between liberal and illiberal models, the European approach proposes an alternative constitutional vision to address disinformation based on risk mitigation and the collaboration between public and private actors.
Social media has a complicated relationship with democracy. Although social media is neither democratic nor undemocratic, it is an arena where different actors can promote or undermine democratization. Democracy is built on a foundation of norms and trust in institutions, where elections are the defining characteristic of the democratic process. This chapter outlines two ways disinformation campaigns can undermine democratic elections’ ability to ensure fair competition, representation, and accountability. First, disinformation narratives try to influence elections by spreading false information about the voting process, or by targeting voters, candidates, or parties to alter the outcome. Second, disinformation undermines trust in the integrity of the electoral process (from the ability to have free and fair elections, to expectations about the peaceful transfer of power), which can then erode trust in democracy. Prior work on social media has often focused on foreign election interference, but electoral disinformation is increasingly originating from domestic, not foreign, political actors. An important threat to democracy thus comes from within — namely, disinformation about democratic elections created and shared by political leaders and elites, whose involvement increases the reach and perceived credibility of such false narratives.
This commentary examines the dual role of artificial intelligence (AI) in shaping electoral integrity and combating misinformation, with a focus on the 2025 Philippine elections. It investigates how AI has been weaponised to manipulate narratives and suggests strategies to counteract disinformation. Drawing on case studies from the Philippines, Taiwan, and India—regions in the Indo-Pacific with vibrant democracies, high digital engagement, and recent experiences with election-related misinformation—it highlights the risks of AI-driven content and the innovative measures used to address its spread. The commentary advocates for a balanced approach that incorporates technological solutions, regulatory frameworks, and digital literacy to safeguard democratic processes and promote informed public participation. The rise of generative AI tools has significantly amplified the risks of disinformation, such as deepfakes, and of algorithmic bias. These technologies have been exploited to influence voter perceptions and undermine democratic systems, creating a pressing need for protective measures. In the Philippines, social media platforms have been used to spread revisionist narratives, while Taiwan employs AI for real-time fact-checking. India’s proactive approach, including a public misinformation tipline, showcases effective countermeasures. These examples highlight the complex challenges and opportunities presented by AI in different electoral contexts. The commentary stresses the need for regulatory frameworks designed to address AI’s dual-use nature, advocating for transparency, real-time monitoring, and collaboration between governments, civil society, and the private sector. It also explores the criteria for effective AI solutions, including scalability, adaptability, and ethical considerations, to guide future interventions. Ultimately, it underscores the importance of digital literacy and resilient information ecosystems in supporting informed democratic participation.
Referendums trigger both enthusiasm and scepticism among constitutional theorists. The positive case for the referendum emphasises its ability to give the people a consequential voice on salient decisions, its capacity to break political deadlock and enrich the political agenda, its educational civic role, as well as its anti-establishment and even radically democratic potential. The negative case, conversely, focuses on the referendum’s divisiveness, propensity to be manipulated by elites, and tendency to produce ill-informed decisions. Between these two poles are various attempts to evaluate the referendum as a complement to rather than replacement for representative institutions, and to stipulate conditions for its proper institutionalisation. The spread of sophisticated disinformation campaigns and the growing interest in deliberative innovations such as mini-publics also raise new questions about referendum design, safeguards, and legitimacy. This chapter takes seriously the democratic case for the use of referendums while revisiting three areas of concern: the ambiguous place of referendums within democratic theory, including its relationship to direct, representative, and deliberative democracy; the complex interplay between referendums as majoritarian tools and minority rights; and the novel opportunities and distinct challenges to informed voter consent in the digital era, not least disinformation and fake news.
Disinformation and the spread of false information online have become a defining feature of social media use. While this content can spread in many ways, recently there has been an increased focus on one aspect in particular: social media algorithms. These content recommender systems provide users with content deemed ‘relevant’ to them but can be manipulated to spread false and harmful content. This chapter explores three core components of algorithmic disinformation online: amplification, reception and correction. These elements contain both unique and overlapping issues and in examining them individually, we can gain a better understanding of how disinformation spreads and the potential interventions required to mitigate its effects. Given the real-world harms that disinformation can cause, it is equally important to ground our understanding in real-world discussions of the topic. In an analysis of Twitter discussions of the term ‘disinformation’ and associated concepts, results show that while disinformation is treated as a serious issue that needs to be stopped, discussions of algorithms are underrepresented. These findings have implications for how we respond to security threats such as disinformation and highlight the importance of aligning policy and interventions with the public’s understanding of disinformation.
While peacekeeping operations have always been heavily dependent on host-state support and international political backing, changes in the global geopolitical and technological landscapes have presented new forms of state interference intended to influence, undermine, and impair the activities of missions on the ground. Emerging parallel security actors, notably the Wagner Group, have cast themselves as directly or implicitly in competition with the security guarantee provided by peacekeepers, while the proliferation of mis- and disinformation and growing cybersecurity vulnerabilities present novel challenges for missions’ relationships with host states and populations, operational security, and the protection of staff and their local sources. Together, these trends undermine missions’ efforts to protect civilians, operate safely, and implement long-term political settlements. This essay analyzes these trends and the dilemmas they present for in-country UN officials attempting to induce respect for international norms and implement their mandates. It describes nascent strategies taken by missions to maintain their impartiality, communicate effectively, and maintain the trust of those they are charged with protecting, and highlights early good practices for monitoring and analyzing this new operational environment, for reporting on and promoting human rights, and for operating safely.
The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of what is known as the phenomenon of hidden virality, which characterizes how unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until causing real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp’s sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp’s users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp’s affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially when combined with the fact that users cannot report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy task, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics. We bring attention to the fact that if WhatsApp does not take action to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep promoting its market advantages, leaving the burden of moderating content to fall on minoritized users.
There are two main ways Russian propaganda reaches Japan: (a) the social media accounts of official institutions, such as the Russian Embassy, or Russian state-linked media outlets, such as Sputnik, and (b) pro-Russian Japanese political actors who willingly (or unwittingly) spread disinformation and display a clear pro-Kremlin bias. These actors justify the Russian invasion of Ukraine and repeat the Russian view of the war with various objectives in mind, primarily serving their own interests. By utilizing corpus analysis and qualitative examination of social media data, this article explores how Russian propaganda and a pro-Russian stance are effectively connected with and incorporated into the discursive strategies of political actors of the Japanese Far-Right.
Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing extremist networks in ways that present many challenges to traditional party gatekeeping, institutional integrity, and other democratic principles. We present a more integrated framework that explains how digitally networked publics interact with more traditional party surrogates and electoral processes to bring once-scattered extremist factions into conservative parties. When increasingly reactionary parties gain power, they may push both institutions and communication processes in illiberal directions. We develop a model of communication as networked organization to explain how Donald Trump and the Make America Great Again (MAGA) movement rapidly transformed the Republican Party in the United States, and we point to parallel developments in other nations.