Cyber disinformation is a global and highly sophisticated phenomenon, capable of producing negative consequences for democratic values and institutions. This chapter argues that the individual behavior of users plays a key role in controlling the phenomenon and aims to identify the factors that influence users’ behavioral intentions and cyber hygiene behavior. It applies the Extended Theory of Planned Behavior to cyber disinformation through a Structural Equation Model estimated via Partial Least Squares–Structural Equation Modeling. The research data were collected using a questionnaire administered in Poland and Romania and analyzed with the Structural Equation Model; the model’s parameters were processed using the SmartPLS software, and the reliability of the variables was assessed using Cronbach’s Alpha and Composite Reliability. The research confirmed the applicability of the Theory of Planned Behavior model and found that Moral Norms and Perceived Behavioral Control influence Behavioral Intention and Cyber Hygiene Behavior. The findings of this chapter can provide stakeholders with important insights that can lead to improved responses to the phenomenon.
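The abstract names Cronbach’s Alpha as its reliability check without defining it. As a minimal illustrative sketch only (this is not the authors’ code; the four-item construct and the Likert responses below are hypothetical), alpha for a multi-item questionnaire scale can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point Likert responses to a four-item construct
# (e.g., Perceived Behavioral Control); each row is one respondent.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```

By convention, values above roughly 0.7 are read as acceptable internal consistency; the chapter’s actual assessment is carried out in SmartPLS alongside Composite Reliability.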
In this chapter, we explore how Israel approaches its protection from cyber threats, with a focus on disinformation. The chapter relies on primary source material in English and Hebrew and on interviews with Israeli researchers and disinformation experts. It provides an overview of the disinformation threats Israel has faced in the recent past and faces at present, diagnoses what is present and absent in legislative policy concerning disinformation, and analyzes Israel’s private-industry efforts to bolster cybersecurity defenses. Finally, our conclusion considers a variety of overarching outlooks on the future of countering internal disinformation in Israel.
Rising to speak in the House of Commons in November 1947, Winston Churchill – by then no longer prime minister but still a member of parliament, his party having been defeated in the general election of July 1945 – remarked that “No one pretends that democracy is perfect … Indeed, it has been said that democracy is the worst form of Government except for all those other forms that have been tried.” Churchill felt especially convinced that it was superior to those varieties of governance that relied upon “a group of super men and super-planners … ‘playing angel’ … and making the masses of the people do what they think is good for them, without any check or correction.” The following year, the Universal Declaration of Human Rights was adopted. While the term democracy is not mentioned, its essence is enshrined in the document, endorsed by democracies and autocracies alike: “The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage and shall be held by secret vote or by equivalent free voting procedures.”
The existence of democratic systems of government threatens the legitimacy of authoritarian regimes. Democracy presents unique opportunities and vulnerabilities, including public debate and free expression, which nefarious actors can exploit by spreading false information. Disinformation can propagate rapidly across social networks and further authoritarian efforts to weaken democracy. This research discusses how Russia and China leverage online disinformation across contexts and exploit democracies’ vulnerabilities to further their goals. We create an analytical framework to map authoritarian influence efforts against democracies: (i) through longer-term, ambient disinformation, (ii) during transitions of political power, and (iii) amid social and cultural divides. We apply this framework to case studies involving Western democracies and neighboring states of strategic importance. We argue that both China and Russia aim to undermine faith in democratic processes; however, they bring different histories, priorities, and strategies while also learning from each other and leveraging evolving technologies. A primary difference between the countries’ disinformation against democracies is their approach: Russia builds on its longstanding history of propaganda for a more direct, manipulation-driven approach, whereas China has more recently invested heavily in technological innovation for a permeating, censorship-driven approach. While it is impossible to know disinformation’s full scope and impact given the current information landscape, the growing international ambition and disinformation efforts of authoritarian regimes are credible threats to democracy globally. For democracies to stay healthy and competitive, their policies and safeguards must champion the free flow of trustworthy information. Resilience against foreign online disinformation is vital to reducing societal divides and sustaining a flourishing information environment for democracies during peaceful – and vulnerable – times.
How do the dual trends of increased misinformation in politics and increased socioeconomic inequality contribute to an erosion of trust and confidence in democratic institutions? In an era of massive misinformation, voters bear the burden of separating truth from lies as they determine where they stand on important issue areas and which candidates to support. When candidates engage in misinformation, it uncouples the already weak link among vote intentions, candidate choice, and policy outputs. At the same time, high levels of economic inequality and social stratification may contribute to lower levels of institutional trust, and the correspondingly more insular socioeconomic groups may experience misinformation differently. Social policy, as a policy area intentionally designed to alleviate risk and redistribute resources, thus becomes a special case where the effects of misinformation and socioeconomic inequality may be crosscutting and heightened.
Information is a key variable in International Relations, underpinning theories of foreign policy, inter-state cooperation, and civil and international conflict. Yet IR scholars have only begun to grapple with the consequences of recent shifts in the global information environment. We argue that information disorder—a media environment with low barriers to content creation, rapid spread of false or misleading material, and algorithmic amplification of sensational and fragmented narratives—will reshape the practice and study of International Relations. We identify three major implications of information disorder for international politics. First, information disorder distorts how citizens access and evaluate political information, creating effects that are particularly destabilizing for democracies. Second, it damages international cooperation by eroding shared focal points and increasing incentives for noncompliance. Finally, information disorder shifts patterns of conflict by intensifying societal cleavages, enabling foreign influence, and eroding democratic advantages in crisis bargaining. We conclude by outlining an agenda for future research.
Iosif Stalin, along with Adolf Hitler and Mao Zedong, constituted the Big Three dictators of the twentieth century who decisively swayed the course of world history. As is the case with all tyrants, hubris was the underlying feature of Stalin’s rule. As a Marxist, he firmly believed in the inevitability of the demise of capitalism and the ultimate triumph of socialism. As a Bolshevik, he emphatically advanced his mission of spreading war and revolution abroad and defeating world imperialism once and for all. By means of disinformation, subversion, and camouflage, Stalin covertly and openly challenged the liberal world order dominated by Britain, France, and the United States. His defiance found common political ground with his nemesis Adolf Hitler, as seen in the infamous Molotov-Ribbentrop Pact (Nazi-Soviet Pact of Non-Aggression) in August 1939. Ultimately, however, Stalin’s hubris blinded him to Hitler’s cunning, resulting in the humiliating and devastating betrayal of June 1941 (Operation Barbarossa). It was also Stalin’s hubris, however, that drove the country to victory over Nazi Germany, at unimaginable human and material costs.
This article examines the growing tension between protections for political satirical expression under Article 10 of the European Convention on Human Rights (ECHR) and emerging European regulatory efforts to combat disinformation through content moderation in the Digital Services Act (DSA). In this study, the case law of the European Court of Human Rights (ECtHR) has provided insights into the scope and contours of protected political satire under the ECHR, showing that the Court’s jurisprudence indicates strong protection for political satire. Meanwhile, in today’s social media landscape, satire is merging with disinformation, not least around elections, potentially creating a “loophole” to spread malicious disinformation disguised as satire. Both human and automated moderation systems can easily misclassify such content, and the risks of over-removal as well as under-removal are imminent. This article therefore argues that while the protection of satire and parody is essential in a democratic society, its use for malicious purposes speaks to the need for regulatory clarification on how to conduct efficient content moderation that avoids over-moderation and a chilling effect on political satire, while still identifying and mitigating risks under the DSA.
This chapter examines the phenomenon of disinformation in the digital era and its implications for freedom of expression. It explores how the rapid dissemination of false, manipulated, and misleading information – termed a ‘disinfodemic’ – poses threats to human rights, democracy, and public trust. The chapter outlines the historical roots of disinformation, the technological factors that enable it, and the responses by public and private actors to mitigate its harmful effects. The chapter differentiates between disinformation (intentional), misinformation (unintentional), and malinformation (genuine information used to harm), while highlighting their diverse forms, such as fake news, deepfakes, and conspiracy theories. Disinformation erodes public trust, affects electoral integrity, threatens public health, and harms individuals’ rights to information and privacy. The chapter emphasises the necessity of finding a balance between combating disinformation and preserving freedom of expression.
Since the Internet’s mainstream inception in the mid-1990s, the global telecommunications network has transformed from one that offered egalitarian promise to a network that often compromises democratic norms. And, as its conceptual linchpin, Internet “openness” provided the potential for technological innovation that could revolutionize both communication and commerce. The regulatory schemes introduced in the early Internet era thus sought to advance openness and innovation in the fledgling online world. But, while the times and technologies have since changed, the regulatory frameworks have largely remained the same. Accordingly, this review essay examines the Internet’s regulatory and cultural history to explore how the open values of the information age gave way to our current era of online disinformation. To do so, I reflect upon two early studies of the digital realm that have advanced discourse and scholarship on Internet openness: Lawrence Lessig’s Code: and Other Laws of Cyberspace, Version 2.0, and Christopher Kelty’s Two Bits: The Cultural Significance of Free Software. Informed by these works, along with related digital scholarship, this essay argues that remembering the history of Internet openness reveals how the free access ideals of the Internet’s foundational age have been transformed by the renewed proprietary conventions of our current disinformation era.
As digital technologies transform governance, communication, and public life, human rights frameworks must adapt to new challenges and opportunities. This book explores four fundamental questions: how digitalisation changes the application of human rights, how human rights law can respond to the challenges of digital technology, how freedom of expression applies online, and how vulnerable groups are affected by digitalisation. With contributions from leading scholars, the book combines legal analysis with insights from ethics, environmental education, and medical research. It examines critical topics such as AI regulation, platform accountability, privacy protections, and disinformation, offering an interdisciplinary and international perspective. By balancing different viewpoints, this book helps readers navigate the complexities of human rights in the digital age. It is an essential resource for anyone seeking to understand and shape the evolving landscape of digital rights and governance. This title is also available as open access on Cambridge Core.
This scoping review investigates the complex landscape of fake news research, focusing on its link with attitudinal polarization and identifying key themes in the literature. Our objectives included mapping the main themes in fake news literature, analyzing how these themes connect, examining how polarization is conceptualized across studies, and assessing how fake news and attitudinal polarization are related. Through an extensive theme analysis of fake news research sourced from the SCOPUS and Web of Science databases, we identified four major thematic areas: (1) the influence of technologies and platforms on fake news, (2) user engagement and behavioral responses to fake news, (3) fake news characteristics and their social consequences, and (4) strategies for fake news detection and countermeasures. In-depth analysis of 20 selected peer-reviewed papers revealed significant inconsistencies in the operationalization of both fake news and polarization and in the definitions of polarization. Regarding evidence on fake news’ influence on polarization, results are mixed: some studies indicate attitude reinforcement, while others find negligible effects. This scoping review highlights the need for standardized methodologies to clarify fake news’ role in attitudinal polarization and societal division, calling for a unified framework in fake news and polarization research to advance understanding of fake news’ societal impact.
This article argues that the environmental contexts of memory are vulnerable to Artificial Intelligence (AI)-generated distortions. By addressing the broader ecological implications of AI’s integration into society, this article looks beyond a sociotechnical dimension to explore the potential for AI to complicate environmental memory and its role in shaping human–environment relations. First, I address how the manipulation and falsification of memory risks undermining the intergenerational transmission of environmental knowledge. Second, I examine how AI-generated blurring of boundaries between real and unreal can lead to collective inaction on environmental challenges. By identifying memory’s central role in addressing environmental crisis, this article places emerging debates on memory in the AI era in direct conversation with environmental discourse and scholarship.
This chapter examines conservative attacks on social media, and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
With formal international organizations (IOs) facing gridlock and informal IOs proliferating, cooperation in the twenty-first century looks different than it did in previous eras. Global governance institutions today also face additional challenges, including a fragmented information environment where publics are increasingly vulnerable to misinformation and disinformation. What do these trends portend for international politics? One way to answer this question is to return to a core ingredient of a well-functioning IO—information provision—and ask how such changes affect efficiency. Viewed through this lens, we see decline in some arenas and adaptation in others. Formal IOs are struggling to retain relevance as their weak policy responses and ambiguous rules create space for competing signals. The proliferation of informal institutions, on the other hand, may represent global governance evolution, as these technocratic bodies are often well-insulated from many political challenges. Yet even if global governance retains functionality, the legitimacy implications of such trends are troubling. IO legitimacy depends in part on process, and from this standpoint, the informational gains of informal governance must be weighed against losses of accountability and transparency. Ultimately, evaluating the normative implications of these trends requires making judgments about the preferred legitimizing principles for global governance.
The first goal of this chapter is to argue that the press as an institution is entitled to special solicitude under the First Amendment, not only because it is textually specified in the Constitution or because it serves important roles such as checking public and private power, but because it can contribute to the marketplace of ideas in ways that a healthy democracy needs. In other words, the press as an institution can provide an important link between the First Amendment’s epistemic and democratic values. The chapter’s second goal is to provide a rough and preliminary sketch of the relationship between press freedom, violence, and public discourse. Some elements seem straightforward enough. Violence and harassment obstruct the press’s function, including its traditional role in constituting and shaping public discourse. Distrust, disinformation, violence, and press degradation exist in a mutually reinforcing ecosystem. And even as violence shapes the media, the media shapes the social conditions, understandings, and practice of violence in return. Journalism, albeit in different ways than legal interpretation, “takes place on a field of pain and death,” to repurpose Robert Cover’s famous phrase – not only in describing it but in making it real. This, it should go without saying, is no excuse for violence against media members. The point is, rather, that a healthy press can be a bulwark not only for knowledge and democracy but against the kinds of private and public violence that threaten both.
The chapter explores the impact of new technologies on liberal democracy, highlighting both positive and negative dimensions. E-governance, facilitated by information and communication technologies, improves efficiency, transparency, and accountability, reducing the need for physical government visits. AI-supported online voting enhances participation and prevents fraud, while social media and online communities can foster social capital. However, challenges arise as well. Social media algorithms can manipulate information, affecting public opinion, and tech giants’ dominance may influence democratic participation. Increased reliance on digital systems exposes governments to cybersecurity threats, undermining public confidence. Inequality in internet access disenfranchises those without it, leading to voter suppression and declining trust. Algorithms contribute to polarization and filter bubbles, with deepfakes impacting political discourse. In totalitarian contexts, technology aids activism against authoritarian regimes through anonymous communication and encryption. The chapter concludes by advocating strategies for maximizing benefits and minimizing harm, emphasizing digital literacy, citizen education, data privacy regulations, responsible technology use, community empowerment, activism, awareness, and accountability for ethical use by governments and tech companies. Recognizing the importance of both physical and digital connections is crucial for thriving liberal democracies.
Disinformation is a growing epistemic threat, yet its connection to understanding remains underexplored. In this paper, I argue that understanding – specifically, understanding how things work and why they work the way they do – can, all else being equal, shield individuals from disinformation campaigns. Conversely, a lack of such understanding makes one particularly vulnerable. Drawing on Simion’s (2023) characterization of disinformation as content that has a disposition to generate or increase ignorance, I propose that disinformation frequently exploits a preexisting lack of understanding. I consider an important objection – that since understanding is typically difficult to acquire, we might rely on deferring to experts. However, I argue that in epistemically polluted environments, where expertise is systematically mimicked, deference alone provides no reliable safeguard. I conclude by briefly reflecting on strategies for addressing these challenges, emphasizing both the need for promoting understanding and for cleaning up the epistemic environment.
The spread of disinformation, such as false and fabricated content, amplified by the expansion of artificial intelligence systems, has captured the attention of policymakers on a global scale. However, addressing disinformation leads constitutional democracies towards questions about the scope of freedom of expression as the living core of a democratic society. If, on the one hand, this constitutional right has been considered a barrier to public authorities’ interferences to limit the circulation of disinformation, on the other hand, the spread of fabricated content and manipulative techniques, including deepfakes, has increasingly called liberal views into question. This constitutional challenge is further enriched by the role of online platforms which, by mediating speech in their online spaces, are essential tiles of a mosaic picturing the potential regulatory strategies and the limits of public enforcement to tackle disinformation. Within this framework, this work argues that the European constitutional approach to tackling disinformation has defined a unique model on a global scale. The European Union has developed a strategy that combines procedural safeguards, risk regulation, and co-regulation, as demonstrated by initiatives such as the Digital Services Act, the Strengthened Code of Practice on Disinformation, and the Artificial Intelligence Act. Positioned between liberal and illiberal models, the European approach proposes an alternative constitutional vision to address disinformation based on risk mitigation and collaboration between public and private actors.