This chapter examines conservative attacks on social media, and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
This chapter discusses the interconnection between the proliferation of information sources and the emergence of a “post-truth” era. In particular, it considers what the concept of “post-truth” actually means in the context of prevailing understandings of veracity and sincerity in discourse and communication. It also places this notion against the broader discursive practice of (de)legitimization and examines how the digital environment has added layers of complexity to how users – citizens – negotiate information and the idea of truth. Particular attention is given to how mis- and disinformation in a post-truth context are proliferated and disseminated online, and to the specific features of communication that users might employ to do so. Overall, the chapter explores current understandings of the notion of post-truth in public discourse before focusing more explicitly on how it is used by influential actors such as Donald Trump. It also considers the role that post-truth discourse plays in populist discourse, as well as the issues it poses for broader online communication in the virtual context.
In contrast to conservatives, progressives argue that platforms don’t block enough content. In particular, progressive critics point to the prevalence of allegedly harmful content on social media platforms, including politically manipulative content, mis- and disinformation (especially about medical issues), harassment and doxing, and hate speech. They argue that social media algorithms actively promote such content to increase engagement, resulting in many forms of social harm, including greater political polarization. And they argue (along with conservatives) that social media platforms have been especially guilty of permitting materials harmful to children to remain accessible. As with conservative attacks, however, the progressive war on social media is rife with exaggerations and rests on shaky empirical grounds. In particular, there is very little proof that platform algorithms increase political polarization, or even that social media harms children. Moreover, while not all progressive attacks on social media lack a foundation, they are all rooted in an entirely unrealistic expectation that perfect content moderation is possible.
This paper examines the effectiveness of media literacy interventions in countering misinformation among in-transit migrants in Mexico and Colombia. We conducted experiments to assess whether well-known strategies for fighting misinformation are effective for this understudied yet particularly vulnerable population. We evaluate the impact of digital media literacy tips on migrants’ ability to identify false information and their intentions to share migration-related content. We find that these interventions can effectively decrease migrants’ intentions to share misinformation. We also find suggestive evidence that asking participants to consider accuracy may inadvertently influence their sharing behavior by acting as a behavioral nudge, rather than simply eliciting their sharing intentions. Additionally, the interventions reduced trust in social media as an information source while maintaining trust in official channels. The findings suggest that incorporating digital literacy tips into official websites could be a cost-effective strategy to reduce misinformation circulation among migrant populations.
With formal international organizations (IOs) facing gridlock and informal IOs proliferating, cooperation in the twenty-first century looks different than it did in previous eras. Global governance institutions today also face additional challenges, including a fragmented information environment where publics are increasingly vulnerable to misinformation and disinformation. What do these trends portend for international politics? One way to answer this question is to return to a core ingredient of a well-functioning IO—information provision—and ask how such changes affect efficiency. Viewed through this lens, we see decline in some arenas and adaptation in others. Formal IOs are struggling to retain relevance as their weak policy responses and ambiguous rules create space for competing signals. The proliferation of informal institutions, on the other hand, may represent global governance evolution, as these technocratic bodies are often well-insulated from many political challenges. Yet even if global governance retains functionality, the legitimacy implications of such trends are troubling. IO legitimacy depends in part on process, and from this standpoint, the informational gains of informal governance must be weighed against losses of accountability and transparency. Ultimately, evaluating the normative implications of these trends requires making judgments about the preferred legitimizing principles for global governance.
Chapter Four contends that the electronic amplification of false and misleading election-related claims poses a significant threat to American democracy. To address that threat, we urgently need government regulation of companies that provide electronic amplification services. However, the Supreme Court has created a body of First Amendment doctrine that places Congress in a constitutional straitjacket, making it almost impossible for Congress to enact the type of legislation that is urgently needed to protect our democracy. This chapter sketches the outlines of a proposed federal statute that would restrict the electronic amplification of election-related misinformation. It explains why any statute along those lines – indeed, any statute that might be moderately effective in protecting American democracy from the threat posed by the electronic amplification of misinformation – would almost certainly be deemed unconstitutional under the Court’s current First Amendment doctrine. Therefore, the Court must revise its First Amendment doctrine to help save American democracy.
Misinformation has emerged as a key threat worldwide, with scholars frequently highlighting the role of partisan motivated reasoning in misinformation belief. Yet the mechanisms enabling the endorsement of misinformation may differ in contexts where other identities are salient. This study explores whether religion drives the endorsement of misinformation in India. Using original data, we first show that individuals with high levels of religiosity and religious polarization endorse significantly higher levels of misinformation. Next, to understand the causal mechanisms through which religion operates, we field an experiment where corrections rely on religious messaging, and/or manipulate perceptions of religious ingroup identity. We find that corrections including religious frames (1) reduce the endorsement of misinformation; (2) are sometimes more effective than standard corrections; and (3) work beyond the specific story corrected. These findings highlight the religious roots of belief formation and provide hope that social identities can be marshalled to counter misinformation.
A remarkable shift in climate change misinformation has taken over social media streams. The conversation is no longer wholly absorbed with denying that climate change exists. Instead, the ‘New Denial’ is bent on condemning solutions to climate change and their supporters. Our study analyzed this shift, using a mixed-methods approach to untangle the content of over 200,000 Tweets from 2021 to 2023. We found that the New Denial is a heated political debate that often invokes common far-right arguments, falsely casts climate solutions as ineffective and risky, and attacks supporters of climate solutions.
Technical summary
Over the past five years, a ‘New Denial’ has emerged in climate change misinformation on social media. This shift marks a transition from the dominance of rhetoric centered on denying climate change science to attacks that seek to undermine and cast doubt on proposed climate solutions and those who support them. While much of the academic literature to date has explored misinformation about climate science, there is a pressing need to examine this shift and to better understand misinformation about climate change solutions specifically. In this paper, we employ a mixed-methods analysis, drawing on Twitter data from 2021 to 2023, to analyze the content of climate solution misinformation. We find that the New Denial frequently centers on politically laden debates nestled in common right-wing narratives, often attacking supporters of climate solutions as harboring ulterior motives and portraying the solutions themselves as fundamentally flawed. We use these insights to reflect on targeted interventions for climate solution misinformation on social media.
Social media summary
A New Denial is sweeping social media, no longer bent on denying climate science. Its new target: climate solutions and the people pushing for them.
Historically, local newsgatherers played a key democracy-enhancing role by keeping their communities informed about local events and holding local elected officials to account. As the market for local news has evaporated, more and more cities have become “news deserts.” Meanwhile, fewer national legacy news providers can afford to invest in the processes and expertise needed to produce high-quality news about our increasingly complex world. The true crisis of press legitimacy is the declining cultural investment in the systematic gathering of high-quality news produced by independent, transparent, and trustworthy sources.
Although scholars usually point to a handful of cultural and economic factors as undermining news quality and press credibility, various critics now identify a more covert culprit: the US Supreme Court. The Court is partly to blame for the press’s declining credibility, these critics claim, because the Court’s First Amendment decisions hinder the ability of state defamation law to hold the press accountable for defamatory falsehoods. The implication is that the press would regain much of its credibility if the Court would remove these constitutional barriers – especially the requirement that public officials and public figures demonstrate “actual malice” on the part of the press for a defamation claim to prevail. Nonetheless, as this chapter explains, the current landscape of high-profile defamation cases, and the public reaction to them, casts doubt on whether things could be so easy.
After the first 24–48 hours, a health emergency enters the maintenance phase. During the maintenance phase, health officials provide maintenance messages that contain deeper risk explanations, promote interventions, continue to make commitments to the community, and address rumors and misinformation. Health emergencies often spend a long time in the maintenance phase, so it is imperative that emergency risk communicators provide clear, coordinated, and consistent messages about the health risks. By communicating credible, accurate, and actionable health information, a health agency can demonstrate the Crisis and Emergency Risk Communication (CERC) principles of Be First, Be Right, Be Credible, Show Respect, Express Empathy, and Promote Action. The chapter provides practical steps for writing maintenance messages and outlines quick-response communication planning and implementation steps, such as identifying communication objectives, audiences, key messages, and channels, and developing communication products and materials. It also includes key tips related to spokespeople, partner agencies, and call centers to ensure message consistency during the response. The rumor management framework is highlighted. A student case study analyzes the Mpox outbreak in Louisiana using the CERC framework. Reflection questions are included at the end of the chapter.
Neil Levy’s book Bad Beliefs defends a prima facie attractive approach to social epistemic policy – namely, an environmental approach, which prioritises the curation of a truth-conducive information environment above the inculcation of individual critical thinking abilities and epistemic virtues. However, Levy’s defence of this approach is grounded in a surprising and provocative claim about the rationality of deference. His claim is that it’s rational for people to unquestioningly defer to putative authorities, because these authorities hold expert status. As friends of the environmental approach, we try to show why it will be better for that approach to not be argumentatively grounded in this revisionist claim about when and why deference is rational. We identify both theoretical and practical problems that this claim gives rise to.
While peacekeeping operations have always been heavily dependent on host-state support and international political backing, changes in the global geopolitical and technological landscapes have presented new forms of state interference intended to influence, undermine, and impair the activities of missions on the ground. Emerging parallel security actors, notably the Wagner Group, have cast themselves as directly or implicitly in competition with the security guarantee provided by peacekeepers, while the proliferation of mis- and disinformation and growing cybersecurity vulnerabilities present novel challenges for missions’ relationships with host states and populations, operational security, and the protection of staff and their local sources. Together, these trends undermine missions’ efforts to protect civilians, operate safely, and implement long-term political settlements. This essay analyzes these trends and the dilemmas they present for in-country UN officials attempting to induce respect for international norms and implement their mandates. It describes nascent strategies taken by missions to maintain their impartiality, communicate effectively, and preserve the trust of those they are charged with protecting, and highlights early good practices for monitoring and analyzing this new operational environment, for reporting on and promoting human rights, and for operating safely.
Behind the black boxes of algorithms promoting or adding friction to posts, technical design decisions made to affect behavior, and institutions stood up to make decisions about content online, it can be easy to lose track of the heteromation involved: the humans spreading disinformation and, on the other side, moderating or choosing not to moderate it. This can be aptly shown in the case of the spread of misinformation on WhatsApp during Brazil’s 2018 general elections. Since WhatsApp runs on a peer-to-peer architecture, there was no algorithm curating content according to the characteristics or demographics of the users, which is how filter bubbles work on Facebook. Instead, a human infrastructure was assembled to create a pro-Bolsonaro environment on WhatsApp and spread misinformation to bolster his candidacy. In this paper, we articulate the labor executed by the human infrastructure of misinformation as heteromation.
The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of what is known as hidden virality, a phenomenon in which unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until it causes real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp’s sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp’s users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp’s affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially in combination with the fact that users cannot report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy task, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics. We bring attention to the fact that if WhatsApp does not act to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep promoting its market advantages, leaving the burden of moderating content to fall on minoritized users.
This chapter focuses on how it is possible to develop and retain false beliefs even when the relevant information we receive is not itself misleading or inaccurate. In common usage, the term misinformed refers to someone who holds false beliefs, and the most obvious source of false beliefs is inaccurate information. In some cases, however, false beliefs arise, not from inaccurate or misleading information, but rather from cognitive biases that influence the way that information is interpreted and recalled. Other cognitive biases limit the ability of new and accurate information to correct existing misconceptions. We begin the chapter by examining the role of cognitive biases and heuristics in creating misconceptions, taking as our context misconceptions commonly observed during the COVID-19 pandemic. We then explain why accurate information does not always or necessarily correct misconceptions, and in certain situations can even entrench false beliefs. Throughout the chapter, we outline strategies that information designers can use to reduce the possibility that false beliefs arise from, and persist in the face of, accurate information.
The time lag between when research is completed and when it is used in clinical practice can be as long as two decades. This chapter considers the dissemination and implementation of research findings and explores better ways to make those findings understood and used. On the one hand, we recognize the need to get new research into practice as soon as possible. On the other hand, we challenge the trend toward rapid implementation. When results are put into practice prematurely, patients may suffer unnecessary consequences of insufficiently evaluated interventions. We offer several examples of Nobel Prize-winning interventions that had unintended harmful effects that were unknown when the prize was awarded. To address these problems, we support the need for greater transparency in reporting study results, open access to clinical research data, and the application of statistical tools, such as forest plots and funnel plots, that might reveal data irregularities.
Reading or writing online user reviews of places like a restaurant or a hair salon is a common information practice. Through its Local Guides Platform, Google calls on users to add reviews of places directly to Google Maps, as well as to edit store hours and report fake reviews. Based on a case study of the platform, this chapter examines the governance structures that delineate the role Local Guides play in regulating the Google Maps information ecosystem and how the platform frames useful information vs. bad information. We track how the Local Guides Platform constructs a community of insiders who make Google Maps better, in opposition to the misinformation that the platform positions as an exterior threat infiltrating Google Maps’ universally beneficial global mapping project. Framing our analysis through Kuo and Marwick’s critique of the dominant misinformation paradigm, one often based on hegemonic ideals of truth and authenticity, we argue that review and moderation practices on Local Guides further standardize constructions of misinformation as the product of a small group of outlier bad actors in an otherwise convivial information ecosystem. Instead, we consider how the platform’s governance of crowdsourced moderation, paired with Google Maps’ project of creating a single, universal map, helps to homogenize narratives of space that then further normalize the limited scope of Google’s misinformation paradigm.
Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing extremist networks in ways that present many challenges to traditional party gatekeeping, institutional integrity, and other democratic principles. We present a more integrated framework that explains how digitally networked publics interact with more traditional party surrogates and electoral processes to bring once-scattered extremist factions into conservative parties. When increasingly reactionary parties gain power, they may push both institutions and communication processes in illiberal directions. We develop a model of communication as networked organization to explain how Donald Trump and the Make America Great Again (MAGA) movement rapidly transformed the Republican Party in the United States, and we point to parallel developments in other nations.
Public opinion surveys are vital for informing democratic decision-making, but responding to rapidly changing information environments and measuring beliefs within hard-to-reach communities can be challenging for traditional survey methods. This paper introduces a crowdsourced adaptive survey methodology (CSAS) that unites advances in natural language processing and adaptive algorithms to produce surveys that evolve with participant input. The CSAS method converts open-ended text provided by participants into survey items and applies a multi-armed bandit algorithm to determine which questions should be prioritized in the survey. The method’s adaptive nature allows new survey questions to be explored and imposes minimal costs in survey length. Applications in the domains of misinformation, issue salience, and local politics showcase CSAS’s ability to identify topics that might otherwise escape the notice of survey researchers. I conclude by highlighting CSAS’s potential to bridge conceptual gaps between researchers and participants in survey research.
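The abstract above describes the CSAS pipeline only in outline; the paper's own implementation is not reproduced here. Purely as an illustration of the bandit step, the following minimal Python sketch shows one way a multi-armed bandit (Thompson sampling, an assumed choice) could prioritize candidate survey items, with open-ended participant responses added as new arms. The class and method names (SurveyBandit, add_item, select, update) and the engagement-based reward are hypothetical and are not drawn from the CSAS method itself.

import random

class SurveyBandit:
    """Hypothetical sketch: Thompson sampling over candidate survey items.

    Each item keeps Beta(successes + 1, failures + 1) counts; a "success"
    stands in for whatever informativeness or engagement signal the survey
    designer chooses (an assumption, not the paper's definition).
    """

    def __init__(self):
        self.items = {}  # item text -> [successes, failures]

    def add_item(self, text):
        # Open-ended participant responses become new candidate questions.
        self.items.setdefault(text, [0, 0])

    def select(self, k=3):
        # Draw one Beta sample per item and ask the top-k this wave.
        draws = {
            text: random.betavariate(s + 1, f + 1)
            for text, (s, f) in self.items.items()
        }
        return sorted(draws, key=draws.get, reverse=True)[:k]

    def update(self, text, engaged):
        # Record whether respondents found the item worth answering.
        s, f = self.items[text]
        self.items[text] = [s + 1, f] if engaged else [s, f + 1]

if __name__ == "__main__":
    bandit = SurveyBandit()
    bandit.add_item("Is local misinformation a problem in your area?")
    bandit.add_item("Do you trust official information channels?")
    bandit.add_item("Are health rumors common in your feed?")  # from open text
    for wave in range(3):
        for question in bandit.select(k=2):
            bandit.update(question, engaged=random.random() < 0.6)
    print(bandit.select(k=2))

In this toy loop, items that keep attracting engagement are sampled more often in later waves, while new participant-supplied items still get occasional exposure, which is the exploration-versus-exploitation trade-off the abstract attributes to the adaptive design.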
The study of dis/misinformation is currently in vogue, however with much ambiguity about what the problem precisely is, and much confusion about the key concepts that are brought to bear on this problem. My aim of this paper is twofold. First, I will attempt to precisify the (dis/mis)information problem, roughly construing it as anything that undermines the “epistemic aim of information.” Second, I will use this precisification to provide a new grounded account of dis/misinformation. To achieve the latter, I will critically engage with three of the more popular accounts of dis/misinformation which are (a) harm-based, (b) misleading-based, and (c) ignorance-based accounts. Each engagement will lead to further refinement of these key concepts, ultimately paving the way for my own account. Finally, I offer my own information hazard-based account, which distinguishes between misinformation as content, misinformation as activity, and disinformation as activity. By introducing this distinction between content and activity, it will be shown that my account is erected on firmer conceptual/ontological grounds, overcoming many of the difficulties that have plagued previous accounts, especially the problem of the proper place of intentionality in understanding dis/misinformation. This promises to add clarity to dis/misinformation research and to prove more useful in practice.