
The Religious Roots of Belief in Misinformation: Experimental Evidence from India

Published online by Cambridge University Press:  18 August 2025

Simon Chauchard
Affiliation:
Department of Social Sciences, University Carlos III of Madrid & Instituto Carlos III - Juan March, Getafe, Spain
Sumitra Badrinathan*
Affiliation:
Department of Politics, Governance, and Economics, American University, Washington, DC, USA
Corresponding author: Sumitra Badrinathan; Email: sumitrab@american.edu

Abstract

Misinformation has emerged as a key threat worldwide, with scholars frequently highlighting the role of partisan motivated reasoning in misinformation belief. Yet the mechanisms enabling the endorsement of misinformation may differ in contexts where other identities are salient. This study explores whether religion drives the endorsement of misinformation in India. Using original data, we first show that individuals with high levels of religiosity and religious polarization endorse significantly higher levels of misinformation. Next, to understand the causal mechanisms through which religion operates, we field an experiment where corrections rely on religious messaging, and/or manipulate perceptions of religious ingroup identity. We find that corrections including religious frames (1) reduce the endorsement of misinformation; (2) are sometimes more effective than standard corrections; and (3) work beyond the specific story corrected. These findings highlight the religious roots of belief formation and provide hope that social identities can be marshalled to counter misinformation.

Information

Type
Article
Creative Commons
CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Canonical works in political science recognize the role of religion as a prominent political force in society (Putnam Reference Putnam2000; Verba, Schlozman and Brady Reference Verba, Schlozman and Brady1995). Scholars point to religion’s influence on public policy (Grzymala-Busse Reference Grzymala-Busse2015), public opinion (Pepinsky, Liddle and Mujani Reference Pepinsky, Liddle and Mujani2018), and social cohesion (Nellis Reference Nellis2023), underscoring its potential to shape beliefs, identity, and behaviour. Simultaneously, the last decade has seen a proliferation of scholarly work focusing on understanding why people believe misinformation and ways to counter it (Wittenberg and Berinsky Reference Wittenberg and Berinsky2020; Ecker et al. Reference Ecker, Lewandowsky, Cook, Schmid, Fazio, Brashier, Kendeou, Vraga and Amazeen2022). However, work linking these two strands of research remains scarce. To explain the prevalence of misperceptions, misinformation scholars have frequently highlighted the pivotal role of partisan motivated reasoning (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017). Yet, in much of the world outside of Western democracies, religion and ethnicity significantly shape beliefs and preferences, with religious divisions influencing electoral outcomes, political participation, and other behaviours (Sircar Reference Sircar2022; McClendon and Riedl Reference McClendon and Riedl2019; Smith Reference Smith2019). Religion, both independently of partisanship and as a potential driver of it, may therefore also influence belief in misinformation.

How, if at all, does religion shape the endorsement of misinformation? We define religion as (1) adherence to a set of moral principles and (2) membership in a religiously defined identity category, and argue that religion may be connected to belief in misinformation for at least two key reasons. Adherence to longstanding religious moral principles may influence which beliefs are endorsed, while pressures to conform to religious group identities might drive the acceptance or rejection of misinformation. Building on this definition, we explore both descriptive and causal questions in this study. First, are religious beliefs and identities descriptively associated with the endorsement of misinformation? Given the scarcity of empirical evidence on the intersection of misinformation and religion, particularly outside Western contexts, establishing the existence of such a relationship is crucial. Second, if religion does influence misinformation endorsement, what mechanisms underlie this effect, and can these processes be harnessed to reduce vulnerability to misinformation? We answer these questions in the context of India, a country where religion has long been the basis for political mobilization and the formation of political parties (Chhibber and Verma Reference Chhibber and Verma2018; Brass Reference Brass2005). More recently, religious cleavages have resulted in riots as well as vigilante violence in the country, often fueled by misperceptions and rumours (Wilkinson Reference Wilkinson2006; Banaji and Bhat Reference Banaji and Bhat2019; Badrinathan, Chauchard and Siddiqui Reference Badrinathan, Chauchard and Siddiqui2025).

We rely on a combination of original descriptive data and experimental evidence, focusing on the COVID-19 pandemic, which saw a proliferation of medical misinformation and conspiracy theories (Motta, Stecula and Farhart Reference Motta, Stecula and Farhart2020; Brennen et al. Reference Brennen, Simon, Howard and Nielsen2020), alongside a catastrophic number of deaths in India. To answer our descriptive question, we employ a scale of Hindu religiosity with items measuring religious beliefs, practices, and norms, drawing on work by Verghese (Reference Verghese2020). We then show that belief in misinformation in India is strongly correlated with religiosity: those with higher levels of religiosity appear significantly more vulnerable to misinformation. Further, our evidence also suggests that the identity dimension of religion may be related to the endorsement of misinformation: in our sample, respondents who are more vulnerable to misinformation are also more likely to display affective polarization towards the religious outgroup.

Next, to understand the causal relationship between belief in misinformation and religion, we field an experiment. Building on our definition of religion as adherence to a set of principles and membership in an identity category, we explore how messaging emphasizing religious principles and religious ingroup norms affects endorsement of misinformation. We recruit a sample of Indian adults representative of the online population, thereby targeting those most often exposed to misinformation in the country (N=1600). Respondents are shown WhatsApp conversations with a misinformation stimulus, and in treatment conditions, a social correction to that misinformation by another user. We manipulate the content of this social correction, and in some treatments, additionally manipulate its source. In all treatment conditions, we test whether framing misinformation as morally problematic from a religious standpoint helps dispel falsehoods. To do so, we use original verses from ancient Hindu religious scriptures to back up corrections – these texts emphasize the importance of morality and truth. In a subset of treatment conditions, we additionally manipulate the religious identity of the group chat to signal a religious ingroup and test whether religious ingroup disapproval of misinformation further helps reduce its endorsement. We measure the effect of these treatments on the two types of popular falsehoods which circulated in India during and after the pandemic: conspiracy theories and medical misinformation.

Our results show that religiously-framed corrections are successful at shifting misinformed beliefs, in some cases outperforming standard corrections. But we also find that the efficacy of religious frames varies by type of misinformation. With regard to conspiracy theories, all religiously-framed treatments were successful at correcting misinformation, compared to a placebo control condition. Importantly, we show improvements in respondents’ ability to detect misinformation beyond the specific misinformation stimulus used in our treatments. Respondents are able to take cues from the treatment and accurately identify additional falsehoods. Next, we compare these treatments to a standard correction to evaluate whether corrective effects are due to the religious components of the treatment or simply to any corrective information. When compared to a standard social correction, including a religiously-framed moral message increases the effectiveness of corrections. Further, we demonstrate that only religious corrections significantly reduce endorsement of additional falsehoods beyond the corrected story. By contrast, for medical misinformation, a religiously-framed moral message alone fails to reduce endorsement of misinformation. However, combining it with a manipulation of group identity – and thus perceived group norms – does produce an effect (though this effect does not significantly improve upon a standard correction).

These findings have a number of implications for scholarship and policy. Most importantly, they confirm the argument that religious principles and identities drive the endorsement of misinformation. They also highlight the persistent nature of more deeply rooted misinformed beliefs. Recently viral (and thereby perhaps more salient) misinformation – such as conspiracy theories specifically about the pandemic, in this context – might be easier to correct: we find that more treatments are able to effectively attenuate these beliefs, even beyond a standard correction. However, deep-rooted beliefs which have existed since before COVID-19, such as medical misinformation relying on traditional belief systems, might be harder to dislodge, including when corrections invoke religion. Our experiment also builds on previous work on social corrections (Bode and Vraga Reference Bode and Vraga2018; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023), and suggests that further attention to the role of religion and the mechanisms through which it operates in polarized systems is warranted in the misinformation literature. Our findings provide hope that both traditional belief systems and social identities can be marshalled to reduce vulnerability to misinformation.

Theoretical Expectations

Across cultures, religion fosters moral communities, shared values, and social connection. However, scholars of the psychology of religion have long argued that the cohesion and trust within religious communities may come at the cost of rationality (Haidt Reference Haidt2012). This group embeddedness can amplify the endorsement of false beliefs and flawed reasoning, suggesting that religiously motivated reasoning may drive misinformation belief, particularly among the highly religious. This study examines this premise in the context of India – a critical case given its population, comprising one in five people globally and nearly half of those in developing countries.

The Indian Context

Indian politics has long been dominated by a fundamental cleavage between Hindus and Muslims, and the prominence of religion as a social identity has been central. It is the basis of political mobilization, nationalism, and the formation of religiously motivated political parties (Brass Reference Brass2005). In 2021, a Pew Research Center survey found that Hindus tend to link their religious identity to national identity: 81 per cent of Hindus said it was important to be Hindu to be truly Indian, while a significantly smaller proportion of respondents from other religious groups felt the same. More generally, religious divides in India have historically determined not only electoral results (Chandra Reference Chandra2007; Sircar Reference Sircar2022) but also patterns of violence and support for violence (Wilkinson Reference Wilkinson2006; Jha Reference Jha2013; Badrinathan, Chauchard and Siddiqui Reference Badrinathan, Chauchard and Siddiqui2025).

Key to understanding the prominence of religion as an identity in modern India is the Bharatiya Janata Party (BJP), which epitomizes the importance of religion, and specifically Hinduism, in popular discourse. The party frequently employs puritanical rhetoric and moral appeals (Jaffrelot Reference Jaffrelot2021), leveraging Hindu symbols and figures for political gains, resulting in narratives that sometimes rely on misinformation. Since 2014, some political leaders in India have promoted pseudoscientific remedies such as homeopathy and ayurveda, often citing their roots in traditional Hindu practices. For example, in March 2020, a Hindu religious group – supported by a member of parliament – organized a 200-person event advocating cow urine as a COVID-19 cure (Siddiqui Reference Siddiqui2020). Separately, efforts to define a national identity rooted in majoritarian values have at times been associated with conspiratorial misinformation targeting minorities, particularly Muslims (Jaffrelot Reference Jaffrelot2021). During the COVID-19 crisis, sources aligned with the ruling establishment were reported to have circulated narratives blaming minority groups for the spread of the virus (Yasir Reference Yasir2020). This misinformation is harmful: belief in miracle cures can lead to ignoring public health measures like social distancing (Bridgman et al. Reference Bridgman, Merkley, Loewen, Owen, Ruths, Teichmann and Zhilin2020), while scapegoating minorities exacerbates polarization and violence (Banaji and Bhat Reference Banaji and Bhat2019). These examples highlight how conspiracy theories and medical misinformation often invoke religious beliefs and identities, both directly and indirectly.

Because much of this misinformation circulates on encrypted platforms like WhatsApp, where the source of a message cannot be traced, its suppliers and creators often remain unidentified. However, evidence suggests that right-wing political and religious figures in India play a central role in making misinformation more salient (Perrigo Reference Perrigo2019; Singh Reference Singh2019). While the intentions behind spreading such content are hard to determine, landmark studies on rumours in South Asia (Brass Reference Brass1997; Wilkinson Reference Wilkinson2006) suggest that anti-minority claims are often disseminated intentionally to either entrench religious divides through threats of violence or deepen Hindu sentiment by framing India – a diverse and constitutionally secular nation – as primarily a Hindu country (Baishya Reference Baishya2022). Observers note that misinformation spikes around elections (Klepper and Pathi Reference Klepper and Pathi2024), with ‘ethnic entrepreneurs’ often using religion to spread unverified rumours that fuel violence for electoral gain (Wilkinson Reference Wilkinson2006; Sircar Reference Sircar2022). Social media users who believe or share such stories are likely motivated by alignment with their religious beliefs or perceptions of majority norms (Davies Reference Davies2020).

In sum, both India’s longstanding religious divides and current religious nationalist fervour underscore the possibility of a fundamental association between religion and misinformation in India (Mishra Reference Mishra2021). However, empirical scholarship to date has yet to test whether such an association exists.Footnote 1 A well-established finding in the literature on American political behaviour is that motivated reasoning affects how individuals process information (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017). With misinformation in particular, scholars underscore the importance of partisanship as the basis for motivated reasoning: even when misinformation is corrected, we are more likely to believe it if it aligns with our partisan priors. Evidence on the role of partisanship as a pivotal identity in India, however, is mixed. India’s party system is not historically viewed as ideologically structured: parties are not institutionalized (Chhibber, Jensenius and Suryanarayan Reference Chhibber, Jensenius and Suryanarayan2014), elections are highly volatile (Heath Reference Heath2005), and the party system itself is not ideological (Chandra Reference Chandra2007; Kitschelt and Wilkinson Reference Kitschelt and Wilkinson2007), at least not in a traditional sense (Chhibber and Verma Reference Chhibber and Verma2018). The recent nature of the BJP’s appeals, combined with the historical importance of religion in India, gives credence to the idea that it is not only partisanship, but perhaps also religion, that might drive belief in misinformation.

Given this intuition and findings from previous literature about the role of religiosity in promoting belief in non-rational explanations (Haidt Reference Haidt2012), our descriptive hypothesis predicts that individuals who are highly religious are more likely to endorse misinformation (Hypothesis 1).

Mechanisms of Belief in Misinformation

To determine the causal pathways through which religion might impact belief in misinformation, we field an experiment. Since we cannot manipulate religious identity or belief directly, we instead manipulate the messages respondents encounter: in the context of a correction experiment, we vary whether corrections to misinformation draw on explicitly religious principles or refer to religious ingroup identities. This allows us to test whether different types of religious frames can discourage belief in misinformation and thereby shed light on the religion-misinformation causal link.

In doing so, we build on a large literature on corrective interventions to combat misinformation. In Western contexts where misinformation spreads on public social media such as Facebook, solutions include providing fact-checks and labeling misinformation as false (Porter and Wood Reference Porter and Wood2021; Clayton et al. Reference Clayton, Blair, Busam, Forstner, Glance, Green, Kawata, Kovvuri, Martin and Morgan2019), inoculating users (Hameleers Reference Hameleers2020; Roozenbeek and van der Linden Reference Roozenbeek and van der Linden2019), and priming the concept of accuracy (Pennycook and Rand Reference Pennycook and Rand2019). However, in India, as in much of the developing world, information is largely spread through encrypted platforms such as WhatsApp (Gil de Zúñiga, Ardèvol-Abreu and Casero-Ripollés Reference Gil de Zúñiga, Ardèvol-Abreu and Casero-Ripollés2019; Valeriani and Vaccari Reference Valeriani and Vaccari2018). Consequently, platform-based interventions such as adding a false label are not applicable, and solutions to correct misinformation online must necessarily stem from users correcting each other (Vraga, Bode and Tully Reference Vraga, Bode and Tully2020; Bode and Vraga Reference Bode and Vraga2018; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023). Accordingly, we focus on social corrections in this study and build on a small but growing literature highlighting the role of peers correcting each other in online settings (Heiss et al. Reference Heiss, Nanz, Knupfer, Engel and Matthes2023; Vijaykumar et al. Reference Vijaykumar, Rogerson, Jin and de Oliveira Costa2022; Kligler-Vilenchik Reference Kligler-Vilenchik2022).

Group identities, particularly those based on religion, are strong social cleavages in India, and the online environment of WhatsApp may intensify these divides. Users often join private group chats centred on political, religious, or social causes (Chauchard and Garimella Reference Chauchard and Kiran Garimella2022), and such groups are frequently divided along religious lines (Saha et al. Reference Saha, Mathew, Garimella and Mukherjee2021). The insular nature of these private chats can increase vulnerability to misinformation (Kalogeropoulos and Rossini Reference Kalogeropoulos and Rossini2023): WhatsApp’s intimacy fosters a sense of solidarity, making misinformation more likely to be trusted (Davies Reference Davies2020). Indeed, research shows that homophily in networks correlates with increased belief in misinformation (Acemoglu, Ozdaglar and Siderius Reference Acemoglu, Ozdaglar and Siderius2021). Our interview data underscores these intuitions. One respondent explained why she believed a piece of medical misinformation on a WhatsApp group, emphasizing the role of religion in information processing:

It is the right thing to do. Our Hindu religion teaches us that it is the right thing to do – and this is what it truly means for me to be a part of Hindu history and culture, and to pass it down to my children.

Other respondents highlighted group identity and ingroup norms as drivers of information sharing. One participant noted:

Sometimes even if I’m not sure if something is true or not, I don’t want to be the only person not sharing something on the group. So I find any message I think will be popular, I forward it to the [Hindu religious] group. Then if many people like it, I come to know it is true.

These examples show that adherence to religious principles can both drive the endorsement of misinformation and justify such beliefs. Additionally, conformity to religious ingroup norms can intensify pressures to share and endorse information. We conclude that if we are able to challenge these notions – that religion requires adhering to a fixed set of beliefs or that being a ‘good’ member of a religious ingroup entails certain ideas – we could help reduce the endorsement of misinformation.

With this reflection in mind, we design corrections that are meant to appeal to the same psychological traits that make people vulnerable to falsehoods to begin with (Nyhan Reference Nyhan2021). While recent evidence suggests that all types of information can persuade and motivated reasoning can often be overcome (Coppock Reference Coppock2023), we argue that value-based and identity-congruent treatments may be particularly effective in our context due to key differences. Much of the prior research on this topic, including Coppock (Reference Coppock2023), comes from Western settings, where corrections rarely backfire (Porter and Wood Reference Porter and Wood2019). However, limited evidence from India suggests that intensive treatments may fail to drive meaningful change or could worsen outcomes for individuals with strong social identities (Badrinathan Reference Badrinathan2021). This suggests that not all types of information may be equally persuasive in our context. Attwell and Freeman (Reference Attwell and Freeman2015) show, for example, that value-based treatments are more effective in Australia, aligning with other studies that highlight the impact of identity-congruent correction sources (Berinsky Reference Berinsky2017). Beyond misinformation, similar effects have been observed in other domains, such as religious appeals to promote conservation efforts in Jordan (Buccione Reference Buccione2023), religious appeals in Indonesia to improve debt repayment (Bursztyn et al. Reference Bursztyn, Fiorin, Gottlieb and Kanz2019), and even in the US context, where religious appeals increase support for refugees among the most religious (DeMora et al. Reference DeMora, Merolla, Newman and Zechmeister2024). These studies highlight the potential power of interventions rooted in morality, shared values and identity.

We first argue that religion may influence the endorsement of falsehoods because such misinformation can align with longstanding religious beliefs or principles, making its endorsement have moral value. In other words, religious individuals might accept misinformation to avoid cognitive dissonance (Taber and Lodge Reference Taber and Lodge2006). Building on this idea, all our corrective treatments aim to reduce respondents’ dissonance and the perceived moral pressure to embrace misinformation. In addition to morality, we also consider religion as an identity and the role of perceived ingroup preferences. Simply addressing cognitive dissonance may not be sufficient if individuals feel compelled to endorse a piece of information because others in their ingroup do. Indeed, expressing misinformed beliefs may be driven by perceived group norms: individuals may endorse misinformation because they believe others do, and fear of social alienation can increase pressure to conform (Kahan et al. Reference Kahan, Peters, Dawson and Slovic2017). WhatsApp group chats, often organized around social and political causes (Davies Reference Davies2020), can amplify these pressures by fostering unwritten norms that encourage conformity (Chadwick, Vaccari and Hall Reference Chadwick, Vaccari and Hall2023; Kalogeropoulos and Rossini Reference Kalogeropoulos and Rossini2023). For example, research shows that prejudices and hateful rhetoric are typically constrained by values and norms, but are expressed when the situation allows for justification (Crandall and Eshleman Reference Crandall and Eshleman2003). Thus, altering perceived group norms around a belief may reduce its endorsement. This aligns with recent calls from misinformation scholars to focus on changing norms as a strategy for building healthier online communities (Blair et al. Reference Blair, Gottlieb, Nyhan, Paler, Argote and Stainfield2023).Footnote 2

We thus posit that social corrections using religious content to alleviate cognitive dissonance will reduce misinformation endorsement relative to a control condition (Hypothesis 2a). As noted above, we hypothesize these effects because our religious treatment not only primes religious membership but also explicitly encourages moral behaviour. Additionally, we hypothesize that social corrections combining religious content with manipulations of perceived group norms will be effective in reducing misinformation endorsement compared to a control condition (Hypothesis 2b).Footnote 3 We also hypothesize that the effectiveness of religious corrections is a function of the strength of an individual’s religiosity. Specifically, highly religious respondents will be more likely to engage with and be influenced by a religious frame, so we expect the efficacy of corrections to increase with higher religiosity (Hypothesis 3). Additionally, we explore one pre-registered research question: to benchmark the effectiveness of religiously-framed corrections, we compare them to a standard social correction without a religious frame (RQ 1). This comparison helps us assess the relative efficacy of different correction types, not just in comparison to a control group.Footnote 4

Method and Design

To test these hypotheses, we collected original survey data in India (N = 1600) after the second wave of the COVID-19 pandemic in 2021. The first goal of our survey was to field an extensive module of attitudes and perceptions to descriptively evaluate the correlation between religious beliefs and misinformation. Key in our descriptive measures is an index of Hindu religiosity. We build on Verghese (Reference Verghese2020) in conceptualizing Hinduism as practice-centred, and consequently operationalize religiosity as a function of rites and rituals, including features of everyday life such as attire, food habits and adherence to norms. To measure religiosity, we constructed a scale of eight items with questions that measure the practice of the Hindu religion on a quotidian basis, including frequency of prayer, the need to consult an astrologist before fixing a wedding date, frequency of religious fasting, and others.Footnote 5 Next, our survey included a pre-registered experiment. In our experiment, respondents were randomly assigned to one of five conditions in a between-subjects design (see Figure 1), of which four were treatment conditions and the fifth was a placebo control condition.

Figure 1. Experimental Flow.

Treatment Conditions

In all conditions, respondents read fictional but realistic screenshots of conversations on WhatsApp. The screenshots displayed a conversation between two users in a private WhatsApp chat group. In all treatment conditions (the first four conditions in Figure 1), the first user posts a piece of misinformation. In response, the second user uses a variety of correction strategies corresponding to our different treatment groups. In the Religious Message treatment, the social correction of the second user relies on a religious frame. To craft this message, we found real quotes from ancient Hindu religious scriptures that discuss either the truth as an important virtue or the imperative not to slander. The user in the conversation who corrects misinformation posts a verse from these Hindu religious scriptures (the Bhagavad Gita or the Mahabharata) alongside Hindu religious iconography, which together exhort people to consider the truth.Footnote 6

This technique builds on prior work on the importance of issue framing, shown to be successful in using religious frames to shape responses to climate change and other polarizing issues (Goldberg et al. Reference Goldberg, Gustafson, Ballew, Rosenthal and Leiserowitz2019). It also builds on work emphasizing that unlikely sources are more effective, as when Democrats contradict Democrats or when Republicans endorse vaccines (Larsen et al. Reference Larsen, Hetherington, Greene, Ryan, Maxwell and Tadelis2023; Porter and Wood Reference Porter and Wood2019). False messages about miracle cures in India often exhort readers to believe in homespun remedies since they uphold sacred truths from religious scriptures (Sachdev Reference Sachdev2017). In our treatment, we leverage this frequent recourse to religion by demonstrating that religious sources themselves may emphasize restraint from slander and value the truth.

Next, our Message + Religious Group and Message + Partisan Group treatments test whether additionally relieving perceived pressures to conform to the ingroup can attenuate endorsement of misinformation. To manipulate ingroup membership, these WhatsApp groups signal the purpose and identity of the group: the name of the group chat is revealed so as to prime membership to an explicitly religious (Hindu) group or to a religious-partisan group (the BJP).Footnote 7 Concretely, these treatments involve a correction to misinformation, with the correcting user emphasizing the importance of verifying questionable information before posting. Importantly, the corrective treatment is additive: we build on the Religious Message by incorporating both the group norm and group name aspects in the treatment. The aim of these treatments is to measure whether religious messages alone can correct misinformation or if manipulating ingroup norms is also necessary. These treatments contribute to a growing body of research demonstrating that structured communication networks can significantly promote social learning, reducing partisan biases on contentious political issues (Becker, Brackbill and Centola Reference Becker, Brackbill and Centola2017; Vraga and Bode Reference Vraga and Bode2017). To address potential validity concerns, we recognize that Hindu ingroups in present-day India may often overlap with partisan (BJP) groups, and thus test the treatment with both identity labels.

Thus, all these treatments include a moral message about religion, with some also incorporating cues about group membership. Unlike other identity categories in India, such as ethnicity or caste, religion’s distinctiveness may lie in its moral dimension, alongside its shared group membership aspect. Thus, all three of our religious treatments emphasize morality, with some also addressing group membership, reflecting the idea that morality may be a defining feature of religion.

To test our hypotheses, we compare the effect of these treatments to both a standard correction and a placebo control. Our Standard Correction treatment provides a social correction without religious content or attempts to shift group norms. In this treatment, the correction is simple and direct: the second user states that the first user’s claim is incorrect. This condition helps isolate whether the observed corrective effects are due to religious messaging or merely exposure to any social correction. We also compare these conditions to a placebo control, where respondents read a WhatsApp conversation on a neutral topic like wildlife or sports, with no misinformation.Footnote 8

We repeat this experimental flow for two issue blocks, (1) conspiracy theories and (2) medical misinformation. We randomize both the block and statement order within each block. Thus, respondents see two successive conversations on WhatsApp, each followed by outcome measures pertaining to the topic of the conversation. They remain in the same randomized condition throughout the experiment. All treatment stimuli are available in Online Appendix B. We underscore here that our primary objective in this study is to influence (reduce) the expression of misinformed beliefs. Research shows the prevalence of expressive responding in surveys (Bullock et al. Reference Bullock, Gerber, Hill and Huber2015; Prior, Sood and Khanna Reference Prior, Sood and Khanna2015). Our treatments do not aim to teach citizens how to distinguish true from false; instead, they aim to shift thinking and norms around belief expression, thereby reducing misinformation endorsement.

Outcomes

We measure the effect of these treatments on the perceived accuracy of two sets of headlines: conspiracy theories and medical misinformation. Importantly, the headlines in our outcome measure include the specific piece of misinformation corrected in the treatment, as well as three additional misinformation headlines, along with true headlines. Thus, we are able to measure whether the treatment reduced belief in false headlines beyond the specific story corrected.Footnote 9

Relying on these data, our main outcome of interest, in line with our PAP as well as previous research in this context (Badrinathan Reference Badrinathan2021; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023), is a count of respondents’ ability to correctly identify true and false stories.Footnote 10 Importantly, because we measure respondents’ endorsement of the claim that was discussed in the treatment, as well as their endorsement of other claims, we are additionally able to evaluate whether each correction’s effect extends beyond the specific story corrected in the treatment. The list of headlines that comprise this measure, as well as the rationale for their selection, is available in Appendix C and as part of Figure 2 below.

Figure 2. Belief in Misinformation in our Sample.

Sample Characteristics

We recruited 1,600 adult respondents in India through an online panel maintained by an online polling firm, Internet Research Bureau (IRB). Respondents were selected to be as representative as possible of the Indian adult population by age, gender and region. As with most online panels in India, while our sample is not representative of the entire Indian population, it is representative of the subset that has Internet access, which is skewed towards educated, wealthy, pro-BJP and upper-caste male respondents. These online respondents are also most likely to be victims of political or other disinformation campaigns spread on the Internet, as they are the population often recruited into WhatsApp groups (Chauchard and Garimella Reference Chauchard and Kiran Garimella2022). Thus, the online Indian population is an ideal target to test our hypotheses. Finally, because of medical and ethical concerns during the pandemic, we determined that the safest way to run such a study would be online as opposed to in-person, so as not to put potential survey enumerators in harm's way. Key demographics of the sample are in Appendix D. Note that we deliberately limit our sample to Hindu respondents, to match the 'Hindu' nature of our corrections. While parallel conditions adapted to other religions are possible, we focus here on the majority group in India to maximize the availability of a large sample.

Results

We first discuss descriptive findings on the prevalence of misinformation in our sample, and crucially, whether religiosity correlates with belief in misinformation. Next, we present the main effect of our experimental treatments on vulnerability to misinformation. Finally, additional tests compare the relative effectiveness of different treatment conditions, including robustness checks.

Descriptive Findings

Figure 2 shows the 12 stories that comprise our misinformation outcome measure, plotting the percentage of respondents who incorrectly assessed each headline, indicating their vulnerability to misinformation. For false stories, this represents the percentage of respondents who believed the headline was true; for true stories, it shows the percentage who thought the headline was false. Two key observations stand out. First, respondents endorse misinformation at high rates, with over 50 per cent of respondents endorsing each false headline, and some stories seeing even higher endorsement rates. For instance, more than three-quarters of the sample believed the claim that Covid is a Chinese biowarfare weapon, and about 65 per cent agreed that homeopathy – an alternative medicine system with roots in traditional Hindu culture – can cure Covid. These high levels of endorsement align with previous research on misinformation in India (Guess et al. Reference Guess, Lerner, Lyons, Montgomery, Nyhan, Reifler and Sircar2020). Second, respondents were more likely to misclassify false stories than true ones, with fewer wrongly identifying true headlines as false. On average, respondents correctly classified 6.02 out of 12 stories, underscoring widespread vulnerability to misinformation.

Next, we sought to determine to what extent vulnerability to misinformation is correlated with respondents’ religiosity. To measure vulnerability to misinformation, we count the number of headlines that respondents correctly classified as true or false. To measure religiosity, we create a continuous scale using the battery of eight items described in Appendix K. We score each of the items such that higher values indicate that someone is more religious; we then add the eight scores and standardize the measure such that we have a scale of religiosity with mean 0 and standard deviation 1.
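The construction of this religiosity scale can be sketched as follows. This is a minimal illustration with simulated responses and hypothetical item names, not our replication code; it assumes each of the eight items is already scored so that higher values indicate greater religiosity.

```python
import numpy as np
import pandas as pd

# Simulated responses: eight religiosity items on a 1-4 scale,
# with higher values indicating greater religiosity (hypothetical names)
rng = np.random.default_rng(0)
df = pd.DataFrame({f"item_{i}": rng.integers(1, 5, size=200) for i in range(1, 9)})

# Sum the eight item scores, then standardize to mean 0, standard deviation 1
raw = df[[f"item_{i}" for i in range(1, 9)]].sum(axis=1)
df["religiosity"] = (raw - raw.mean()) / raw.std()

print(df["religiosity"].mean(), df["religiosity"].std())
```

The resulting continuous measure allows religiosity to enter regressions in standard-deviation units, which is how the coefficients in our descriptive analyses should be read.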

In Figure 3, we graph the predicted number of stories accurately classified as a function of religiosity and demonstrate that those who score low on the religiosity scale are significantly better at discerning true from false information relative to those who score high on the religiosity scale. In fact, respondents with the lowest levels of religiosity are able to correctly classify almost double the number of headlines (about 9 headlines) relative to respondents with the highest levels of religiosity (about 4.5 headlines). In line with Haidt’s (Reference Haidt2012) argument, this finding highlights that religious respondents tend to be more gullible regarding information in general, and falsehoods in particular.Footnote 11

Figure 3. Belief in Misinformation By Religiosity.

We thus find strong support for our hypothesis (H1) that religiosity is descriptively associated with endorsement of misinformation: the most religious subset of our sample is nearly twice as vulnerable to misinformation as the least religious. We also find that the relationship between religiosity and belief in misinformation holds when controlling for several other covariates, most crucially party identity (see Appendix H), which suggests that religiosity does not merely proxy for support for the ruling religious party. Further, since we posit that religion is about social identity as well as about morality or beliefs, we examine whether religious affective polarization is linked to endorsement of misinformation. We measure religious polarization by asking respondents whether they would be upset if a friend married someone who was a Muslim. We find that respondents who report being less upset (that is, those who are less affectively polarized) on this measure are significantly more likely to correctly identify misinformation. That is, those who are less religiously polarized are also less vulnerable to misinformation (see Appendix J). These descriptive findings underscore that both religious practice and antipathy towards religious outgroups are associated with the endorsement of misinformation.

In sum, these analyses give weight to the argument that vulnerability to misinformation has religious roots. Endorsing misinformation is a function not just of individuals’ religious beliefs, but also of their affect towards religious outgroups.

Experimental Findings

Since religiosity strongly correlates with the endorsement of misinformation, can religious beliefs and identities be leveraged for good? We now move to discussing experimental results. All estimates are based on ordinary least squares (OLS) regressions.

To test H2a and H2b, we evaluate the effect that the different treatments have on respondents’ endorsement of misinformation relative to the placebo control condition. Results are presented in Table 1. Our main outcome of interest is a count of respondents’ ability to classify true and false stories in a set of six stories. Per our pre-registration, we estimate the effect of each treatment separately for conspiracy theory misinformation (Column 1) and medical misinformation (Column 2).Footnote 12
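The estimation strategy behind Table 1 can be sketched as follows. This is an illustrative simulation with hypothetical variable names, not our replication code: the outcome is regressed on treatment indicators, with the placebo control as the omitted category, so each coefficient is that treatment's average effect on the count of correctly classified stories.

```python
import numpy as np

# Simulated stand-in for the survey data (hypothetical arm labels)
rng = np.random.default_rng(1)
arms = ["placebo", "standard", "religious_msg",
        "msg_religious_group", "msg_partisan_group"]
condition = rng.choice(arms, size=1600)
# Outcome: 0-6 count of correctly classified stories in one issue block,
# with a small artificial bump for treated respondents
correct = rng.integers(0, 7, size=1600).astype(float)
correct += (condition != "placebo")

# OLS design matrix: intercept plus one dummy per treatment (placebo omitted)
X = np.column_stack(
    [np.ones(len(condition))]
    + [(condition == a).astype(float) for a in arms[1:]]
)
beta, *_ = np.linalg.lstsq(X, correct, rcond=None)
# beta[0] recovers the placebo-group mean; beta[1:] are effects vs placebo
print(dict(zip(["intercept"] + arms[1:], beta.round(2))))
```

Because the model is saturated in treatment dummies, the intercept equals the placebo-group mean exactly, and each remaining coefficient is a difference in group means, which is the quantity reported in Tables 1 and 3.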

Table 1. Main Effect of Treatments (Count DV)

Note: *p < 0.05; **p < 0.01; ***p < 0.001.

Table 2. Main Effect of Treatments (Discernment DV)

Note: *p < 0.05; **p < 0.01; ***p < 0.001.

Results in Table 1 demonstrate that when it comes to conspiracy theories, all treatments significantly decrease endorsement of misinformation. In addition, these effects are substantively large, with those in the Religious Message treatment group demonstrating about a 16 per cent decrease in vulnerability to misinformation relative to control. Although smaller in magnitude, we also see a significant effect of receiving the Standard Correction, demonstrating that even minimal corrections may be able to improve information processing, mirroring existing findings from this context (Badrinathan and Chauchard Reference Badrinathan and Chauchard2023). These results also show interesting variation based on whether the headline itself is about the Muslim minority (see Appendix N).Footnote 13

However, for medical misinformation, we find that while respondents in the Message + Religious Group and Message + Partisan Group treatments are significantly better than placebo group respondents at identifying misinformation, this effect does not obtain for the Religious Message treatment. While this treatment produced the largest positive effect for conspiracy theories, its impact is negligible in the case of medical misinformation: the average treatment effect is indistinguishable from zero. It is important to note that these are additive treatments; hence, the religious and partisan group treatments add an additional layer to the information being presented in the Religious Message treatment, by revealing group norms and the group name. Additionally, the standard correction is insignificant for medical misinformation.Footnote 14

These findings suggest that the effectiveness of correction strategies depends on the type of misinformation (for example, conspiracies v. medical falsehoods). The mechanisms underlying endorsement of conspiracy theories and medical misinformation thus appear distinct, necessitating tailored approaches for correction. COVID-19 conspiracy theories, such as claims about biowarfare or deliberate virus spread by minority groups, are novel narratives specific to the pandemic. By contrast, medical misinformation in India often involves miracle cures or home remedies linked to entrenched beliefs in alternative medicine systems like homeopathy. These longstanding belief systems may make medical misinformation more resistant to change.

Our findings demonstrate that even standard corrections work to reduce the expression of conspiracy theory beliefs in India, though corrections that draw on religious sources are able to achieve effects of greater magnitude. But for misinformation relying on longstanding belief systems, in addition to religious messaging, tapping into group identity appears crucial, reinforcing the idea that information processing can be affected by elites in networks, or when group norms are fostered with a focus on veracity. These findings also confirm our own qualitative evidence that users in homophilic groups might be pressured into saying they believe certain types of information, whether or not they actually do so. For such deep-rooted misinformation, shifting the norms of information sharing in such contexts appears crucial.

Importantly, we also find that some treatments work beyond the specific story corrected. That is, on receiving a correction for one story, we find a spillover effect that carries forward to other stories. To analyze this, we recalculate our count outcome measure, omitting the specific story that was corrected in the treatment (see Appendix I). This analysis demonstrates that for conspiracy theories, every treatment except the standard correction achieves a significant effect. While the standard correction worked on the specific story that was corrected, spillover effects for non-corrected stories are only seen with the religious message treatments. Crucially, these results suggest that the religious treatments have a comparatively stronger effect overall than the standard correction and that they can have spillover effects on stories that are not directly corrected.

We confirm the robustness of the results in Table 1 by controlling for key demographic and pre-treatment covariates (Appendix E); the main results remain unchanged. We also replicate these findings, controlling for respondent attention during the survey (Appendix F). Finally, we re-run our analyses with a discernment outcome, which calculates the difference between the average accuracy rating for true and false stories.Footnote 15 In Table 2, we find that the main results hold: religious treatments improve respondents' ability to distinguish true from false information. However, while the results point in the same direction, significance levels are slightly reduced, rendering some effects from Table 1 insignificant. For example, the Message + Religious Group treatment's effect on belief in conspiracies loses significance. Notably, this is also true for estimates related to standard corrections, suggesting that only religiously framed messages consistently influenced belief discernment, highlighting the unique impact of religious frames in this context.
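The discernment outcome can be sketched as follows, again as a minimal illustration with simulated ratings rather than our replication code: for each respondent, the average accuracy rating given to false stories is subtracted from the average given to true stories.

```python
import numpy as np

# Simulated binary accuracy ratings (1 = rated accurate) for six true and
# six false headlines per respondent
rng = np.random.default_rng(2)
true_ratings = rng.integers(0, 2, size=(200, 6))   # ratings of true stories
false_ratings = rng.integers(0, 2, size=(200, 6))  # ratings of false stories

# Discernment: mean accuracy rating for true stories minus mean for false
# stories; higher values indicate better true/false discrimination
discernment = true_ratings.mean(axis=1) - false_ratings.mean(axis=1)
print(discernment.shape)  # (200,): one score per respondent
```

Unlike the count measure, this outcome separates a genuine ability to discriminate from an overall tendency to rate all stories as true or all as false, which is why we report it as a robustness check.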

Next, we test the hypothesis that religious frames are particularly effective for highly religious respondents (H3) by interacting our continuous religiosity measure with a treatment assignment indicator. We find that treatment effects did not vary by religiosity: respondents updated their beliefs regardless of religiosity level (Appendix G). This suggests that the moral weight of religious imperatives resonates broadly with respondents, irrespective of individual religiosity. We also hypothesized that stronger religious or partisan group identities would enhance receptiveness to messaging invoking group norms. However, these effects likewise did not vary with religiosity. These findings imply that the treatments’ impact extends across the sample, making them more broadly effective than anticipated and not limited to specific subgroups.Footnote 16
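The moderation test for H3 can be sketched as follows. This is an illustrative simulation with hypothetical variable names: the outcome is regressed on a treatment indicator, the standardized religiosity scale, and their product, with the coefficient on the product term testing whether treatment effects vary with religiosity.

```python
import numpy as np

# Simulated data with no true interaction built in (hypothetical names)
rng = np.random.default_rng(3)
n = 1600
treated = rng.integers(0, 2, size=n).astype(float)   # pooled treatment indicator
religiosity = rng.normal(size=n)                     # standardized scale
outcome = 3 + 0.8 * treated + rng.normal(size=n)

# Interaction model: outcome ~ treated + religiosity + treated:religiosity
X = np.column_stack([np.ones(n), treated, religiosity, treated * religiosity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(round(beta[3], 3))  # interaction estimate; small here by construction
```

A null interaction coefficient, as in Appendix G, indicates that the treatment effect is statistically indistinguishable across levels of religiosity.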

Finally, to benchmark main effects, we ascertain whether religious messaging and group identity treatments performed better than a standard correction. This allows us to evaluate whether the corrective effects we observe are due to the religious elements of the treatments or simply to exposure to any corrective information. Table 3 presents results where we switch the omitted category in the specification to the Standard Correction treatment.

Table 3. Main Effects Relative to the Standard Correction

Note: *p < 0.05; **p < 0.01; ***p < 0.001.

Looking at conspiracy theories (Column 1), we find that the Religious Message treatment is the only one able to improve upon the standard correction. This is a crucial finding: while all of our experimental treatments performed better than the placebo control, when compared to a standard correction, only the Religious Message treatment achieved a statistically significant effect. Interestingly, we show that the additive treatments invoking group norms are statistically indistinguishable from the standard correction, even though the Message + Partisan Group treatment comes close to conventional significance levels.Footnote 17 Moreover, as shown in Appendix I, both the Religious Message treatment and the Message + Partisan Group treatment significantly improve on the standard correction when it comes to spillover effects (endorsement of misinformation other than the claim corrected in the treatment). This finding underscores that religious corrections reduced endorsement of conspiracies at greater rates than standard corrections.

However, looking at medical misinformation (Column 2), we find that the three treatment groups remain statistically indistinguishable from the standard correction, similar to Table 1. Even though effects remain insignificant, the sign on the two Message + Group treatments suggests that shifting group norms may be effective at dispelling misinformation that is more salient or has circulated in public discourse for longer. While our relatively small N may constrain our ability to identify such differences between corrective treatments, these findings suggest that relying on religious frames alone may not strongly improve on standard corrections for this type of deep-rooted and more salient information.

Consequently, we may take these findings to mean that the mechanisms through which religion operates are different depending on the type of misinformation at hand. We posit that beliefs in conspiracy theories can be altered via religious frames, which include a moral message. Our Religious Message treatment is centred around a message with a moral imperative: believe the truth and do not slander others. This may suggest that simple, moral messaging is most effective at reducing the endorsement of recent and topical misinformation. Similar to research showing that heightening a sense of civic duty (that is, citizens have an obligation to get the facts right) can reduce partisan motivated reasoning (Mullinix Reference Mullinix2018), we demonstrate that moral imperatives about other groups in society are effective in combating conspiracy theory misinformation.

Results for medical misinformation suggest a different conclusion, namely, that moral messaging may be insufficient. Miracle cures are tied to social norms in the Indian context: the idea that home remedies and alternative medicinal systems can cure diseases is passed down the generations in Indian society (Malhotra Reference Malhotra2023). These ideas are so firmly entrenched that disbelief in them may come with social stigma or fear of alienation. Further, because these are longstanding beliefs not specific to the COVID-19 crisis, they may also be generally more salient. For such deep-rooted beliefs, simple moral messaging (‘believe only the truth’) may be ineffective, as evidenced by the precise null result on that coefficient.

Discussion and Conclusion

In this paper, we present new evidence on the religious roots of misinformation in India as well as ways to mobilize religious identity for social good. We first find a strong connection between religiosity and belief in COVID-19 misinformation. Those who score high on a religiosity scale and display religious affective polarization are significantly more likely to endorse misinformation. Second, in the context of an experiment, we show that corrective treatments, including religious frames, are effective at reducing the endorsement of misinformation, sometimes more effective than standard corrections, and work beyond the specific story corrected. This suggests that religion and endorsement of misinformation are causally related, and more importantly, that religious beliefs and identities may provide a promising basis on which to build more effective corrections.

These findings suggest that many Indians, particularly Hindus (over 80 per cent of the population), are open to interpreting health crises through a religious lens. The effectiveness of religious messages in framing misinformation as problematic, even among highly religious individuals, is both novel and significant. This highlights the malleability of misinformation susceptibility to religiously framed interventions, diverging from prior research emphasizing the constraints of motivated reasoning (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017) while aligning with studies indicating belief updating is unaffected by such biases (Coppock Reference Coppock2023). These findings underscore the broader advantages of issue framing and its potential to shape downstream public opinion (Druckman and Nelson Reference Druckman and Nelson2003; Jerit Reference Jerit2008). They also highlight the effectiveness of shifting group norms within polarized and homophilic groups, suggesting the potential for such strategies to influence future political behaviour (Dinas, Martinez and Valentim Reference Dinas, Martinez and Valentim2023).

That respondents can use cues from the treatment to identify additional falsehoods is significant. While Kahneman and Tversky (Reference Kahneman and Tversky1984) argue that individuals readily engage in discriminatory discourse when given the opportunity, our treatments provide a framework that encourages respondents to pause and reflect before expressing beliefs in group settings. We do not equip individuals with tools to enhance scientific aptitude: our treatments do not teach critical thinking skills or techniques to spot misinformation. Rather, we underscore that our treatments likely alter social norms and leverage respondents’ moral and religious sensibilities. Since our goal is to shift belief expression rather than beliefs themselves, we are less concerned about social desirability bias here. If respondents do indeed adjust their responses to appear more socially desirable, this is still a valuable outcome: shifting what citizens think is acceptable to state publicly in a group setting is consequential, especially in polarized societies.

Despite these positive findings, we consider some limitations of the study and avenues for future research. First, we note that while we focus on religion in this paper, we cannot truly disentangle the causal effects of religious and partisan identity. In the Indian context, while religion itself has been a long-standing social cleavage, parties tap into religious beliefs in order to further their own causes (Wilkinson Reference Wilkinson2006). In our data, too, religiosity is correlated with increasing support for the BJP. Thus, our data do not allow us to disentangle the relative influence of religion and partisanship, and we remain agnostic about their relative weight as drivers. While it is theoretically likely that religion drives beliefs in misinformation, we cannot empirically determine with our design whether this relationship is orthogonal to party identity.

Next, we underscore that a core element of our treatment – verses from Hindu religious texts – is necessarily context-specific. However, we believe the premise of our study, the idea that treatments should target mechanisms and identities that drive belief in falsehoods in the first place, is applicable to several other contexts. Other developing countries, such as Afghanistan, Madagascar, Mali, Mexico, and Brazil, not only share commonalities in the type of misinformation but also have social media environments that rely heavily on encrypted platforms such as WhatsApp. Further, as Nyhan (Reference Nyhan2021) notes, such an approach would also do well to reduce the uptake of misinformation in the Western world. Indeed, recent data demonstrate that evangelical Christians in the USA are not only more likely to believe in QAnon narratives but also in conspiracies about the 2020 election, vaccines, or the moon landing (O’Donnell Reference O’Donnell2021). Highly religious individuals are also found to perceive more social threat from scientists (Chinn et al. Reference Chinn, Hasell, Roden and Zichettella2023). Across contexts, the least religious appear to be the least credulous. As polarization intensifies around the world, there are lessons to be drawn from these data for developing countries and Western contexts alike.

Additionally, several of our treatments are intentionally bundled. To maximize treatment effectiveness, we combined the group norms treatment with the religious message treatment. As a result, we cannot isolate the independent effect of changing group norms alone. We also cannot isolate the religious and partisan elements of the study: all main treatments (except the standard correction) included a religious message, with one treatment additionally including a partisan component. Future research should employ fully factorial designs to disentangle the separate effects of norms and messaging, as well as the separate effects of religious versus partisan messaging.

Finally, we acknowledge that our design involved respondents witnessing corrections rather than being directly corrected. The encrypted nature of WhatsApp poses logistical and ethical challenges for conducting studies within actual WhatsApp groups. To maximize external validity, we used treatments simulating a WhatsApp conversation to approximate a group chat environment, rather than presenting corrections in isolation. While this approach cannot fully replicate a WhatsApp group chat, it offers insights more relevant to platforms like WhatsApp, which are more widely used in the majority of the world relative to Facebook or Twitter. We encourage future research to enhance the external validity of studying encrypted platforms, a critical need for understanding misinformation in the developing world.

Despite these limitations, we believe our results have important implications. Of practical and policy importance, these findings suggest that public health campaigns that use social identity-based frames and messaging to counter misinformation or increase the uptake of health measures may be particularly effective because they resonate with existing values that citizens may have. Contentious issues surrounding crises like the COVID-19 pandemic, such as vaccine uptake and reliance on scientific information, require the long-term and large-scale engagement of citizens. Messages designed to resonate with social and religious identities hold promise as a means to build belief in accurate news over misinformation.

From the standpoint of understanding behaviour in polarized societies, our results have implications for the formation of and adherence to group norms. We demonstrate that even the most religious respondents are willing to abandon some priors (here, conspiracy theories) when prompted to do so. Such changes do not constitute a fundamental transformation of political or social culture, but they do show that modest interventions, at least in the short term, can have significant effects in changing the public expression of beliefs. At scale, this may decrease the amount and prevalence of misinformation in an informational ecosystem, thereby providing a greater frequency of trustworthy sources accessible to individuals (Allen et al. Reference Allen, Howland, Mobius, Rothschild and Watts2020). Increasing the quality of one’s news diet may then, in turn, have downstream consequences on attitudes and behaviours.

Ultimately, we hope this work can contribute to scholarship on the malleability of political norms (Paluck and Green Reference Paluck and Green2009; Green et al. Reference Green, Groves, Manda, Montano and Rahmani2023) as well as to literature on how trusted elites can shift perceptions of norms, eventually paving the way for behavioural change (Boyer et al. Reference Boyer, Paluck, Annan, Nevatia, Cooper, Namubiru, Heise and Lehrer2022). Norm perception is often shifted by signals from influential community members, especially crucial in our context, where WhatsApp groups are curated by local political elites who gain power within communities (Chadwick, Hall and Vaccari Reference Chadwick, Hall and Vaccari2023). In contexts where the roots of belief formation and expression are tied to religion, these findings provide hope that social identities can be marshalled to counter misinformation and to improve other democratic outcomes more broadly.

Supplementary material

Supplementary material for this article can be found at https://doi.org/10.1017/S0007123425100616.

Data availability statement

Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/GSVYA0.

Acknowledgements

For comments and feedback, we are grateful to Adam Auerbach, Adam Berinsky, Emmerich Davies, Richard Fletcher, Gareth Nellis, Alex Scacco, Milan Vaishnav, Ajay Verghese, and Ashutosh Varshney, and seminar participants at NYU, Brown-Harvard-MIT joint seminar, University of Zurich, Aarhus University and APSA and ICA conferences.

Financial support

Funding for this research comes from Leiden University, the Wharton Risk Center’s Russell Ackoff Fellowship and the UPenn Center for the Advanced Study of India’s Sobti Family Fellowship.

Competing interests

The authors declare no competing interests.

Pre-registration and IRB

This study was pre-registered with OSF, and the analysis plan is available at https://osf.io/kt3vc. We received IRB approval from the University of Pennsylvania.

Footnotes

1 Our aim is not to argue that religion is more significant than partisanship; indeed, research from India highlights that the two are deeply intertwined. Religion often drives political participation, and political parties are frequently organized along religious lines. We instead leverage the salience of religion as a social identity in India to develop treatments aimed at improving misinformation outcomes.

2 The two mechanisms we highlight through which religion may affect misinformation – conformity to ingroup norms and cognitive dissonance – are not exhaustive. Another important mechanism is the role of networks. Similar to the technology mechanism discussed by Habyarimana et al. (Reference Habyarimana, Humphreys, Posner and Weinstein2007), religious individuals may be more embedded in networks where misinformation circulates more readily. This would mean that religious individuals are more likely to be misinformed simply because they have greater access to misinformation. Such a mechanism calls for different interventions than the ones we look at in this study, such as diversifying news sources to reduce misinformation exposure.

3 We intentionally designed this study to have treatments with additive components. While relieving perceived ingroup norms could be effective on its own, we chose to focus on treatments we predicted would be most effective, rather than splitting our power across additional treatments, as a fully factorial design would require. Consequently, we are unable to draw conclusions about the independent effect of changing group norms in isolation.

4 In Appendix O, we summarize deviations from our pre-analysis plan (PAP) and report additional pre-registered analyses that are not included in the main text due to space constraints. These include: (1) our inadvertent omission of the main-effect hypothesis from the PAP; this omission did not require additional analyses, as main effects are estimated in any experimental design; (2) wording changes to some hypotheses for expositional clarity; and (3) results for pre-registered hypotheses whose analyses are reported in full in the appendix rather than the main text.

5 Appendix K describes all the items included in the scale.

6 All treatment stimuli are available in Online Appendix B.

7 In all other experimental groups, the group name is blanked out, under the pretence of anonymity.

8 We deliberately excluded a condition with misinformation but no correction to avoid the adverse effects of not immediately correcting misinformation during a sensitive time. Therefore, in every condition with a misinformation stimulus, respondents simultaneously receive a correction.

9 Our headlines were selected from a list of several stories that we pretested. From these, we selected six headlines for each issue on the basis of pretest data on how widely they were believed. Since Indian respondents report high levels of trust in search engines such as Google and Yahoo (Aneez et al. 2019), we present each story in the form of an actual headline mimicking the style of stories on Google News, with a headline, subheadline, source, and image. At the same time, we block out the source to mimic the context of WhatsApp messaging, where users receive forwarded text messages without a source, brand, or URL, with the text of the news copied into the body of the WhatsApp message.

10 As a robustness test, we also re-analyze our data with a discernment measure (Table 2).

11 As a robustness check, in Appendix K.1, we break down the scale into individual headline components.

12 In Appendix L, we also present results separating out the stories in the outcome, evaluating the effect of the different treatments on each story individually.

13 Appendix N shows that our three religious treatments do not outperform a minimal standard correction, as indicated by the magnitude and significance of all four coefficients in Table N.1, Column (1). However, two of the religious treatments appear effective when the conspiracies do not target Muslims, while the standard correction does not.

14 As we discuss below, this does not, however, imply that Message + Group treatments performed significantly better than the standard correction.

15 To calculate discernment between true and false stories, we compute each respondent’s average rating for true stories and average rating for false stories separately (on a 4-point scale where higher = more accurate). We then standardize each average into a z-score and subtract the z-score for false stories from the z-score for true stories. This measure is the dependent variable in Table 2.
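As an illustration, the discernment computation described in this footnote can be sketched in a few lines of Python. This is a minimal sketch with hypothetical ratings; the function and variable names are ours, not taken from the study’s replication code:

```python
from statistics import mean, pstdev


def discernment(true_ratings, false_ratings):
    """Per-respondent discernment: z-scored mean accuracy rating for
    true stories minus z-scored mean rating for false stories.
    Each argument is a list of per-respondent rating lists (1-4 scale)."""
    true_avg = [mean(r) for r in true_ratings]
    false_avg = [mean(r) for r in false_ratings]

    def zscores(xs):
        # Standardize across respondents (population SD).
        m, s = mean(xs), pstdev(xs)
        return [(x - m) / s for x in xs]

    return [zt - zf for zt, zf in zip(zscores(true_avg), zscores(false_avg))]


# Hypothetical data: respondent 0 rates true stories as accurate and false
# stories as inaccurate, so their discernment score should be positive.
d = discernment([[4, 3], [2, 2], [3, 4]], [[1, 2], [4, 3], [2, 2]])
```

Because both sets of averages are standardized across respondents, the resulting scores are mean-zero by construction; positive values indicate respondents who rate true stories as relatively more accurate than false ones.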

16 While we do not detect heterogeneous effects by religion, religiosity may interact with treatment within specific caste subgroups, particularly among highly religious upper-caste respondents. However, this triple interaction returns insignificant results, likely due to limited statistical power.

17 Importantly, we show in Appendix M (Table M.1, Column 1) that the three religious treatments are themselves not significantly different from each other.

References

Acemoglu, D, Ozdaglar, A and Siderius, J (2021) Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers. Technical report. https://doi.org/10.2139/ssrn.3861413
Allen, J, Howland, B, Mobius, M, Rothschild, D and Watts, DJ (2020) Evaluating the fake news problem at the scale of the information ecosystem. Science Advances 6(14), eaay3539. https://www.science.org/doi/pdf/10.1126/sciadv.aay3539
Aneez, Z, Neyazi, TA, Kalogeropoulos, A and Nielsen, RK (2019) India Digital News Report. Reuters Institute for the Study of Journalism.
Attwell, K and Freeman, M (2015) I Immunise: An evaluation of a values-based campaign to change attitudes and beliefs. Vaccine 33(46), 6235–6240.
Badrinathan, S (2021) Educative Interventions to Combat Misinformation: Evidence From a Field Experiment in India. American Political Science Review 115(4), 1325–1341.
Badrinathan, S and Chauchard, S (2023) ‘I Don’t Think That’s True, Bro!’ Social Corrections of Misinformation in India. The International Journal of Press/Politics 29(2), 394–416.
Badrinathan, S and Chauchard, S (2025) Replication Data for: The Religious Roots of Belief in Misinformation: Experimental Evidence from India. Harvard Dataverse, V1. https://doi.org/10.7910/DVN/GSVYA0
Badrinathan, S, Chauchard, S and Siddiqui, N (2025) Misinformation and support for vigilantism: An experiment in India and Pakistan. American Political Science Review 119(2), 947–965.
Baishya, A (2022) Hate in the Time of the Virus: Covid-19, Fake News, and Islamophobia in India. Social Science Research Council. July 28, 2022. https://items.ssrc.org/covid-19-and-the-social-sciences/covid-19-fieldnotes/hate-in-the-time-of-the-virus-covid-19-fake-news-and-islamophobia-in-india/
Banaji, S and Bhat, R (2019) WhatsApp Vigilantes: An Exploration of Citizen Reception and Construction of WhatsApp Messages Triggering Mob Violence in India. Working Paper.
Becker, J, Brackbill, D and Centola, D (2017) Network dynamics of social influence in the wisdom of crowds. Proceedings of the National Academy of Sciences 114(26), 70–76.
Berinsky, AJ (2017) Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science 47(2), 241–262.
Blair, R, Gottlieb, J, Nyhan, B, Paler, L, Argote, P and Stainfield, C (2023) Interventions to Counter Misinformation: Lessons from the Global North and Applications to the Global South. USAID Report. https://pdf.usaid.gov/pdf_docs/PA0215JW.pdf
Bode, L and Vraga, EK (2018) See something, say something: Correction of global health misinformation on social media. Health Communication 33(9), 1131–1140.
Boyer, C, Paluck, EL, Annan, J, Nevatia, T, Cooper, J, Namubiru, J, Heise, L and Lehrer, R (2022) Religious leaders can motivate men to cede power and reduce intimate partner violence: Experimental evidence from Uganda. Proceedings of the National Academy of Sciences 119(31), e2200262119.
Brass, PR (1997) Theft of an idol: Text and context in the representation of collective violence. Vol. 8. Princeton, NJ: Princeton University Press.
Brass, PR (2005) Language, religion and politics in North India. Lincoln, NE: iUniverse.
Brennen, JS, Simon, FM, Howard, PN and Nielsen, RK (2020) Types, sources, and claims of COVID-19 misinformation. University of Oxford.
Bridgman, A, Merkley, E, Loewen, PJ, Owen, T, Ruths, D, Teichmann, L and Zhilin, O (2020) The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media. Harvard Misinformation Review.
Buccione, G (2023) Religious Messaging and Adaptation to Water Scarcity: Evidence from Jordan. Working Paper.
Bullock, JG, Gerber, AS, Hill, SJ and Huber, GA (2015) Partisan Bias in Factual Beliefs about Politics. Technical Report 4.
Bursztyn, L, Fiorin, S, Gottlieb, D and Kanz, M (2019) Moral incentives in credit card debt repayment: Evidence from a field experiment. Journal of Political Economy 127(4), 1641–1683.
Chadwick, A, Vaccari, C and Hall, N-A (2023) What Explains the Spread of Misinformation in Online Personal Messaging Networks? Exploring the Role of Conflict Avoidance. Digital Journalism 12(5), 574–593.
Chadwick, A, Hall, N-A and Vaccari, C (2023) Misinformation rules!? Could ‘group rules’ reduce misinformation in online personal messaging? New Media & Society 27(1), 106–126.
Chandra, K (2007) Why ethnic parties succeed: Patronage and ethnic head counts in India. Cambridge, UK: Cambridge University Press.
Chauchard, S and Garimella, K (2022) What Circulates on Partisan WhatsApp in India? Insights from an Unusual Dataset. Journal of Quantitative Description: Digital Media. https://bit.ly/3od12uC
Chhibber, P, Jensenius, FR and Suryanarayan, P (2014) Party organization and party proliferation in India. Party Politics 20(4), 489–505.
Chhibber, P and Verma, R (2018) Ideology and Identity: The Changing Party Systems of India. New York: Oxford University Press.
Chinn, S, Hasell, A, Roden, J and Zichettella, B (2023) Threatening experts: Correlates of viewing scientists as a social threat. Public Understanding of Science 33(1), 88–104.
Clayton, K, Blair, S, Busam, JA, Forstner, S, Glance, J, Green, G, Kawata, A, Kovvuri, A, Martin, J, Morgan, E et al. (2019) Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media. Political Behavior 42, 1073–1095.
Coppock, A (2023) Persuasion in parallel: How information changes minds about politics. Chicago, IL: University of Chicago Press.
Crandall, CS and Eshleman, A (2003) A justification-suppression model of the expression and experience of prejudice. Psychological Bulletin 129(3), 414.
DeMora, SL, Merolla, JL, Newman, B and Zechmeister, EJ (2024) Jesus was a Refugee: Religious Values Framing can Increase Support for Refugees Among White Evangelical Republicans. Political Behavior 46(4), 2145–2168.
Dinas, E, Martinez, S and Valentim, V (2023) Social Norm Change, Political Symbols, and Expression of Stigmatized Preferences. Journal of Politics 86(2), 488–506.
Druckman, JN and Nelson, KR (2003) Framing and deliberation: How citizens’ conversations limit elite influence. American Journal of Political Science 47(4), 729–745.
Ecker, UKH, Lewandowsky, S, Cook, J, Schmid, P, Fazio, LK, Brashier, N, Kendeou, P, Vraga, EK and Amazeen, MA (2022) The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology 1(1), 13–29.
Flynn, DJ, Nyhan, B and Reifler, J (2017) The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38, 127–150.
Gil de Zúñiga, H, Ardèvol-Abreu, A and Casero-Ripollés, A (2019) WhatsApp political discussion, conventional participation and activism: Exploring direct, indirect and generational effects. Information, Communication & Society 24(2), 201–218.
Goldberg, MH, Gustafson, A, Ballew, MT, Rosenthal, SA and Leiserowitz, A (2019) A social identity approach to engaging Christians in the issue of climate change. Science Communication 41(4), 442–463.
Green, DP, Groves, DW, Manda, C, Montano, B and Rahmani, B (2023) A radio drama’s effects on attitudes toward early and forced marriage: Results from a field experiment in Rural Tanzania. Comparative Political Studies 56(8), 1115–1155.
Grzymala-Busse, AM (2015) Nations under God: How churches use moral authority to influence policy. Princeton, NJ: Princeton University Press.
Guess, AM, Lerner, M, Lyons, B, Montgomery, JM, Nyhan, B, Reifler, J and Sircar, N (2020) A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences 117(27), 15536–15545.
Habyarimana, J, Humphreys, M, Posner, DN and Weinstein, JM (2007) Why does ethnic diversity undermine public goods provision? American Political Science Review 101(4), 709–725.
Haidt, J (2012) The righteous mind: Why good people are divided by politics and religion. New York, NY: Vintage.
Hameleers, M (2020) Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Information, Communication & Society 37(2), 1–17.
Heath, O (2005) Party systems, political cleavages and electoral volatility in India: A state-wise analysis, 1998–1999. Electoral Studies 24(2), 177–199.
Heiss, R, Nanz, A, Knupfer, H, Engel, E and Matthes, J (2023) Peer correction of misinformation on social media: (In)civility, success experience and relationship consequences. New Media & Society 27(4), 2293–2312.
Jaffrelot, C (2021) Modi’s India: Hindu nationalism and the rise of ethnic democracy. Princeton, NJ: Princeton University Press.
Jerit, J (2008) Issue framing and engagement: Rhetorical strategy in public policy debates. Political Behavior 30, 1–24.
Jha, S (2013) Trade, institutions, and ethnic tolerance: Evidence from South Asia. American Political Science Review 107(4), 806–832.
Kahan, DM, Peters, E, Dawson, EC and Slovic, P (2017) Motivated Numeracy and Enlightened Self-Government. Behavioural Public Policy 1(1), 54–86.
Kahneman, D and Tversky, A (1984) Choices, values, and frames. American Psychologist 39(4), 341.
Kalogeropoulos, A and Rossini, P (2023) Unraveling WhatsApp group dynamics to understand the threat of misinformation in messaging apps. New Media & Society 27(3), 1625–1650.
Kitschelt, H and Wilkinson, SI (2007) Patrons, Clients and Policies: Patterns of Democratic Accountability and Political Competition. Cambridge: Cambridge University Press.
Klepper, D and Pathi, K (2024) What’s Wrong With WhatsApp. As India votes, misinformation surges on social media. AP News. May 2, 2024. https://apnews.com/article/india-election-misinformation-meta-youtube-703a56c73f9341393f05400ea218b87d
Kligler-Vilenchik, N (2022) Collective social correction: Addressing misinformation through group practices of information verification on WhatsApp. Digital Journalism 10(2), 300–318.
Larsen, B, Hetherington, MJ, Greene, SH, Ryan, TJ, Maxwell, RD and Tadelis, S (2023) Counter-stereotypical messaging and partisan cues: Moving the needle on vaccines in a polarized US. Science Advances 9(29), eadg9434.
Malhotra, P (2023) Misinformation in WhatsApp Family Groups: Generational Perceptions and Correction Considerations in a Meso-News Space. Digital Journalism 12(5), 594–612.
McClendon, GH and Riedl, RB (2019) From pews to politics: Religious sermons and political participation in Africa. Cambridge, UK: Cambridge University Press.
Mishra, M (2021) Indian doctors question plan to hand out guru’s COVID-19 remedy. Reuters. May 26, 2021.
Motta, M, Stecula, D and Farhart, C (2020) How right-leaning media coverage of COVID-19 facilitated the spread of misinformation in the early stages of the pandemic in the US. Canadian Journal of Political Science 53(2), 335–342.
Mullinix, KJ (2018) Civic duty and political preference formation. Political Research Quarterly 71(1), 199–214.
Nellis, G (2023) Election cycles and global religious intolerance. Proceedings of the National Academy of Sciences 120(1), e2213198120.
Nyhan, B (2021) Why the backfire effect does not explain the durability of political misperceptions. Proceedings of the National Academy of Sciences 118(15), e1912440117.
O’Donnell, T (2021) Americans who attend church frequently are more likely to view QAnon favorably, poll finds. The Week. July 27, 2021.
Paluck, EL and Green, DP (2009) Deference, dissent, and dispute resolution: An experimental intervention using mass media to change norms and behavior in Rwanda. American Political Science Review 103(4), 622–644.
Pennycook, G and Rand, DG (2019) Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50.
Pepinsky, TB, Liddle, RW and Mujani, S (2018) Piety and Public Opinion: Understanding Indonesian Islam. Oxford, UK: Oxford University Press.
Perrigo, B (2019) How Volunteers for India’s Ruling Party Are Using WhatsApp to Fuel Fake News Ahead of Elections. TIME. January 25, 2019. https://time.com/5512032/whatsapp-india-election-2019/
Porter, E and Wood, TJ (2019) False Alarm: The Truth About Political Mistruths in the Trump Era. Cambridge, UK: Cambridge University Press.
Porter, E and Wood, TJ (2021) The global effectiveness of fact-checking: Evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proceedings of the National Academy of Sciences 118(37), e2104235118.
Prior, M, Sood, G and Khanna, K (2015) You cannot be serious: The impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science 10(4), 489–518.
Putnam, RD (2000) Bowling alone: The collapse and revival of American community. New York, NY: Simon and Schuster.
Roozenbeek, J and van der Linden, S (2019) Fake news game confers psychological resistance against online misinformation. Palgrave Communications 5(1), 1–10.
Sachdev, C (2017) Fake Medical News Can Now Be Fact-Checked In India. NPR. May 12, 2017.
Saha, P, Mathew, B, Garimella, K and Mukherjee, A (2021) ‘Short is the Road that Leads from Fear to Hate’: Fear Speech in Indian WhatsApp Groups. In Proceedings of the Web Conference 2021, pp. 1110–1121.
Siddiqui, D (2020) Hindu group offers cow urine in a bid to ward off coronavirus. Reuters. March 14, 2020.
Singh, SS (2019) How to win an Indian election: What political parties don’t want you to know. Gurgaon: Penguin Random House.
Sircar, N (2022) Religion-as-ethnicity and the emerging Hindu vote in India. Studies in Indian Politics 10(1), 79–92.
Smith, AE (2019) Religion and Brazilian democracy: Mobilizing the people of God. Cambridge, UK: Cambridge University Press.
Taber, CS and Lodge, M (2006) Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science 50(3), 755–769.
Valeriani, A and Vaccari, C (2018) Political talk on mobile instant messaging services: A comparative analysis of Germany, Italy, and the UK. Information, Communication & Society 21(11), 1715–1731.
Verba, S, Schlozman, KL and Brady, HE (1995) Voice and equality: Civic voluntarism in American politics. Cambridge, MA: Harvard University Press.
Verghese, A (2020) Taking other religions seriously: A comparative survey of Hindus in India. Politics and Religion 13(3), 604–638.
Vijaykumar, S, Rogerson, DT, Jin, Y and de Oliveira Costa, MS (2022) Dynamics of social corrections to peers sharing COVID-19 misinformation on WhatsApp in Brazil. Journal of the American Medical Informatics Association 29(1), 33–42.
Vraga, EK and Bode, L (2017) Using expert sources to correct health misinformation in social media. Science Communication 39(5), 621–645.
Vraga, EK, Bode, L and Tully, M (2020) Creating News Literacy Messages to Enhance Expert Corrections of Misinformation on Twitter. Communication Research 49(2), 245–267.
Wilkinson, S (2006) Votes and violence: Electoral competition and ethnic riots in India. Cambridge, UK: Cambridge University Press.
Wittenberg, C and Berinsky, AJ (2020) Misinformation and its correction. In Social Media and Democracy: The State of the Field, Prospects for Reform, 163.
Yasir, S (2020) India Is Scapegoating Muslims for the Spread of the Coronavirus. Foreign Policy. April 22, 2020. https://foreignpolicy.com/2020/04/22/india-muslims-coronavirus-scapegoat-modi-hindu-nationalism/
Figure 1. Experimental Flow.

Figure 2. Belief in Misinformation in our Sample.

Figure 3. Belief in Misinformation By Religiosity.

Table 1. Main Effect of Treatments (Count DV)

Table 2. Main Effect of Treatments (Discernment DV)

Table 3. Main Effects Relative to the Standard Correction

Supplementary material

Chauchard and Badrinathan supplementary material (File, 2.6 MB)

Chauchard and Badrinathan Dataset (Link)