Introduction
Canonical works in political science recognize the role of religion as a prominent political force in society (Putnam Reference Putnam2000; Verba, Schlozman and Brady Reference Verba, Schlozman and Brady1995). Scholars point to religion’s influence on public policy (Grzymala-Busse Reference Grzymala-Busse2015), public opinion (Pepinsky, Liddle and Mujani Reference Pepinsky, Liddle and Mujani2018), and social cohesion (Nellis Reference Nellis2023), underscoring its potential to shape beliefs, identity, and behaviour. Simultaneously, the last decade has seen a proliferation of scholarly work on why people believe misinformation and how to counter it (Wittenberg and Berinsky Reference Wittenberg and Berinsky2020; Ecker et al. Reference Ecker, Lewandowsky, Cook, Schmid, Fazio, Brashier, Kendeou, Vraga and Amazeen2022). However, work linking these two strands of research remains scarce. To explain the prevalence of misperceptions, misinformation scholars have frequently highlighted the pivotal role of partisan motivated reasoning (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017). Yet, in much of the world outside Western democracies, religion and ethnicity significantly shape beliefs and preferences, with religious divisions influencing electoral outcomes, political participation, and other behaviours (Sircar Reference Sircar2022; McClendon and Riedl Reference McClendon and Riedl2019; Smith Reference Smith2019). Religion, both independently of partisanship and as a potential driver of it, may therefore also influence belief in misinformation.
How, if at all, does religion shape the endorsement of misinformation? We define religion as (1) adherence to a set of moral principles and (2) membership in a religiously defined identity category, and argue that religion may be connected to belief in misinformation for at least two key reasons. Adherence to longstanding religious moral principles may influence which beliefs are endorsed, while pressures to conform to religious group identities might drive the acceptance or rejection of misinformation. Building on this definition, we explore both descriptive and causal questions in this study. First, are religious beliefs and identities descriptively associated with the endorsement of misinformation? Given the scarcity of empirical evidence on the intersection of misinformation and religion, particularly outside Western contexts, establishing the existence of such a relationship is crucial. Second, if religion does influence misinformation endorsement, what mechanisms underlie this effect, and can these processes be harnessed to reduce vulnerability to misinformation? We answer these questions in the context of India, a country where religion has long been the basis for political mobilization and the formation of political parties (Chhibber and Verma Reference Chhibber and Verma2018; Brass Reference Brass2005). More recently, religious cleavages have resulted in riots as well as vigilante violence in the country, often fueled by misperceptions and rumours (Wilkinson Reference Wilkinson2006; Banaji and Bhat Reference Banaji and Bhat2019; Badrinathan, Chauchard and Siddiqui Reference Badrinathan, Chauchard and Siddiqui2025).
We rely on a combination of original descriptive data and experimental evidence, focusing on the COVID-19 pandemic, which saw a proliferation of medical misinformation and conspiracy theories (Motta, Stecula and Farhart Reference Motta, Stecula and Farhart2020; Brennen et al. Reference Brennen, Simon, Howard and Nielsen2020), alongside a catastrophic number of deaths in India. To answer our descriptive question, we employ a scale of Hindu religiosity with items measuring religious beliefs, practices, and norms, drawing on work by Verghese (Reference Verghese2020). We then show that belief in misinformation in India is strongly correlated with religiosity: those with higher levels of religiosity appear significantly more vulnerable to misinformation. Further, our evidence also suggests that the identity dimension of religion may be related to the endorsement of misinformation: in our sample, respondents who are more vulnerable to misinformation are also more likely to display affective polarization towards the religious outgroup.
Next, to understand the causal relationship between belief in misinformation and religion, we field an experiment. Building on our definition of religion as adherence to a set of principles and membership in an identity category, we explore how messaging emphasizing religious principles and religious ingroup norms affects endorsement of misinformation. We recruit a sample of Indian adults representative of the online population, thereby targeting those most often exposed to misinformation in the country (N=1600). Respondents are shown WhatsApp conversations with a misinformation stimulus, and in treatment conditions, a social correction to that misinformation by another user. We manipulate the content of this social correction, and in some treatments, additionally manipulate its source. In all treatment conditions, we test whether framing misinformation as morally problematic from a religious standpoint helps dispel falsehoods. To do so, we use original verses from ancient Hindu religious scriptures to back up corrections – these texts emphasize the importance of morality and truth. In a subset of treatment conditions, we additionally manipulate the religious identity of the group chat to signal a religious ingroup and test whether religious ingroup disapproval of misinformation further helps reduce its endorsement. We measure the effect of these treatments on two types of popular falsehoods that circulated in India during and after the pandemic: conspiracy theories and medical misinformation.
Our results show that religiously-framed corrections are successful at shifting misinformed beliefs, in some cases outperforming standard corrections. But we also find that the efficacy of religious frames varies by type of misinformation. With regard to conspiracy theories, all religiously-framed treatments were successful at correcting misinformation, compared to a placebo control condition. Importantly, we show improvements in respondents’ ability to detect misinformation beyond the specific misinformation stimulus used in our treatments. Respondents are able to take cues from the treatment and accurately identify additional falsehoods. Next, we compare these treatments to a standard correction to evaluate whether corrective effects are due to the religious components of the treatment or simply to any corrective information. When compared to a standard social correction, including a religiously-framed moral message increases the effectiveness of corrections. Further, we demonstrate that only religious corrections significantly reduce endorsement of additional falsehoods beyond the corrected story. By contrast, for medical misinformation, a religiously-framed moral message alone fails to reduce endorsement of misinformation. However, combining it with a manipulation of group identity – and thus perceived group norms – does produce an effect (though this effect does not significantly improve upon a standard correction).
These findings have a number of implications for scholarship and policy. Most importantly, they confirm the argument that religious principles and identities drive the endorsement of misinformation. They also highlight the persistent nature of more deeply rooted misinformed beliefs. Recently viral (and thereby perhaps more salient) misinformation – such as conspiracy theories specifically about the pandemic, in this context – might be easier to correct: we find that more treatments are able to effectively attenuate these beliefs, even beyond a standard correction. However, deep-rooted beliefs which have existed since before COVID-19, such as medical misinformation relying on traditional belief systems, might be harder to dislodge, including when corrections invoke religion. Our experiment also builds on previous work on social corrections (Bode and Vraga Reference Bode and Vraga2018; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023), and suggests that further attention to the role of religion and the mechanisms through which it operates in polarized systems is warranted in the misinformation literature. Our findings provide hope that both traditional belief systems and social identities can be marshalled to reduce vulnerability to misinformation.
Theoretical Expectations
Across cultures, religion fosters moral communities, shared values, and social connection. However, scholars of the psychology of religion have long argued that the cohesion and trust within religious communities may come at the cost of rationality (Haidt Reference Haidt2012). This group embeddedness can amplify the endorsement of false beliefs and flawed reasoning, suggesting that religiously motivated reasoning may drive misinformation belief, particularly among the highly religious. This study examines this premise in the context of India – a critical case given its population, comprising one in five people globally and nearly half of those in developing countries.
The Indian Context
Indian politics has long been dominated by a fundamental cleavage between Hindus and Muslims, and religion has been a central social identity: it is the basis of political mobilization, nationalism, and the formation of religiously motivated political parties (Brass Reference Brass2005). In 2021, a Pew Research Center survey found that Hindus tend to link their religious identity to national identity: 81 per cent of Hindus said it was important to be Hindu to be truly Indian, while a significantly smaller proportion of respondents from other religious groups felt the same. More generally, religious divides in India have historically determined not only electoral results (Chandra Reference Chandra2007; Sircar Reference Sircar2022) but also patterns of violence and support for violence (Wilkinson Reference Wilkinson2006; Jha Reference Jha2013; Badrinathan, Chauchard and Siddiqui Reference Badrinathan, Chauchard and Siddiqui2025).
Key to understanding the prominence of religion as an identity in modern India is the Bharatiya Janata Party (BJP), which epitomizes the importance of religion, and specifically Hinduism, in popular discourse. The party frequently employs puritanical rhetoric and moral appeals (Jaffrelot Reference Jaffrelot2021), leveraging Hindu symbols and figures for political gains, resulting in narratives that sometimes rely on misinformation. Since 2014, some political leaders in India have promoted pseudoscientific remedies such as homeopathy and ayurveda, often citing their roots in traditional Hindu practices. For example, in March 2020, a Hindu religious group – supported by a member of parliament – organized a 200-person event advocating cow urine as a COVID-19 cure (Siddiqui Reference Siddiqui2020). Separately, efforts to define a national identity rooted in majoritarian values have at times been associated with conspiratorial misinformation targeting minorities, particularly Muslims (Jaffrelot Reference Jaffrelot2021). During the COVID-19 crisis, sources aligned with the ruling establishment were reported to have circulated narratives blaming minority groups for the spread of the virus (Yasir Reference Yasir2020). This misinformation is harmful: belief in miracle cures can lead to ignoring public health measures like social distancing (Bridgman et al. Reference Bridgman, Merkley, Loewen, Owen, Ruths, Teichmann and Zhilin2020), while scapegoating minorities exacerbates polarization and violence (Banaji and Bhat Reference Banaji and Bhat2019). These examples highlight how conspiracy theories and medical misinformation often invoke religious beliefs and identities, both directly and indirectly.
Because much of this misinformation circulates on encrypted platforms like WhatsApp, where the source of a message cannot be traced, its suppliers and creators often remain unidentified. However, evidence suggests that right-wing political and religious figures in India play a central role in making misinformation more salient (Perrigo Reference Perrigo2019; Singh Reference Singh2019). While the intentions behind spreading such content are hard to determine, landmark studies on rumours in South Asia (Brass Reference Brass1997; Wilkinson Reference Wilkinson2006) suggest that anti-minority claims are often disseminated intentionally to either entrench religious divides through threats of violence or deepen Hindu sentiment by framing India – a diverse and constitutionally secular nation – as primarily a Hindu country (Baishya Reference Baishya2022). Observers note that misinformation spikes around elections (Klepper and Pathi Reference Klepper and Pathi2024), with ‘ethnic entrepreneurs’ often using religion to spread unverified rumours that fuel violence for electoral gain (Wilkinson Reference Wilkinson2006; Sircar Reference Sircar2022). Social media users who believe or share such stories are likely motivated by alignment with their religious beliefs or perceptions of majority norms (Davies Reference Davies2020).
In sum, both India’s longstanding religious divides and current religious nationalist fervour underscore the possibility of a fundamental association between religion and misinformation in India (Mishra Reference Mishra2021). However, empirical scholarship to date has yet to test whether such an association exists.Footnote 1 A well-established finding in the literature on American political behaviour is that motivated reasoning affects how individuals process information (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017). With misinformation in particular, scholars underscore the importance of partisanship as the basis for motivated reasoning: even when misinformation is corrected, we are more likely to believe it if it aligns with our partisan priors. Evidence on the role of partisanship as a pivotal identity in India, however, is mixed. India’s party system is not historically viewed as ideologically structured: parties are not institutionalized (Chhibber, Jensenius and Suryanarayan Reference Chhibber, Jensenius and Suryanarayan2014), elections are highly volatile (Heath Reference Heath2005), and the party system itself is not ideological (Chandra Reference Chandra2007; Kitschelt and Wilkinson Reference Kitschelt and Wilkinson2007), at least not in a traditional sense (Chhibber and Verma Reference Chhibber and Verma2018). The recent nature of the BJP’s appeals, combined with the historical importance of religion in India, gives credence to the idea that it is not only partisanship, but perhaps also religion, that might drive belief in misinformation.
Given this intuition and findings from previous literature about the role of religiosity in promoting belief in non-rational explanations (Haidt Reference Haidt2012), our descriptive hypothesis predicts that individuals who are highly religious are more likely to endorse misinformation (Hypothesis 1).
Mechanisms of Belief in Misinformation
To determine the causal pathways through which religion might impact belief in misinformation, we field an experiment. Since we cannot manipulate religious identity or belief directly, we instead test whether messages drawing on explicitly religious principles or originating from religious ingroups affect misinformation endorsement. We do this in the context of a correction experiment by manipulating whether corrections to misinformation draw on religious messages or refer to religious identities. This allows us to test whether different types of religious frames can discourage belief in misinformation and thereby shed light on the religion-misinformation causal link.
In doing so, we build on a large literature on corrective interventions to combat misinformation. In Western contexts where misinformation spreads on public social media such as Facebook, solutions include providing fact-checks and labeling misinformation as false (Porter and Wood Reference Porter and Wood2021; Clayton et al. Reference Clayton, Blair, Busam, Forstner, Glance, Green, Kawata, Kovvuri, Martin and Morgan2019), inoculating users (Hameleers Reference Hameleers2020; Roozenbeek and van der Linden Reference Roozenbeek and van der Linden2019), and priming the concept of accuracy (Pennycook and Rand Reference Pennycook and Rand2019). However, in India, as in much of the developing world, information is largely spread through encrypted platforms such as WhatsApp (Gil de Zúñiga, Ardèvol-Abreu and Casero-Ripollés Reference Gil de Zúñiga, Ardèvol-Abreu and Casero-Ripollés2019; Valeriani and Vaccari Reference Valeriani and Vaccari2018). Consequently, platform-based interventions such as adding a false label are not applicable, and solutions to correct misinformation online must necessarily stem from users correcting each other (Vraga, Bode and Tully Reference Vraga, Bode and Tully2020; Bode and Vraga Reference Bode and Vraga2018; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023). Accordingly, we focus on social corrections in this study and build on a small but growing literature highlighting the role of peers correcting each other in online settings (Heiss et al. Reference Heiss, Nanz, Knupfer, Engel and Matthes2023; Vijaykumar et al. Reference Vijaykumar, Rogerson, Jin and de Oliveira Costa2022; Kligler-Vilenchik Reference Kligler-Vilenchik2022).
Group identities, particularly those based on religion, are strong social cleavages in India, and the online environment of WhatsApp may intensify these divides. Users often join private group chats centred on political, religious, or social causes (Chauchard and Garimella Reference Chauchard and Kiran Garimella2022), and such groups are frequently divided along religious lines (Saha et al. Reference Saha, Mathew, Garimella and Mukherjee2021). The insular nature of these private chats can increase vulnerability to misinformation (Kalogeropoulos and Rossini Reference Kalogeropoulos and Rossini2023): WhatsApp’s intimacy fosters a sense of solidarity, making misinformation more likely to be trusted (Davies Reference Davies2020). Indeed, research shows that homophily in networks correlates with increased belief in misinformation (Acemoglu, Ozdaglar and Siderius Reference Acemoglu, Ozdaglar and Siderius2021). Our interview data underscores these intuitions. One respondent explained why she believed a piece of medical misinformation on a WhatsApp group, emphasizing the role of religion in information processing:
It is the right thing to do. Our Hindu religion teaches us that it is the right thing to do – and this is what it truly means for me to be a part of Hindu history and culture, and to pass it down to my children.
Other respondents highlighted group identity and ingroup norms as drivers of information sharing. One participant noted:
Sometimes even if I’m not sure if something is true or not, I don’t want to be the only person not sharing something on the group. So I find any message I think will be popular, I forward it to the [Hindu religious] group. Then if many people like it, I come to know it is true.
These examples show that adherence to religious principles can both drive the endorsement of misinformation and justify such beliefs. Additionally, conformity to religious ingroup norms can intensify pressures to share and endorse information. We conclude that if we are able to challenge these notions – that religion requires adhering to a fixed set of beliefs or that being a ‘good’ member of a religious ingroup entails endorsing certain ideas – we could help reduce the endorsement of misinformation.
With this reflection in mind, we design corrections that are meant to appeal to the same psychological traits that make people vulnerable to falsehoods to begin with (Nyhan Reference Nyhan2021). While recent evidence suggests that all types of information can persuade and motivated reasoning can often be overcome (Coppock Reference Coppock2023), we argue that value-based and identity-congruent treatments may be particularly effective in our context due to key differences from previously studied settings. Much of the prior research on this topic, including Coppock (Reference Coppock2023), comes from Western settings, where corrections rarely backfire (Porter and Wood Reference Porter and Wood2019). However, limited evidence from India suggests that intensive treatments may fail to drive meaningful change or could worsen outcomes for individuals with strong social identities (Badrinathan Reference Badrinathan2021). This suggests that not all types of information may be equally persuasive in our context. Attwell and Freeman (Reference Attwell and Freeman2015) show, for example, that value-based treatments are more effective in Australia, aligning with other studies that highlight the impact of identity-congruent correction sources (Berinsky Reference Berinsky2017). Beyond misinformation, similar effects have been observed in other domains, such as religious appeals to promote conservation efforts in Jordan (Buccione Reference Buccione2023), religious appeals in Indonesia to improve debt repayment (Bursztyn et al. Reference Bursztyn, Fiorin, Gottlieb and Kanz2019), and even in the US context, where religious appeals increase support for refugees among the most religious (DeMora et al. Reference DeMora, Merolla, Newman and Zechmeister2024). These studies highlight the potential power of interventions rooted in morality, shared values and identity.
We first argue that religion may influence the endorsement of falsehoods because such misinformation can align with longstanding religious beliefs or principles, making its endorsement have moral value. In other words, religious individuals might accept misinformation to avoid cognitive dissonance (Taber and Lodge Reference Taber and Lodge2006). Building on this idea, all our corrective treatments aim to reduce respondents’ dissonance and the perceived moral pressure to embrace misinformation. In addition to morality, we also consider religion as an identity and the role of perceived ingroup preferences. Simply addressing cognitive dissonance may not be sufficient if individuals feel compelled to endorse a piece of information because others in their ingroup do. Indeed, expressing misinformed beliefs may be driven by perceived group norms: individuals may endorse misinformation because they believe others do, and fear of social alienation can increase pressure to conform (Kahan et al. Reference Kahan, Peters, Dawson and Slovic2017). WhatsApp group chats, often organized around social and political causes (Davies Reference Davies2020), can amplify these pressures by fostering unwritten norms that encourage conformity (Chadwick, Vaccari and Hall Reference Chadwick, Vaccari and Hall2023; Kalogeropoulos and Rossini Reference Kalogeropoulos and Rossini2023). For example, research shows that prejudices and hateful rhetoric are typically constrained by values and norms, but are expressed when the situation allows for justification (Crandall and Eshleman Reference Crandall and Eshleman2003). Thus, altering perceived group norms around a belief may reduce its endorsement. This aligns with recent calls from misinformation scholars to focus on changing norms as a strategy for building healthier online communities (Blair et al. Reference Blair, Gottlieb, Nyhan, Paler, Argote and Stainfield2023).Footnote 2
We thus posit that social corrections using religious content to alleviate cognitive dissonance will reduce misinformation endorsement relative to a control condition (Hypothesis 2a). As noted above, we hypothesize these effects because our religious treatment not only primes religious membership but also explicitly encourages moral behaviour. Additionally, we hypothesize that social corrections combining religious content with manipulations of perceived group norms will be effective in reducing misinformation endorsement compared to a control condition (Hypothesis 2b).Footnote 3 We also hypothesize that the effectiveness of religious corrections is a function of the strength of an individual’s religiosity. Specifically, highly religious respondents will be more likely to engage with and be influenced by a religious frame, so we expect the efficacy of corrections to increase with higher religiosity (Hypothesis 3). Additionally, we explore one pre-registered research question: to benchmark the effectiveness of religiously-framed corrections, we compare them to a standard social correction without a religious frame (RQ 1). This comparison helps us assess the relative efficacy of different correction types, not just in comparison to a control group.Footnote 4
Method and Design
To test these hypotheses, we collected original survey data in India (N = 1600) after the second wave of the COVID-19 pandemic in 2021. The first goal of our survey was to field an extensive module of attitudes and perceptions to descriptively evaluate the correlation between religious beliefs and misinformation. Key in our descriptive measures is an index of Hindu religiosity. We build on Verghese (Reference Verghese2020) in conceptualizing Hinduism as practice-centred, and consequently operationalize religiosity as a function of rites and rituals, including features of everyday life such as attire, food habits and adherence to norms. To measure religiosity, we constructed a scale of eight items with questions that measure the practice of the Hindu religion on a quotidian basis, including frequency of prayer, the need to consult an astrologist before fixing a wedding date, frequency of religious fasting, and others.Footnote 5 Next, our survey included a pre-registered experiment. In our experiment, respondents were randomly assigned to one of five conditions in a between-subjects design (see Figure 1), of which four were treatment conditions and the fifth was a placebo control condition.

Figure 1. Experimental Flow.
Treatment Conditions
In all conditions, respondents read fictional but realistic screenshots of conversations on WhatsApp. The screenshots displayed a conversation between two users in a private WhatsApp chat group. In all treatment conditions (the first four conditions in Figure 1), the first user posts a piece of misinformation. In response, the second user uses a variety of correction strategies corresponding to our different treatment groups. In the Religious Message treatment, the social correction of the second user relies on a religious frame. To craft this message, we found real quotes from ancient Hindu religious scriptures that discuss either the truth as an important virtue or the imperative not to slander. The user in the conversation who corrects misinformation posts a verse from these Hindu religious scriptures (the Bhagavad Gita or the Mahabharata) alongside Hindu religious iconography, which together exhort people to consider the truth.Footnote 6
This technique builds on prior work on the importance of issue framing, shown to be successful in using religious frames to shape responses to climate change and other polarizing issues (Goldberg et al. Reference Goldberg, Gustafson, Ballew, Rosenthal and Leiserowitz2019). It also builds on work emphasizing that unlikely sources are more effective, as when Democrats contradict Democrats or when Republicans endorse vaccines (Larsen et al. Reference Larsen, Hetherington, Greene, Ryan, Maxwell and Tadelis2023; Porter and Wood Reference Porter and Wood2019). False messages about miracle cures in India often exhort readers to believe in homespun remedies since they uphold sacred truths from religious scriptures (Sachdev Reference Sachdev2017). In our treatment, we leverage this frequent recourse to religion by demonstrating that religious sources themselves may emphasize restraint from slander and value the truth.
Next, our Message + Religious Group and Message + Partisan Group treatments test whether additionally relieving perceived pressures to conform to the ingroup can attenuate endorsement of misinformation. To manipulate ingroup membership, these WhatsApp groups signal the purpose and identity of the group: the name of the group chat is revealed so as to prime membership in an explicitly religious (Hindu) group or in a religious-partisan group (the BJP).Footnote 7 Concretely, these treatments involve a correction to misinformation, with the correcting user emphasizing the importance of verifying questionable information before posting. Importantly, the corrective treatment is additive: we build on the Religious Message by incorporating both the group norm and group name aspects in the treatment. These treatments aim to measure whether religious messages alone can correct misinformation or whether manipulating ingroup norms is also necessary. They contribute to a growing body of research demonstrating that structured communication networks can significantly promote social learning, reducing partisan biases on contentious political issues (Becker, Brackbill and Centola Reference Becker, Brackbill and Centola2017; Vraga and Bode Reference Vraga and Bode2017). To address potential validity concerns, we recognize that Hindu ingroups in present-day India may often overlap with partisan (BJP) groups, and thus test the treatment with both identity labels.
All of these treatments thus include a moral message about religion, with some also incorporating cues about group membership. This design reflects the idea that, unlike other identity categories in India such as ethnicity or caste, religion’s distinctiveness may lie in its moral dimension alongside its shared group membership aspect.
To test our hypotheses, we compare the effect of these treatments to both a standard correction and a placebo control. Our Standard Correction treatment provides a social correction without religious content or attempts to shift group norms. In this treatment, the correction is simple and direct: the second user states that the first user’s claim is incorrect. This condition helps isolate whether the observed corrective effects are due to religious messaging or merely exposure to any social correction. We also compare these conditions to a placebo control, where respondents read a WhatsApp conversation on a neutral topic like wildlife or sports, with no misinformation.Footnote 8
We repeat this experimental flow for two issue blocks: (1) conspiracy theories and (2) medical misinformation. We randomize both the block order and the statement order within each block. Thus, respondents see two successive conversations on WhatsApp, each followed by outcome measures pertaining to the topic of the conversation. They remain in the same randomized condition throughout the experiment. All treatment stimuli are available in Online Appendix B. We underscore here that our primary objective in this study is to influence (reduce) the expression of misinformed beliefs. Research shows the prevalence of expressive responding in surveys (Bullock et al. Reference Bullock, Gerber, Hill and Huber2015; Prior, Sood and Khanna Reference Prior, Sood and Khanna2015). Our treatments do not aim to teach citizens how to distinguish true from false; instead, they aim to shift thinking and norms around belief expression, thereby reducing misinformation endorsement.
Outcomes
We measure the effect of these treatments on the perceived accuracy of two sets of headlines: conspiracy theories and medical misinformation. Importantly, the headlines in our outcome measure include the specific piece of misinformation corrected in the treatment, as well as three additional misinformation headlines, along with true headlines. Thus, we are able to measure whether the treatment reduced belief in false headlines beyond the specific story corrected.Footnote 9
Relying on these data, our main outcome of interest, in line with our PAP as well as previous research in this context (Badrinathan Reference Badrinathan2021; Badrinathan and Chauchard Reference Badrinathan and Chauchard2023), is a count of respondents’ ability to correctly identify true and false stories.Footnote 10 Importantly, because we measure respondents’ endorsement of the claim that was discussed in the treatment, as well as their endorsement of other claims, we are additionally able to evaluate whether each correction’s effect extends beyond the specific story corrected in the treatment. The list of headlines that comprise this measure, as well as the rationale for their selection, is available in Appendix C and as part of Figure 2 below.

Figure 2. Belief in Misinformation in our Sample.
Sample Characteristics
We recruited 1,600 adult respondents in India through an online panel maintained by the polling firm Internet Research Bureau (IRB). Respondents were selected to be as representative as possible of the Indian adult population by age, gender and region. As with most online panels in India, while our sample is not representative of the entire Indian population, it is representative of the subset that has Internet access, which is skewed towards educated, wealthy, pro-BJP and upper-caste male respondents. These online respondents are also those most likely to be targeted by political or other disinformation campaigns spread on the Internet, as they are the population often recruited into WhatsApp groups (Chauchard and Garimella Reference Chauchard and Kiran Garimella2022). Thus, the online Indian population is an ideal target to test our hypotheses. Finally, because of medical and ethical concerns during the pandemic, we determined that the safest way to run such a study would be online as opposed to in-person, so as not to put any potential survey enumerators in harm’s way. Key demographics of the sample are in Appendix D. Note that we deliberately limit our sample to Hindu respondents, to match the ‘Hindu’ nature of our corrections. While parallel conditions adapted to other religions are possible, we focus here on the majority group in India to maximize the availability of a large sample.
Results
We first discuss descriptive findings on the prevalence of misinformation in our sample, and crucially, whether religiosity correlates with belief in misinformation. Next, we present the main effect of our experimental treatments on vulnerability to misinformation. Finally, additional tests compare the relative effectiveness of different treatment conditions, including robustness checks.
Descriptive Findings
Figure 2 shows the 12 stories that comprise our misinformation outcome measure, plotting the percentage of respondents who incorrectly assessed each headline, indicating their vulnerability to misinformation. For false stories, this represents the percentage of respondents who believed the headline was true; for true stories, it shows the percentage who thought the headline was false. Two key observations stand out. First, respondents endorse misinformation at high rates, with over 50 per cent of respondents supporting each false headline, and some stories seeing even higher endorsement rates. For instance, more than three-quarters of the sample believed the claim that Covid is a Chinese biowarfare weapon, and about 65 per cent agreed that homeopathy – an alternative medicine system with roots in traditional Hindu culture – can cure Covid. These high levels of endorsement align with previous research on misinformation in India (Guess et al. Reference Guess, Lerner, Lyons, Montgomery, Nyhan, Reifler and Sircar2020). Second, respondents were more likely to misclassify false stories than true ones, with fewer wrongly identifying true headlines as false. On average, respondents correctly classified 6.02 out of 12 stories, underscoring widespread vulnerability to misinformation.
Next, we sought to determine to what extent vulnerability to misinformation is correlated with respondents’ religiosity. To measure vulnerability to misinformation, we count the number of headlines that respondents correctly classified as true or false. To measure religiosity, we create a continuous scale using the battery of eight items described in Appendix K. We score each of the items such that higher values indicate that someone is more religious; we then add the eight scores and standardize the measure such that we have a scale of religiosity with mean 0 and standard deviation 1.
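As an illustration, the scale construction described above can be sketched as follows (a minimal sketch using hypothetical item scores; variable names are ours and not drawn from the replication materials):

```python
import numpy as np

def religiosity_scale(item_scores):
    """Additive religiosity index: sum eight item scores (higher = more
    religious), then standardize to mean 0 and standard deviation 1."""
    items = np.asarray(item_scores, dtype=float)  # shape (n_respondents, 8)
    raw = items.sum(axis=1)                       # additive index per respondent
    return (raw - raw.mean()) / raw.std()         # z-score across the sample

# Hypothetical scores for three respondents on eight items (0-3 each)
scores = religiosity_scale([
    [3, 3, 2, 3, 3, 2, 3, 3],   # highly religious
    [1, 0, 1, 0, 2, 1, 0, 1],   # less religious
    [2, 2, 2, 1, 2, 2, 1, 2],   # in between
])
```

By construction, the standardized index has mean 0 and standard deviation 1 across the sample, matching the scale plotted in Figure 3.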
In Figure 3, we graph the predicted number of stories accurately classified as a function of religiosity and demonstrate that those who score low on the religiosity scale are significantly better at discerning true from false information relative to those who score high on the religiosity scale. In fact, respondents with the lowest levels of religiosity are able to correctly classify almost double the number of headlines (about 9 headlines) relative to respondents with the highest levels of religiosity (about 4.5 headlines). In line with Haidt’s (Reference Haidt2012) argument, this finding highlights that religious respondents tend to be more gullible regarding information in general, and falsehoods in particular.Footnote 11

Figure 3. Belief in Misinformation By Religiosity.
We thus find strong support for our hypothesis (H1) that religiosity is descriptively associated with endorsement of misinformation. The most religious respondents in our sample are nearly twice as vulnerable to misinformation as the least religious. We also find that the relationship between religiosity and belief in misinformation holds, controlling for several other covariates, most crucially party identity (see Appendix H), which suggests that religiosity does not merely proxy for support for the ruling religious party. Further, since we posit that religion is about social identity as well as about morality or beliefs, we examine whether religious affective polarization is linked to endorsement of misinformation. We measure religious polarization by asking respondents whether they would be upset if a friend married someone who was a Muslim. We find that respondents who are less upset (that is, less affectively polarized) on this measure are significantly more likely to correctly identify misinformation. That is, those who are less religiously polarized are also less vulnerable to misinformation (see Appendix J). These descriptive findings underscore that religious practice is linked with misinformation endorsement and that antipathy towards religious outgroups is also associated with the endorsement of misinformation.
In sum, these analyses give weight to the argument that vulnerability to misinformation has religious roots. Endorsing misinformation is a function not just of individuals’ religious beliefs, but also of their affect towards religious outgroups.
Experimental Findings
Since religiosity strongly correlates with the endorsement of misinformation, can religious beliefs and identities be leveraged for good? We now move to discussing experimental results. All estimates are based on ordinary least squares (OLS) regressions.
To test H2a and H2b, we evaluate the effect that the different treatments have on respondents’ endorsement of misinformation relative to the placebo control condition. Results are presented in Table 1. Our main outcome of interest is a count of respondents’ ability to classify true and false stories in a set of six stories. Per our pre-registration, we estimate the effect of each treatment separately for conspiracy theory misinformation (Column 1) and medical misinformation (Column 2).Footnote 12
Table 1. Main Effect of Treatments (Count DV)

Note: *p < 0.05; **p < 0.01; ***p < 0.001.
Table 2. Main Effect of Treatments (Discernment DV)

Note: *p < 0.05; **p < 0.01; ***p < 0.001.
Results in Table 1 demonstrate that when it comes to conspiracy theories, all treatments significantly decrease endorsement of misinformation. In addition, these effects are substantively large, with those in the Religious Message treatment group demonstrating about a 16 per cent decrease in vulnerability to misinformation relative to control. Although smaller in magnitude, we also see a significant effect of receiving the Standard Correction, demonstrating that even minimal corrections may be able to improve information processing, mirroring existing findings from this context (Badrinathan and Chauchard Reference Badrinathan and Chauchard2023). These results also show interesting variation based on whether the headline itself is about the Muslim minority (see Appendix N).Footnote 13
However, for medical misinformation, we find that while respondents in the Message + Religious Group and Message + Partisan Group treatments are significantly better than placebo group respondents at identifying misinformation, this effect does not obtain for the Religious Message treatment. While this treatment produced the largest positive effect for conspiracy theories, its impact is negligible in the case of medical misinformation: the average treatment effect is indistinguishable from zero. It is important to note that these are additive treatments; hence, the religious and partisan group treatments add an additional layer to the information being presented in the Religious Message treatment, by revealing group norms and the group name. Additionally, we note that the standard correction remains insignificant.Footnote 14
These findings suggest that the effectiveness of correction strategies depends on the type of misinformation (for example, conspiracies v. medical falsehoods). The mechanisms underlying endorsement of conspiracy theories and medical misinformation appear distinct, necessitating tailored approaches for correction. COVID-19 conspiracy theories, such as claims about biowarfare or deliberate virus spread by minority groups, are novel narratives specific to the pandemic. By contrast, medical misinformation in India often involves miracle cures or home remedies linked to entrenched beliefs in alternative medicine systems like homeopathy. These longstanding belief systems may make medical misinformation more resistant to change.
Our findings demonstrate that even standard corrections work to reduce the expression of conspiracy theory beliefs in India, though corrections that draw on religious sources are able to achieve effects of greater magnitude. But for misinformation relying on longstanding belief systems, in addition to religious messaging, tapping into group identity appears crucial, reinforcing the idea that information processing can be affected by elites in networks, or when group norms are fostered with a focus on veracity. These findings also confirm our own qualitative evidence that users in homophilic groups might be pressured into saying they believe certain types of information, whether or not they actually do so. For such deep-rooted misinformation, shifting the norms of information sharing in such contexts appears crucial.
Importantly, we also find that some treatments work beyond the specific story corrected. That is, on receiving a correction for one story, we find a spillover effect that carries forward to other stories. To analyze this, we recalculate our count outcome measure, omitting the specific story that was corrected in the treatment (see Appendix I). This analysis demonstrates that for conspiracy theories, every treatment except the standard correction achieves a significant effect. While the standard correction worked on the specific story that was corrected, spillover effects for non-corrected stories are only seen with the religious message treatments. Crucially, these results suggest that the religious treatments have a comparatively stronger effect overall than the standard correction and that they can have spillover effects on stories that are not directly corrected.
We confirm the robustness of the results in Table 1 by controlling for key demographic and pre-treatment covariates (Appendix E); the main results remain unchanged. We also replicate these findings, controlling for respondent attention during the survey (Appendix F). Finally, we re-run our analyses with a discernment outcome, which calculates the difference between the average accuracy rating for true and false stories.Footnote 15 In Table 2, we find that the main results hold: religious treatments improve respondents’ ability to distinguish true from false information. However, while the results point in the same direction, significance levels are slightly reduced, rendering some effects from Table 1 insignificant. For example, the Message + Religious Group treatment’s effect on belief in conspiracies loses significance. Notably, this is also true for estimates related to standard corrections, suggesting that only religiously framed messages consistently influenced belief discernment, highlighting the unique impact of religious frames in this context.
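To make the two outcome constructions concrete, a sketch of how each can be computed (a hypothetical respondent on illustrative data; this is our illustration, not the authors’ replication code):

```python
def count_outcome(classifications, truth):
    """Count of stories classified correctly: the respondent's true/false
    call matches the ground-truth label."""
    return sum(c == t for c, t in zip(classifications, truth))

def discernment_outcome(accuracy_ratings, truth):
    """Mean perceived accuracy of true stories minus mean perceived
    accuracy of false stories; higher values indicate better discernment."""
    true_r = [a for a, t in zip(accuracy_ratings, truth) if t]
    false_r = [a for a, t in zip(accuracy_ratings, truth) if not t]
    return sum(true_r) / len(true_r) - sum(false_r) / len(false_r)

# Hypothetical respondent: four stories, the first two actually true
truth = [True, True, False, False]
n_correct = count_outcome([True, True, True, False], truth)  # 3 of 4 correct
disc = discernment_outcome([4, 3, 2, 1], truth)              # 3.5 - 1.5 = 2.0
```

The count outcome rewards any correct classification equally, while the discernment outcome nets out a respondent’s general tendency to rate all stories as accurate, which is one reason significance levels can differ across the two measures.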
Next, we test the hypothesis that religious frames are particularly effective for highly religious respondents (H3) by interacting our continuous religiosity measure with a treatment assignment indicator. We find that treatment effects did not vary by religiosity: respondents updated their beliefs regardless of religiosity level (Appendix G). This suggests that the moral weight of religious imperatives resonates broadly with respondents, irrespective of individual religiosity. We also hypothesized that stronger religious or partisan group identities would enhance receptiveness to messaging invoking group norms. However, these effects likewise did not vary with religiosity. These findings imply that the treatments’ impact extends across the sample, making them more broadly effective than anticipated and not limited to specific subgroups.Footnote 16
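The heterogeneity test described above corresponds to a standard OLS interaction specification, which can be sketched as follows (simulated data with a uniform treatment effect and a zero interaction, echoing the null in Appendix G; our illustration, not the study’s estimation code):

```python
import numpy as np

def ols_interaction(y, treat, religiosity):
    """Regress y on treatment, religiosity, and their interaction.
    Returns [intercept, b_treat, b_religiosity, b_interaction]."""
    X = np.column_stack([
        np.ones(len(y)),      # intercept
        treat,                # treatment indicator (0/1)
        religiosity,          # standardized religiosity
        treat * religiosity,  # interaction: does the effect vary by religiosity?
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated data: treatment effect of 2 for everyone, no interaction
religiosity = np.array([-1.0, 0.0, 1.0, -1.0, 0.0, 1.0])
treat = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
y = 1.0 + 2.0 * treat + 0.5 * religiosity
beta = ols_interaction(y, treat, religiosity)  # interaction coefficient is 0
```

H3 would predict a nonzero interaction coefficient; a coefficient statistically indistinguishable from zero, as reported in Appendix G, indicates that treatment effects do not vary with religiosity.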
Finally, to benchmark main effects, we ascertain whether religious messaging and group identity treatments performed better than a standard correction. This allows us to evaluate whether the corrective effects we observe are due to the religious elements of the treatments or simply to exposure to any corrective information. Table 3 presents results where we switch the omitted category in the specification to the Standard Correction treatment.
Table 3. Main Effects Relative to the Standard Correction

Note: *p < 0.05; **p < 0.01; ***p < 0.001.
Looking at conspiracy theories (Column 1), we find that the Religious Message treatment is the only one able to improve upon the standard correction. This is a crucial finding: while all of our experimental treatments performed better than the placebo control, when compared to a standard correction, only the Religious Message treatment achieved a statistically significant effect. Interestingly, we show that the additive treatments invoking group norms are statistically indistinguishable from the standard correction, though the Message + Partisan Group treatment comes very close to conventional significance levels.Footnote 17 Moreover, as shown in Appendix I, both the Religious Message treatment and the Message + Partisan Group treatment significantly improve on the standard correction when it comes to spillover effects (endorsement of misinformation other than the claim corrected in the treatment). This finding underscores that religious corrections reduced endorsement of conspiracies at greater rates than standard corrections.
However, looking at medical misinformation (Column 2), we find that the three treatment groups remain statistically indistinguishable from the standard correction, similar to Table 1. Even though effects remain insignificant, the sign on the two Message + Group treatments suggests that the mechanism of shifting group norms may be effective at dispelling misinformation that is more salient or has circulated in public discourse for longer. While our relatively small N may constrain our ability to identify such differences between corrective treatments, these findings suggest that relying on religious frames alone may not strongly improve on standard corrections for this type of deep-rooted and more salient information.
Consequently, we may take these findings to mean that the mechanisms through which religion operates are different depending on the type of misinformation at hand. We posit that beliefs in conspiracy theories can be altered via religious frames, which include a moral message. Our Religious Message treatment is centred around a message with a moral imperative: believe the truth and do not slander others. This may suggest that simple, moral messaging is most effective at reducing the endorsement of recent and topical misinformation. Similar to research showing that heightening a sense of civic duty (that is, citizens have an obligation to get the facts right) can reduce partisan motivated reasoning (Mullinix Reference Mullinix2018), we demonstrate that moral imperatives about other groups in society are effective in combating conspiracy theory misinformation.
Results for medical misinformation suggest a different conclusion, namely, that moral messaging may be insufficient. Miracle cures are tied to social norms in the Indian context: the idea that home remedies and alternative medicinal systems can cure diseases is passed down the generations in Indian society (Malhotra Reference Malhotra2023). These ideas are so firmly entrenched that disbelief in them may come with social stigma or fear of alienation. Further, because these are longstanding beliefs not specific to the COVID-19 crisis, they may also be generally more salient. For such deep-rooted beliefs, simple moral messaging (‘believe only the truth’) may be ineffective, as evidenced by the precise null result on that coefficient.
Discussion and Conclusion
In this paper, we present new evidence on the religious roots of misinformation in India as well as ways to mobilize religious identity for social good. We first find a strong connection between religiosity and belief in COVID-19 misinformation. Those who score high on a religiosity scale and display religious affective polarization are significantly more likely to endorse misinformation. Second, in the context of an experiment, we show that corrective treatments, including religious frames, are effective at reducing the endorsement of misinformation, sometimes more effective than standard corrections, and work beyond the specific story corrected. This suggests that religion and endorsement of misinformation are causally related, and more importantly, that religious beliefs and identities may provide a promising basis on which to build more effective corrections.
These findings suggest that many Indians, particularly Hindus (who constitute over 80 per cent of the population), are open to interpreting health crises through a religious lens. The effectiveness of religious messages in framing misinformation as problematic, even among highly religious individuals, is both novel and significant. This highlights the malleability of misinformation susceptibility to religiously framed interventions, diverging from prior research emphasizing the constraints of motivated reasoning (Flynn, Nyhan and Reifler Reference Flynn, Nyhan and Reifler2017) while aligning with studies indicating belief updating is unaffected by such biases (Coppock Reference Coppock2023). These findings underscore the broader advantages of issue framing and its potential to shape downstream public opinion (Druckman and Nelson Reference Druckman and Nelson2003; Jerit Reference Jerit2008). They also highlight the effectiveness of shifting group norms within polarized and homophilic groups, suggesting the potential for such strategies to influence future political behaviour (Dinas, Martinez and Valentim Reference Dinas, Martinez and Valentim2023).
That respondents can use cues from the treatment to identify additional falsehoods is significant. While Kahneman and Tversky (Reference Kahneman and Tversky1984) argue that individuals readily engage in discriminatory discourse when given the opportunity, our treatments provide a framework that encourages respondents to pause and reflect before expressing beliefs in group settings. We do not equip individuals with tools to enhance scientific aptitude: our treatments do not teach critical thinking skills or techniques to spot misinformation. Rather, we underscore that our treatments likely alter social norms and leverage respondents’ moral and religious sensibilities. Since our goal is to shift belief expression rather than beliefs themselves, we are less concerned about social desirability bias here. If respondents do indeed adjust their responses to appear more socially desirable, this is still a valuable outcome: shifting what citizens think is acceptable to state publicly in a group setting is consequential, especially in polarized societies.
Despite these positive findings, we consider some limitations of the study and avenues for future research. First, we note that while we focus on religion in this paper, we cannot truly disentangle the causal effects of religious and partisan identity. In the Indian context, while religion itself has been a long-standing social cleavage, parties tap into religious beliefs in order to further their own causes (Wilkinson Reference Wilkinson2006). In our data, too, religiosity is correlated with increasing support for the BJP. Thus, our data do not allow us to disentangle the relative influence of religion and partisanship, and we remain agnostic about their relative weight as drivers. While it is theoretically likely that religion drives beliefs in misinformation, we cannot empirically determine with our design whether this relationship is orthogonal to party identity.
Next, we underscore that a core element of our treatment – verses from Hindu religious texts – is necessarily context-specific. However, we believe the premise of our study, the idea that treatments should target mechanisms and identities that drive belief in falsehoods in the first place, is applicable to several other contexts. Other developing countries, such as Afghanistan, Madagascar, Mali, Mexico, and Brazil, not only share commonalities in the type of misinformation but also have social media environments that rely heavily on encrypted platforms such as WhatsApp. Further, as Nyhan (Reference Nyhan2021) notes, such an approach could also help reduce the uptake of misinformation in the Western world. Indeed, recent data demonstrate that evangelical Christians in the USA are not only more likely to believe in QAnon narratives but also in conspiracies about the 2020 election, vaccines, or the moon landing (O’Donnell Reference O’Donnell2021). Highly religious individuals are also found to perceive more social threat from scientists (Chinn et al. Reference Chinn, Hasell, Roden and Zichettella2023). Across contexts, the least religious appear to be the least credulous. As polarization intensifies around the world, there are lessons to be drawn from these data for developing countries and Western contexts alike.
Additionally, several of our treatments are intentionally bundled. To maximize treatment effectiveness, we combined the group norms treatment with the religious message treatment. As a result, we cannot isolate the independent effect of changing group norms alone. We also cannot isolate the religious and partisan elements of the study: all main treatments (except the standard correction) included a religious message, with one treatment additionally including a partisan component. Future research should employ fully factorial designs to disentangle the separate effects of norms and messaging, as well as the separate effects of religious versus partisan messaging.
Finally, we acknowledge that our design involved respondents witnessing corrections rather than being directly corrected. The encrypted nature of WhatsApp poses logistical and ethical challenges for conducting studies within actual WhatsApp groups. To maximize external validity, we used treatments simulating a WhatsApp conversation to approximate a group chat environment, rather than presenting corrections in isolation. While this approach cannot fully replicate a WhatsApp group chat, it offers insights more relevant to platforms like WhatsApp, which are more widely used across much of the world than Facebook or Twitter. We encourage future research to enhance the external validity of studying encrypted platforms, a critical need for understanding misinformation in the developing world.
Despite these limitations, we believe our results have important implications. Of practical and policy importance, these findings suggest that public health campaigns that use social identity-based frames and messaging to counter misinformation or increase the uptake of health measures may be particularly effective because they resonate with existing values that citizens may have. Contentious issues surrounding crises like the COVID-19 pandemic, such as vaccine uptake and reliance on scientific information, require the long-term and large-scale engagement of citizens. Messages designed to resonate with social and religious identities hold promise as a means to build belief in accurate news over misinformation.
From the standpoint of understanding behaviour in polarized societies, our results have implications for the formation of and adherence to group norms. We demonstrate that even the most religious respondents are willing to abandon some priors (here, conspiracy theories) when prompted to do so. Such changes do not constitute a fundamental transformation of political or social culture, but they do show that modest interventions, at least in the short term, can have significant effects in changing the public expression of beliefs. At scale, this may decrease the amount and prevalence of misinformation in an informational ecosystem, thereby providing a greater frequency of trustworthy sources accessible to individuals (Allen et al. Reference Allen, Howland, Mobius, Rothschild and Watts2020). Increasing the quality of one’s news diet may then, in turn, have downstream consequences on attitudes and behaviours.
Ultimately, we hope this work can contribute to scholarship on the malleability of political norms (Paluck and Green Reference Paluck and Green2009; Green et al. Reference Green, Groves, Manda, Montano and Rahmani2023) as well as to literature on how trusted elites can shift perceptions of norms, eventually paving the way for behavioural change (Boyer et al. Reference Boyer, Paluck, Annan, Nevatia, Cooper, Namubiru, Heise and Lehrer2022). Norm perception is often shifted by signals from influential community members, especially crucial in our context, where WhatsApp groups are curated by local political elites who gain power within communities (Chadwick, Hall and Vaccari Reference Chadwick, Hall and Vaccari2023). In contexts where the roots of belief formation and expression are tied to religion, these findings provide hope that social identities can be marshalled to counter misinformation and to improve other democratic outcomes more broadly.
Supplementary material
Supplementary material for this article can be found at https://doi.org/10.1017/S0007123425100616.
Data availability statement
Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/GSVYA0.
Acknowledgements
For comments and feedback, we are grateful to Adam Auerbach, Adam Berinsky, Emmerich Davies, Richard Fletcher, Gareth Nellis, Alex Scacco, Milan Vaishnav, Ajay Verghese, and Ashutosh Varshney, and seminar participants at NYU, Brown-Harvard-MIT joint seminar, University of Zurich, Aarhus University and APSA and ICA conferences.
Financial support
Funding for this research comes from Leiden University, the Wharton Risk Center’s Russell Ackoff Fellowship and the UPenn Center for the Advanced Study of India’s Sobti Family Fellowship.
Competing interests
The authors declare no competing interests.
Pre-registration and IRB
This study was pre-registered with OSF, and the analysis plan is available at https://osf.io/kt3vc. We received IRB approval from the University of Pennsylvania.