
Politicians’ misinformation, its correction, and partisanship in Italy

Published online by Cambridge University Press:  16 December 2025

Mario Quaranta*
Affiliation:
Department of Sociology and Social Research, Università di Trento, Trento, Italy
Luca Maria Arrigoni
Affiliation:
Department of Sociology and Social Research, Università di Trento, Trento, Italy
Corresponding author: Mario Quaranta; Email: mario.quaranta@unitn.it

Abstract

Political misinformation represents a challenge to contemporary democracies. It is widely acknowledged that misinformation is not only spread by individual users on social media, but also by politicians employing both digital and legacy media to disseminate biased or misleading content to advance their political agendas. This study explores the mechanisms through which misinformed statements made by politicians influence public opinion and examines the effectiveness of corrections from academic/official sources or fact-checking websites, also focusing on the role of partisanship. We investigate whether agreement with a misinformed statement on key policy issues – minimum wage, COVID-19 vaccination, and working hours – increases when it is attributed to a politician, and whether corrections by academic/official sources or fact-checkers reduce agreement. Through survey experiments conducted in Italy, we find that while misinformation from politicians does not always affect agreement with false statements, corrections generally decrease agreement. However, partisanship plays a crucial role, with individuals more likely to resist correction when misinformation comes from politicians they have more positive feelings toward. These findings shed light on the complex relationship between misinformation, the effectiveness of corrective messages, and political identity in shaping public opinion.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Società Italiana di Scienza Politica.

Introduction

Political misinformation – defined as information considered incorrect based on the best available evidence and expert knowledge (Nyhan and Reifler, 2010; Ecker et al., 2022) – poses a critical challenge to contemporary democracies, polarizing voters, distorting political preferences and opinions, and influencing collective decision-making processes (Lewandowsky et al., 2012). Underlying this phenomenon is the reality that political communication is not always accurate, whether due to honest mistakes, misunderstandings, or deliberate attempts to deceive. The widespread presence of misinformation in today’s media environment has attracted considerable academic interest (Lewandowsky et al., 2017), particularly focusing on the processes and the effectiveness of correcting misperceptions arising from misinformation (Flynn et al., 2017).

The rapid diffusion of false news on social media makes it difficult to distinguish between reliable and unreliable information, especially when erroneous information comes from sources perceived as authoritative. Additionally, misinformation comes in many forms: some errors are simple oversights or misunderstood statistics; others are rhetorical distortions, such as selective omissions or exaggerations; still others are disinformation campaigns designed to manipulate or discredit specific targets. Here, we use the term political misinformation to denote any statement that contradicts the best available evidence and the prevailing expert consensus (Nyhan and Reifler, 2010; Ecker et al., 2022).

Belief in inaccurate news is a particular case of political misinformation. Numerous studies have shown that citizens hold false beliefs on various issues, including the economy, climate change, public health, vaccines, and welfare systems (Kuklinski et al., 2000; Nyhan et al., 2014; Berinsky, 2017; Schmid and Betsch, 2019). The rejection of facts supported by scientific/official evidence cannot be attributed solely to a lack of knowledge; rather, it is fueled by a combination of factors such as fear, selective interpretation of information, and the expression of political identities (Ecker et al., 2022).

It is widely acknowledged that misinformation is not solely propagated by individual users on social media, but also by political elites employing both digital platforms and legacy media to disseminate biased or misleading content to advance their political agendas (Swire-Thompson et al., 2017; Jerit and Zhao, 2020). Indeed, when exposed to complex information, individuals can use source cues to evaluate it (Gilens and Murakawa, 2002). An extensive literature shows that individuals can use political elites as such cues to form their opinions on political issues (Matsubayashi, 2013).

Misinformation, however, can also be corrected. Fact-checking platforms have been developed to verify the accuracy of information, evaluating the overall truthfulness of speeches, claims, and news articles (Boukes and Hameleers, 2022). While considerable efforts have been made to debunk misinformation and conduct fact-checking, the evidence regarding their effectiveness remains uncertain (Ecker et al., 2017). The characteristics of corrective messages, in particular their source, are decisive in defining the conditions for their effectiveness (Jerit and Zhao, 2020). Thus, do individuals respond to misinformation when it is corrected by scientific or official sources and fact-checkers?

This article also takes into account the role of partisanship, which influences public opinion by acting as a perceptual filter that shapes how individuals process information and revise their opinions. Indeed, individuals are motivated to endorse and share (mis)information that aligns with their partisan identity (Pretus et al., 2023). Individuals with partisan preferences favor information that aligns with their existing beliefs while dismissing contradictory evidence. Partisans may more readily accept misinformation from political authorities close to them and reject fact-checking corrections (Thorson, 2015; Wood and Porter, 2019). So, do individuals respond to corrections by scientific or official sources and fact-checkers even when they are partisans?

Compared with the well-charted U.S. landscape, research on misinformation in Italy remains underexplored, particularly regarding political elites and corrective measures (Kozyreva et al., 2024), even though the country produces a substantial amount of misinformation (Broda and Strömbäck, 2024). Indeed, Italian politicians from across the political spectrum spread misinformation through various media (Canepa and Zagni, 2023). Nevertheless, scholars have rarely examined whether fact-checking affects individuals’ opinions when politicians spread misinformation. This evidence gap is especially striking given Italy’s relatively high level of political polarization (Vegetti and Mancosu, 2022) and its low public trust in news (Reuters, 2025). Taken together with the fragmented scholarship on the topic, these contextual factors make Italy a compelling laboratory for studying the role of political elites in the propagation of misinformation and how partisanship influences the effectiveness of corrections by Italian fact-checkers.Footnote 1

This study examines three issues: the minimum wage, the effectiveness of the COVID-19 vaccination campaign, and weekly working hours in Italy. Each of these topics is linked to a statement voiced by a politician from one of the country’s three largest parties: Elly Schlein of the Partito Democratico (24.11% of the votes in the 2024 European elections), Marcello Gemmato of Fratelli d’Italia (28.75%), and Giuseppe Conte of the Movimento 5 Stelle (9.98%).

To address the research questions and gaps, we use survey experiments in which respondents are randomly exposed to misinformed statements publicly made by politicians or to the same statements corrected by a scientific/official source or a fact-checking website. Respondents are then asked to indicate their agreement with statements related to the same issues as those addressed by the politicians. We find that the politicians’ misinformed statements do not always affect agreement. Yet, when presented with a correction, respondents tend to change their opinions. We also find that partisan respondents, i.e. those who have more positive feelings toward the party of the politician involved, are more likely to be affected by the politician’s misinformed statement and more likely to resist correction.

This study aims to contribute to the growing literature on political misinformation in Italy, the amplifying role of elites, the effectiveness of corrections, and the influence of partisanship on their acceptance.

Why do we fall for misinformation?

The processes by which people form opinions and beliefs are of evident public importance, especially when widely held convictions contradict established facts. Political communication exerts a crucial influence on public opinion. When politicians consistently spread inaccurate information, such misinformation can underpin political and social decisions that may conflict with the public interest. Politicians, intentionally or not, often serve as potent sources of misinformation (Lewandowsky et al., 2012).

Once exposed to misinformation, individuals may integrate it into their cognitive framework, particularly when it aligns with existing beliefs or when social dynamics reinforce its credibility (Zhou and Shen, 2024). Such integration can have enduring effects, as misinformation often intertwines with values and identities. Initial information, despite being debunked, remains embedded in memory, continuing to influence reasoning and future judgments (Lewandowsky et al., 2012). Several studies have demonstrated that people continue to refer to debunked information even after learning of its inaccuracy (Ecker and Ang, 2019). These biases occur when individuals draw inferences or adopt beliefs that are not adequately supported by logical reasoning or sufficient evidence. People assess the truthfulness of information by relying on peripheral cues, including the familiarity of the message, the ease with which the information can be processed, and its internal coherence (Begg et al., 1992).

The “confirmation bias” plays a pivotal role in misinformation, prompting individuals to seek, interpret, and retain information aligning with their preexisting beliefs while disregarding contradictory evidence (Lewandowsky et al., 2012; Flynn et al., 2017). This phenomenon is linked to motivated reasoning, where information processing is guided by personal desires and interests (Taber and Lodge, 2006). These motivations shape how people interpret, process, and react to information that challenges their worldview. Although people may aspire to accuracy, they often fail to overcome their preconceptions, even when encouraged to maintain an objective stance (Taber and Lodge, 2006).

In addition to the psychological and cognitive factors that lead individuals to believe in misinformation, the surrounding digital and social environments must also be considered. The network effects of social media accelerate the spread of both accurate and false information compared to traditional media. The surge in online content consumption has led businesses and political parties to share ambiguous or false content to influence decisions for financial or political gain. Additionally, repeated exposure to belief-aligned content, driven by confirmation bias, fosters the echo-chamber effect, increasing credibility and shareability (Muhammed and Mathew, 2022). This effect is further amplified by digital platforms’ algorithms, which personalize content delivery and limit the diversity of accessible information. These structural features of the media ecosystem intertwine with cognitive, social, and emotional factors, playing a crucial role in shaping how misinformation is shared and accepted (Ecker et al., 2022). The alignment between false narratives and partisan identities poses significant challenges, especially in highly polarized societies where misinformation exploits existing divisions to gain credibility and persist (Flynn et al., 2017).

Assessing the role of politicians and correction in political misinformation

A significant body of literature focuses on the circulation of misinformation online, including concerns about the roles of foreign actors, conspiracy theorists, or extremists, and how to counter its spread, influence, and societal impact (Pennycook et al., 2021). However, there has been less focus on the extent to which political elites contribute to misinformation. In the “post-truth era,” political leaders often manipulate data through alternative interpretations, make bold claims, and at times resort to deliberate deception to gain partisan advantage (Lilleker and Pérez-Escolar, 2023).

Research on political misinformation has largely focused on partisan distortions and confirmation biases (Cohen, 2003; Flynn et al., 2017; Jerit and Zhao, 2020), often overlooking the crucial role of political leaders in disseminating misleading content (Bullock, 2011). Empirical studies shed light on why statements by political leaders resonate with the public. Research confirms that citizens are more likely to rely on source cues when evaluating politically complex issues (Gilens and Murakawa, 2002). Ratneshwar and Chaiken (1991) showed that, when faced with messages that are difficult to understand, individuals predominantly rely on the perceived credibility of the source to determine their level of agreement with the content. Conversely, greater message comprehensibility significantly reduces the importance attributed to source credibility. In this context, as a resource-saving strategy, individuals tend to rely on trusted experts and political elites to form opinions on political issues, thereby avoiding the need to analyze the details independently (Matsubayashi, 2013).

Indeed, politicians employ strategic signaling and media campaigns to legitimize false or misleading narratives, thereby reinforcing partisan identities (Jerit and Zhao, 2020). While individuals may generally distrust what they see on social media, they are more likely to believe political leaders, even when these leaders convey messages that contradict verified information (Bisbee and Lee, 2022). Politicians exploit these tendencies by discrediting opposition leaders, media, or fact-checkers, firmly depicting them as untrustworthy and shielding supporters from corrective information (Garrett and Poulsen, 2019). This approach perpetuates falsehoods and strengthens partisan identity, undercutting neutral or external corrections (Vraga and Bode, 2017). This dynamic is rooted in the influence politicians exert by virtue of their prestige and authority (Cohen, 2003). Various studies in political psychology and communication demonstrate that attributing a statement to a specific politician significantly impacts the level of agreement or disagreement expressed by recipients, regardless of the actual content of the statement itself (Taber and Lodge, 2006; Bullock, 2011). People tend to use the origin of the statement as a cognitive shortcut to assess its credibility and desirability, often more than they rely on a thorough analysis of the facts (Zaller, 1992).

Therefore, individuals may use a cue – the fact that something is said by a political authority – to assess the credibility of a message, and this might affect their preferences on a given issue. If this mechanism holds, there could be an effect due to the political authority, which we hypothesize as follows:

H1: Individuals will be more likely to agree with a misinformed statement when it is made by a politician than when it is not.

Fact-checking represents one of the most widespread innovations to combat the spread of misinformation. It is a practice that involves systematically verifying the accuracy of statements made by public figures and institutions, with the explicit goal of determining their veracity (Walter et al., 2019). As “alternative facts” permeate modern democratic politics, fact-checking has become essential for verifying and challenging politicians’ claims (Barrera et al., 2020). The credibility and neutrality of fact-checkers should make their assessments more influential and effective, especially in situations with conflicting information that creates uncertainty about whom to trust (Walter et al., 2019). Although fact-checking is a valuable tool against misinformation, a recent study revealed significant limitations in both its reach and coverage (Wack et al., 2024). Users engaged with false content have only a slim chance of later encountering fact-checks. Even when misinformation is corrected, the audiences that would benefit most are often the least likely to see the corrections. Moreover, because false claims vary widely in form and salience, not every claim is verified.

Despite the expansion of fact-checkers and their increasing integration into traditional journalism, which suggests a reduction in the impact of misinformation, the available empirical evidence is mixed. Some research shows that fact-checking can effectively limit the spread of erroneous information (Fridkin et al., 2015), while other studies do not find significant effects (Garrett and Weeks, 2013). The literature has extensively demonstrated that the presentation of corrective information can reduce agreement with statements initially influenced by misinformation (Nyhan and Reifler, 2010; Lewandowsky et al., 2017; Vraga and Bode, 2017). Lewandowsky et al. (2012) show that fact-based corrections can substantially limit the impact of misinformation on people’s beliefs. Porter et al. (2023) find that factual corrections regarding COVID-19 issued by fact-checkers belonging to the International Fact-Checking Network reduced misinformation, highlighting the enduring effectiveness of corrective information.

The perceived credibility of the source plays a crucial role in determining the effectiveness of fact-checking. Although the impartial nature and authority of fact-checkers should make their judgments more influential, especially when a large amount of conflicting information complicates the distinction between true and false (Walter et al., 2019), distrust in institutions and experts continues to push many people toward alternative, often less reliable sources (Bash et al., 2021). Corrections from authoritative sources such as scientific institutions tend to be perceived as more credible and effective (Vraga and Bode, 2017; Seo et al., 2022). As highlighted by Shin (2024), the treatment of scientific topics by specialized sources significantly increases perceived credibility. Conversely, subsequent exposure to post-truth comments reduces the effectiveness of a correction, even when the correction comes from less authoritative sources such as bloggers (Stedtnitz, 2020). We expect a correction effect to exist, and we expect correction to be more effective when a scientific/official source is presented than when a fact-checking website is used as the source of correction. Therefore, we hypothesize that:

H2: Individuals will be less likely to agree with a misinformed statement made by a politician when it is corrected by a scientific/official source or a fact-checking website than when it is not corrected.

Misinformation and partisanship

Existing research indicates that even individuals who possess substantial political knowledge frequently align their policy views with those advocated by their preferred party (Cohen, 2003). Partisanship functions as a social identity, a subjective perception of belonging to a group, that remains remarkably stable even amidst shifts in political platforms, party leadership, or governance performance (Mason, 2015). Partisan identity encourages individuals to defend the preferred party, even when confronted with negative information, and generates strong emotions driving political actions to preserve and amplify the party’s electoral influence (Taber and Lodge, 2006).

Individuals categorize themselves into distinct social groups, fostering a sense of belonging that significantly shapes their cognitive and behavioral processes. Identification with one’s own group (in-group) often results in a preference for information that reinforces shared identity and values, while information from external groups (out-groups) is more likely to be dismissed or devalued (Tajfel, 1978). This dynamic can amplify confirmation bias and motivated reasoning, as individuals are inclined to accept and propagate information aligning with their social group’s beliefs, thereby perpetuating and solidifying misperceptions. For instance, Swire-Thompson et al. (2017) investigated the influence of source credibility on how individuals assess the veracity of information originating from a political authority. Their results indicate that when information was attributed to Donald Trump, Republican supporters were more likely to regard it as credible compared to unattributed information, whereas Democrats displayed the opposite response. Another study finds that users with the strongest partisan identities are the most prolific spreaders of misinformation (Osmundsen et al., 2021). Indeed, their antipathy toward the out-party outweighs in-group affection, and within this ecosystem political elites can act both as spreaders and as prime targets of false narratives. Identity-driven social pressure turns the sharing of false news into both a badge of loyalty to one’s side and a weapon against the out-group.

Directional motives contribute to misinformation insofar as they bias the ways in which people seek out and evaluate political information (Lodge and Taber, 2013). Additionally, research on motivated reasoning and cognitive biases demonstrates that individuals select messages that reinforce their preexisting beliefs. Within this context, partisanship emerges as a critical driver of political misinformation, as group affiliation shapes not only individuals’ opinions and preferences but also how they interpret and respond to information (Van Bavel and Pereira, 2018). Two theories help explain the adoption of inaccurate political beliefs (Vegetti and Mancosu, 2020). Partisan motivated reasoning (PMR) suggests that individuals tend to believe falsehoods that confirm their existing partisan leanings (Flynn et al., 2017). This leads them to embrace the positions of affiliated political leaders, propelled by motivated reasoning and partisan identification. Dual process reasoning, on the other hand, distinguishes between intuitive, automatic processing (System 1) and more deliberate, reflective thinking (System 2), emphasizing that political judgments often emerge from initial affective responses, later rationalized (Lodge and Taber, 2013). Partisan attachment exacerbates the acceptance of misinformation, as individuals are more likely to dismiss corrections or arguments from opposing groups while embracing narratives that align with their identities (Wagner et al., 2014).

The interplay between misinformation, partisanship, and elite influence underscores that the spread and persistence of false information are not solely due to a lack of accurate information but are connected to individuals’ political identity. Misinformation often aligns with partisan identities, which function as social frameworks that cultivate positive attitudes toward one’s in-group while amplifying negative perceptions of out-groups. Against this background, we hypothesize that:

H3: Individuals who are close to the party of the politician making the misinformed statement will be more likely to agree with it, compared to individuals who are not close to the party of the politician making the misinformed statement.

Correction of misinformation and partisanship

Do corrections work in the context of partisanship? Studies on political misinformation have shown that fact-checking can significantly diminish belief in false claims (Prike et al., 2023). This effect has been observed across the political spectrum, as both supporters and nonsupporters of Trump and Sanders adjusted their beliefs about these politicians’ statements following fact-checks (Swire-Thompson et al., 2020). However, a significant challenge stems from individuals’ political affiliation, which can lead to motivated reasoning, making people resistant to updating their beliefs when misinformation from politicians they support is corrected. When a correction challenges these beliefs, partisans may reject it, perceiving it as a threat to the coherence of their value system (Flynn et al., 2017).

What happens, however, when the correction is not accepted? Another relevant mechanism is the so-called backfire effect, which occurs when a correction not only fails to reduce belief in misinformation but strengthens it (Nyhan and Reifler, 2010). This effect is particularly evident in highly polarized issues, where the corrective message is perceived as a direct attack on one’s worldview. However, studies have downplayed the frequency of this effect, suggesting that resistance to correction may be better explained by the outright rejection of the message (Wood and Porter, 2019).

Partisans are likely to consider not only the content of a message but also its source when interpreting new information. Before processing the message, they assess the credibility and authority of the source (Benegal and Scruggs, 2018). Consequently, misinformation tends to persist more among individuals with strong partisan identities, who, despite attempts to correct or debunk false claims, dismiss corrections that challenge the position of a politically aligned figure (Thorson, 2015; Amazeen et al., 2018). Evidence also indicates that partisan attachments can prompt people to discount fact-checking efforts contradicting statements made by politicians from their own party (Barrera et al., 2020). Benegal and Scruggs (2018) have shown that climate change corrections are more effective for Republicans when coming from Republican-affiliated sources rather than from scientific sources. Similarly, Berinsky (2017) showed that corrections perceived as coming from an individual’s own political affiliation are more effective in rectifying misinformation than corrections from nonpartisan sources (e.g. academic institutions) or sources aligned with the opposing political camp. Another study finds that journalistic fact-checks enhanced the accuracy of respondents’ factual beliefs, including those held by partisan supporters of Trump. Nevertheless, the fact-checks did not affect individuals’ evaluations of the candidate or their voting intentions (Nyhan et al., 2020). These findings suggest that the impact of fact-checking efforts on correcting misinformation varies depending on the source (Liu et al., 2023). Indeed, the intrinsic nature of the source could itself significantly shape whether individuals accept or reject corrective information.

For instance, in the health domain, corrections issued by healthcare institutions or universities have proven more effective than those from user-generated content in diminishing false health beliefs (Walter et al., 2021). Nevertheless, this may not always be the case. While scientists are typically seen as credible and neutral authorities on issues like climate change, capable of persuading individuals through scientific evidence, partisanship increasingly shapes perceptions of their credibility (Motta, 2017). Consequently, partisanship may undermine the effectiveness of corrective messengers. Accordingly, our last hypothesis is:

H4: Although presented with a correction by a scientific/official source or a fact-checking website, individuals who are close to the party of the politician making the misinformed statement will be more likely to agree with it, compared to individuals who are not close to the party of the politician making the misinformed statement.

Research design

Data

This study uses survey data collected between 25 June and 1 July 2024, during the second wave of a three-wave panel survey. A nationally representative quota sample of Italian residents aged 18 to 70 (N = 1984) was recruited through a Computer-Assisted Web Interviewing (CAWI) survey conducted by a polling agency. Respondents were contacted via email and offered monetary incentives for participation. Invitations were sent iteratively until the sample matched the target population's key demographic characteristics: age, gender, region of residence, and education (see footnote 2).
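The iterative quota logic described above can be sketched as follows. This is a minimal illustration, not the polling agency's procedure: the target shares, attribute names, and tolerance are all invented for the example.

```python
# Hedged sketch of iterative quota sampling: invitations continue until the
# sample's marginal shares are close enough to the population targets.
# Targets and attribute names below are illustrative, not the study's actual quotas.
population_targets = {"female": 0.51, "age_18_39": 0.35, "tertiary_edu": 0.20}

def quotas_met(sample, targets, tolerance=0.02):
    """Return True if every sample share is within `tolerance` of its target."""
    n = len(sample)
    for key, target in targets.items():
        share = sum(respondent[key] for respondent in sample) / n
        if abs(share - target) > tolerance:
            return False
    return True

# A tiny two-respondent sample: the gender share matches, but the age share
# (0.5 vs. 0.35) is off target, so more invitations would be sent.
sample = [{"female": 1, "age_18_39": 0, "tertiary_edu": 0},
          {"female": 0, "age_18_39": 1, "tertiary_edu": 1}]
print(quotas_met(sample, population_targets))  # False
```

In practice the agency would re-invite respondents from underrepresented strata until `quotas_met` (or an equivalent per-cell quota check) is satisfied.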

Design of the experiments

The study consists of three independent experiments, each employing a randomized between-subjects design. In each experiment, participants are randomly assigned to one of three experimental conditions (ECs), each consisting of a stimulus text, or to a control group. Similar to Clayton et al. (2020), this study uses treatment conditions to explore the impact of misinformation and its correction on participants' attitudes and beliefs. Participants are presented with a statement about an issue and then asked to indicate their level of agreement with it. Agreement is measured using an 11-point scale ranging from 0 (“completely disagree”) to 10 (“completely agree”). We selected three misinformed statements made by politicians from three major Italian parties – Partito Democratico (Elly Schlein, secretary of the party), Fratelli d'Italia (Marcello Gemmato, undersecretary of the Ministry of Health), and Movimento 5 Stelle (Giuseppe Conte, president of the party) – which were publicly made and then corrected by fact-checking websites adhering to the standards of the International Fact-Checking Network (IFCN).

The control group is asked to express the level of agreement with the following statements: (a) The introduction of the minimum wage increases wages not only for those earning less than the minimum but also for other workers; (b) The vaccination campaign in Italy has reduced mortality caused by COVID-19; (c) In the EU, Italy is the country where people work the most. EC 1 presents the misinformed statement made by the politician before asking respondents about their agreement with the statements above; it should thus capture the effect of political authority on agreement. ECs 2 and 3 have the same structure as EC 1, but add that the politician's statement is actually incorrect according to a scientific/official source (EC 2) or a fact-checking website (EC 3). These two ECs should capture whether there is a “correction effect” on agreement with the politicians' misinformed statements. The scientific/official corrections used in the experiments are the same as those employed by the selected fact-checking organizations to address the misinformation spread by the politicians in question. The question wording of the ECs is reported in the Appendix.
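The between-subjects assignment described above can be sketched as follows. The group labels and sample size are illustrative; the authors' actual randomization code is not published here.

```python
# Minimal sketch of a between-subjects design: each respondent is
# independently randomized to one of the three experimental conditions
# or to the control group. Labels are illustrative.
import random
from collections import Counter

random.seed(1)
CONDITIONS = ["control", "ec1_misinfo",
              "ec2_official_correction", "ec3_factcheck_correction"]

def assign(respondent_ids):
    """Assign each respondent to one condition, independently and at random."""
    return {rid: random.choice(CONDITIONS) for rid in respondent_ids}

groups = assign(range(2000))
print(Counter(groups.values()))  # roughly 500 respondents per group
```

Independent assignment yields approximately, not exactly, equal group sizes; a blocked (shuffled) design would fix the sizes exactly, but simple randomization is sufficient for unbiased between-group comparisons.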

Method

To test Hypotheses 1 and 2, we use linear regression models with the level of agreement with the three statements (see above) as dependent variables. The independent variable of interest is the EC, with the control group as the reference category. We then include a feeling thermometer measuring the feeling toward, respectively, the PD, FDI, and M5S on a scale from 0 (completely negative feeling) to 10 (completely positive feeling) to capture partisanship, that is, how close respondents are to the party of the politician making the statement. Indeed, it has been argued that “feeling thermometers represent a definite improvement over the traditional measure in assessing the attitudinal basis of partisanship” (Greene, 2002: 176). To test Hypotheses 3 and 4, this latter variable is interacted with the experimental condition, allowing us to assess whether the effects of the ECs depend on the feeling toward the party. The models also control for gender, education, and age. The descriptive statistics and estimates of the models are reported in the Appendix.
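The specification just described can be sketched as follows. The data are simulated and all column names (`agreement`, `ec`, `feeling`, and the controls) are illustrative assumptions, not the authors' replication code.

```python
# Sketch of the two model specifications: condition effects (H1/H2) and
# the condition x party-feeling interaction (H3/H4). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "agreement": rng.integers(0, 11, n),   # 0-10 agreement scale
    "ec": pd.Categorical(
        rng.choice(["control", "ec1", "ec2", "ec3"], n),
        categories=["control", "ec1", "ec2", "ec3"]),  # control = reference
    "feeling": rng.integers(0, 11, n),     # 0-10 feeling thermometer
    "female": rng.integers(0, 2, n),
    "education": rng.integers(1, 4, n),    # three education levels
    "age": rng.integers(18, 71, n),
})

# H1/H2: experimental condition effects relative to the control group
m1 = smf.ols("agreement ~ C(ec) + feeling + female + C(education) + age",
             data=df).fit()

# H3/H4: interaction between condition and feeling toward the party
m2 = smf.ols("agreement ~ C(ec) * feeling + female + C(education) + age",
             data=df).fit()
print(m2.params.filter(like="feeling"))
```

The coefficients on `C(ec)[T.ec1]` through `C(ec)[T.ec3]` correspond to the predicted differences from the control group shown in Figure 1, and the `C(ec)[...]:feeling` interaction terms drive the feeling-conditional differences shown in Figure 2.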

Findings

Figure 1 shows the predicted differences in agreement with the statements about the effects of the introduction of the minimum wage, the efficacy of the COVID-19 vaccination campaign, and the number of weekly working hours between the control group (indicated by the dashed line) and the ECs (see footnote 3). The figure indicates that the differences for the minimum wage and the vaccination campaign are not statistically significant. In contrast, an effect can be found for the working hours statement: there is a difference of about 0.90 points, from about 5.19 in the control group to 6.08 in the EC 1 group. Figure 1 also shows the differences between ECs 2 and 3 – those in which the politicians' misinformed statements are corrected by a scientific/official source or a fact-checking website – and the control group. Compared to when the misinformed statement by the politician is simply presented, correction occurs in two cases, namely the minimum wage and the working hours statements. When presented with evidence that the misinformed statements about the effects of the minimum wage or the number of weekly hours Italians work are incorrect, respondents tend to agree less with the claims that the introduction of the minimum wage increases wages not only for those earning less than the minimum but also for other workers, or that, in the EU, Italy is the country where people work the most; this applies to both the correction by a scientific/official source and the correction by a fact-checking website. Those presented with the statement about the minimum wage corrected by a scientific/official source or a fact-checking website have an average level of agreement that is, respectively, about 0.7 and 1.13 points lower than when the misinformed statement by Elly Schlein is not corrected. Similarly, those presented with the corrected statement about working hours have an average level of agreement that is, respectively, about 1.06 and 0.88 points lower than when the misinformed statement by Giuseppe Conte is not corrected.

Figure 1. Predicted differences with 95% confidence intervals between the control group (dashed line) and the experimental conditions (EC) of the agreement with the statements about the introduction of the minimum wage, the efficacy of the COVID-19 vaccination campaign, and the amount of worked weekly hours.

Therefore, agreement with the statements about the minimum wage and the number of working hours decreased in the ECs 2 and 3 groups, that is, when respondents are presented with evidence contrary to what the politicians claimed, showing that respondents react to the correction. Regarding the effectiveness of Italy's vaccination campaign, participants exposed to EC 1 exhibited responses comparable to those of the control group, indicating no significant effect of political authority alone. In contrast, the scientific/official correction (EC 2) yielded a higher level of agreement (about 0.50 points), whereas the fact-checker-based correction (EC 3) did not yield a significantly different level of agreement with respect to the control group. This suggests that the scientific/official correction increased participants' propensity to believe that the vaccination campaign had a significant impact on mitigating COVID-19 mortality.

We now evaluate the hypothesis that respondents who are partisans, that is, who express positive feelings toward the party of the politician making the statements, are more inclined to agree with them. Figure 2 shows the predicted differences in agreement with the statements between the ECs and the control group by levels of feeling toward the party. Expressing a positive feeling toward the party of the politician making the statement significantly increases agreement with the statements about the minimum wage and the amount of working hours in Italy, but not with the one about the vaccination campaign. In EC 1, when the feeling toward the PD is completely negative (at 0), respondents have an agreement score about 0.6 points lower than in the control group. As the feeling improves, for instance at 5 and 10, agreement becomes higher than in the control group, by about 0.4 and 1.45 points, respectively. A similar pattern can be noticed for the statement about the number of working hours. It therefore appears that a more positive feeling toward the party of the politician making the statement renders the statement more credible. This implies that misinformation spreads more among those who feel close to the party of its source, at least for the two issues examined. These results confirm that the spread of misinformation is facilitated by partisan identification: attachment to one's political group leads individuals to accept and rationalize statements that align with their partisan identity, regardless of their veracity (Lodge and Taber, 2013; Flynn et al., 2017). This provides support for H3, that partisan respondents show stronger agreement with the statements.

Figure 2. Predicted differences with 95% confidence intervals between the control group (dashed line) and the experimental conditions (EC) of the agreement with the statements about the introduction of the minimum wage, the efficacy of the COVID-19 vaccination campaign, and the amount of worked weekly hours, conditional on the feeling toward the PD, FDI, and M5S.

Next, are those expressing more positive feelings toward the party of the politician making the statement less receptive to the correction made by a scientific/official source or a fact-checking website? Figure 2 shows that respondents “react” to the correction, although this depends on the source and on the topic/politician. In the minimum wage experiment, the correction by a scientific/official source seems to be effective. Yet agreement with the statement depends on the feeling toward the PD: the lower the feeling, the less individuals agree with the statement. Those who are very close to the PD (with a score of 10) accept the correction, but to a lesser extent than individuals with a completely negative feeling (a score of 0); indeed, the confidence intervals do not overlap. Correction also occurs when it comes from the fact-checking source, with a similar, though less marked, pattern. The second experiment shows that the level of feeling does not affect agreement with the statement when it is corrected by a scientific/official or a fact-checking source, which is no different from the control group. In the last experiment, individuals with completely positive feelings toward the M5S were more inclined to agree with the statement even when presented with a correction by an academic source (EC 2). In contrast, when the feeling is completely negative, the correction occurs, as the level of agreement is lower than in the control group. Finally, a correction seems to occur when individuals are presented with evidence from a fact-checking website, as there are no differences compared to the control group.

All in all, these results seem to contrast with H4, that partisans would “resist” correction. In fact, we observe this pattern only in the third experiment, where partisan individuals agree with the statement even though they are exposed to the evidence of a scientific/official source. In the other cases, positive feelings toward the party of the politician involved do not translate into stronger agreement with the statement to be evaluated. It might be that the effectiveness of corrections often depends on the type of political issue (Jerit and Zhao, 2020). When a topic is closely tied to a person's political or ideological identity, they tend to interpret information in a way that protects those ties. For this reason, it is challenging to correct misinformation on controversial topics or prominent political figures, as such corrections can challenge the worldview of the recipient (Ecker and Ang, 2019).

Conclusion

In the post-truth era, politicians do not simply reinterpret data strategically but often make statements without a solid foundation or evidence to gain an advantage (Lilleker and Pérez-Escolar, 2023). This study examined the effect of politicians' misinformation on public opinion and the possible effectiveness of corrective messages in mitigating it, also considering the role of partisanship. The results showed that when politicians report incorrect information, this affects what individuals think about related issues in only one case, the amount of weekly working hours. Such an effect may emerge because this is a salient issue, which individuals repeatedly encode and reinforce by seeking and favoring identity-congruent information (Ecker and Ang, 2019). Indeed, the issue of labor is highly relevant for supporters of the M5S, as the reduction of weekly working hours had already been included in the party's 2018 platform (Mosca and Tronconi, 2019). In the other two experiments, we did not find that the politicians' misinformed statements affected respondents' agreement. This result may be due to the politicians or the topics involved.

Regarding the effectiveness of corrections, our findings indicate that both scientific/official sources and fact-checkers successfully reduce agreement with the misinformative statements. In particular, the results suggest that corrections from scientific/official sources tend to be more effective in reducing agreement with misinformation than those from fact-checking websites, although the difference between the two sources is modest. This aligns with the existing literature, as explicitly apolitical sources, such as the scientific/official sources used here, can be more effective in diminishing the acceptance of misinformation (Vraga and Bode, 2017). In contrast, fact-checkers may be perceived as biased, potentially reducing their credibility and impact.

The analysis further revealed that political affiliation increases the likelihood of falling for misinformation. This suggests that individuals tend to accept and rationalize information consistent with their political identity, even if it is incorrect. However, our findings provide only partial support for the hypothesis that partisans consistently resist corrections. The effectiveness of corrections is shaped not only by the extent to which a topic triggers belief-driven reasoning but also by the credibility of the source delivering the correction. When the motivation to uphold a false claim is strong, even credible sources, including experts, may be met with skepticism (Jerit and Zhao, 2020). This is evident among M5S supporters, who showed greater resistance to corrections. This result suggests that corrections are more likely to be effective when they do not challenge an individual's core beliefs or ideological framework. When a correction aligns with existing attitudes or does not pose a direct threat to one's worldview, individuals may be more open to reconsidering misinformation and adjusting their stance accordingly (Nyhan and Reifler, 2010). In the case of M5S supporters, the correction may be seen as a challenge to their political identity, leading them to reject it rather than reconsider their stance. This result is a particularly relevant contribution because cases of non-right-wing respondents resisting corrections to misinformation are rare in the literature (Ecker and Ang, 2019).

Our study faces some limitations. First, the research is based on a limited number of topics, which might catch the attention of respondents in different ways. Moreover, because the experiments rely on authentic political statements and real corrections drawn from various outlets, small variations between them are unavoidable, although the experiments remain comparable. In the second experiment, we did not detect clear effects. A likely reason is participants' familiarity with the topic of COVID-19 vaccination, given its extensive coverage in the media.

Therefore, this study leaves at least two pathways open for future research. First, research could further investigate the authority effect by comparing different political figures to assess how leadership status and ideological positioning influence the acceptance of misinformation. In the post-truth era, many citizens rely more on intuition or sources aligned with their values than on objective verification. Second, because our study addressed only politicians' claims, future research could compare misinformation from political elites with that from ordinary social-media users or other sources and examine how well corrections work. Moreover, it would be interesting to explore other characteristics of the source of misinformation – such as gender or message features – to test how these factors affect credibility and correction effectiveness.

Funding

This research was supported by the Italian Ministry of University and Research (grant 2022WYEW47 “MIsperceptions, information disorder and polarisation between MEdia and political SYStems”).

Data

The replication dataset is available at http://thedata.harvard.edu/dvn/dv/ipsr-risp.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/ipo.2025.10084.

Competing interests

The authors declare none.

Footnotes

1 In Italy, the main fact-checkers include Pagella Politica, Open, Facta News, Butac, Il Post, and lavoce.info. Pagella Politica verifies political statements using a rigorous method recognized by the International Fact-Checking Network (IFCN), while Open and Facta News focus on viral misinformation. Butac addresses online hoaxes with a popular science approach, Il Post combines fact-checking with journalistic analysis, and lavoce.info provides verification on economic issues based on data and in-depth analysis.

2 The response rate of Wave 1 was 46.5%; the retention rate between Wave 1 and 2 was 82.4%.

3 Predicted values are reported in the Appendix.

References

Amazeen, MA, Thorson, E, Muddiman, A and Graves, L (2018) Correcting political and consumer misperceptions: The effectiveness and effects of rating scale versus contextual correction formats. Journalism & Mass Communication Quarterly 95, 28–48. doi:10.1177/1077699016678186
Barrera, O, Guriev, S, Henry, E and Zhuravskaya, E (2020) Facts, alternative facts, and fact checking in times of post-truth politics. Journal of Public Economics 182, 104123. doi:10.1016/j.jpubeco.2019.104123
Bash, CH, Meleo, EZ, Fera, J, Jaime, C and Basch, CE (2021) A global pandemic in the time of viral memes: COVID-19 vaccine misinformation and disinformation on TikTok. Human Vaccines and Immunotherapeutics 17, 2373–2377. doi:10.1080/21645515.2021.1894896
Begg, I, Anas, AP and Farinacci, S (1992) Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General 121, 446–458. doi:10.1037/0096-3445.121.4.446
Benegal, SD and Scruggs, LA (2018) Correcting misinformation about climate change: The impact of partisanship in an experimental setting. Climatic Change 148, 61–80. doi:10.1007/s10584-018-2192-4
Berinsky, AJ (2017) Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science 47, 241–262. doi:10.1017/S0007123415000186
Bisbee, J and Lee, DDI (2022) Objective facts and elite cues: Partisan responses to COVID-19. The Journal of Politics 84, 1278–1291. doi:10.1086/716969
Boukes, M and Hameleers, M (2022) Fighting lies with facts or humor: Comparing the effectiveness of satirical and regular fact-checks in response to misinformation and disinformation. Communication Monographs 89, 69–91.
Broda, E and Strömbäck, J (2024) Misinformation, disinformation, and fake news: Lessons from an interdisciplinary, systematic literature review. Annals of the International Communication Association 48, 139–166. doi:10.1080/23808985.2024.2323736
Bullock, JG (2011) Elite influence on public opinion in an informed electorate. American Political Science Review 105, 496–515. doi:10.1017/S0003055411000165
Clayton, K, et al. (2020) Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behavior 42, 1073–1095. doi:10.1007/s11109-019-09533-0
Cohen, GL (2003) Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology 85, 808–822. doi:10.1037/0022-3514.85.5.808
Ecker, UKH and Ang, LC (2019) Political attitudes and the processing of misinformation corrections. Political Psychology 40, 241–260. doi:10.1111/pops.12494
Ecker, UKH, Lewandowsky, S, Cook, J, Schmid, P, Fazio, LK and Brashier, N (2022) The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology 1, 13–29. doi:10.1038/s44159-021-00006-y
Ecker, UKH, Lewandowsky, S and Hogan, JL (2017) Reminders and repetition of misinformation: Helping or hindering its retraction? Journal of Applied Research in Memory and Cognition 6, 185–192. doi:10.1037/h0101809
Flynn, DJ, Nyhan, B and Reifler, J (2017) The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology 38, 127–150. doi:10.1111/pops.12394
Fridkin, K, Kenney, P and Wintersieck, A (2015) Liar, liar, pants on fire: How fact-checking influences citizens' reactions to negative advertising. Political Communication 32, 127–151. doi:10.1080/10584609.2014.914613
Garrett, RK and Poulsen, S (2019) Flagging Facebook falsehoods: Self-identified humor warnings outperform fact checker and peer warnings. Journal of Computer-Mediated Communication 24, 240–258. doi:10.1093/jcmc/zmz012
Garrett, RK and Weeks, BE (2013) The promise and peril of real-time corrections to political misperceptions. Paper presented at the 2013 Conference on Computer Supported Cooperative Work, San Antonio, 23–27 February. doi:10.1145/2441776.2441895
Gilens, M and Murakawa, N (2002) Elite cues and political decision-making. In Delli Carpini, MX, Huddy, L and Shapiro, RY (eds), Political Decision-Making, Deliberation and Participation. Greenwich: JAI Press.
Greene, S (2002) The social-psychological measurement of partisanship. Political Behavior 24, 171–197. doi:10.1023/A:1021859907145
Jerit, J and Zhao, Y (2020) Political misinformation. Annual Review of Political Science 23, 77–94. doi:10.1146/annurev-polisci-050718-032814
Kozyreva, A, et al. (2024) Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour 8, 1044–1052. doi:10.1038/s41562-024-01881-0
Kuklinski, JH, Quirk, PJ, Jerit, J, Schwieder, D and Rich, RF (2000) Misinformation and the currency of democratic citizenship. The Journal of Politics 62, 790–816. doi:10.1111/0022-3816.00033
Lewandowsky, S, Ecker, UKH and Cook, J (2017) Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition 6, 353–369. doi:10.1016/j.jarmac.2017.07.008
Lewandowsky, S, Ecker, UKH, Seifert, CM, Schwarz, N and Cook, J (2012) Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13, 106–131. doi:10.1177/1529100612451018
Lilleker, D and Pérez-Escolar, M (2023) Bullshit and lies? How British and Spanish political leaders add to our information disorder. Javnost - The Public 30, 566–585. doi:10.1080/13183222.2023.2244824
Liu, X, Qi, L, Wang, L and Metzger, MJ (2023) Checking the fact-checkers: The role of source type, perceived credibility, and individual differences in fact-checking effectiveness. Communication Research, 1–28.
Lodge, M and Taber, CS (2013) The Rationalizing Voter. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139032490
Mason, L (2015) “I disrespectfully agree”: The differential effects of partisan sorting on social and issue polarization. American Journal of Political Science 59, 128–145. doi:10.1111/ajps.12089
Matsubayashi, T (2013) Do politicians shape public opinion? British Journal of Political Science 43, 451–478. doi:10.1017/S0007123412000373
Mosca, L and Tronconi, F (2019) Beyond left and right: The eclectic populism of the Five Star Movement. West European Politics 42, 1258–1283. doi:10.1080/01402382.2019.1596691
Motta, M (2017) The dynamics and political implications of anti-intellectualism in the United States. American Politics Research 46, 465–498. doi:10.1177/1532673X17719507
Muhammed, TS and Mathew, SK (2022) The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics 13, 271–285. doi:10.1007/s41060-022-00311-6
Nyhan, B, Porter, E, Reifler, J and Wood, TJ (2020) Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior 42, 939–960. doi:10.1007/s11109-019-09528-x
Nyhan, B and Reifler, J (2010) When corrections fail: The persistence of political misperceptions. Political Behavior 32, 303–330. doi:10.1007/s11109-010-9112-2
Nyhan, B, Reifler, J and Freed, GL (2014) Effective messages in vaccine promotion: A randomized trial. Pediatrics 133, 835–842. doi:10.1542/peds.2013-2365
Osmundsen, M, Bor, A, Vahlstrup, PB, Bechman, A and Petersen, MB (2021) Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review 115, 999–1015. doi:10.1017/S0003055421000290
Pennycook, G, Epstein, Z, Mosleh, M, Arechar, AA, Eckles, D and Rand, DG (2021) Shifting attention to accuracy can reduce misinformation online. Nature 592, 590–595. doi:10.1038/s41586-021-03344-2
Porter, E, Velez, Y and Wood, TJ (2023) Correcting COVID-19 vaccine misinformation in 10 countries. Royal Society Open Science 10, 221097. doi:10.1098/rsos.221097
Pretus, C, Servin-Barthet, C, Harris, EA, Brady, WJ, Vilarroya, O and Van Bavel, JJ (2023) The role of political devotion in sharing partisan misinformation and resistance to fact-checking. Journal of Experimental Psychology: General 152, 3116–3134. doi:10.1037/xge0001436
Prike, T, Reason, R, Ecker, UKH, Swire-Thompson, B and Lewandowsky, S (2023) Would I lie to you? Party affiliation is more important than Brexit in processing political misinformation. Royal Society Open Science 10, 220508. doi:10.1098/rsos.220508
Ratneshwar, S and Chaiken, S (1991) Comprehension's role in persuasion: The case of its moderating effect on the persuasive impact of source cues. Journal of Consumer Research 18, 52–62. doi:10.1086/209240
Reuters Institute for the Study of Journalism (2025) Digital news report 2025: Italy. Available at: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2025/italy (accessed 10 July 2025).
Schmid, P and Betsch, C (2019) Effective strategies for rebutting science denialism in public discussions. Nature Human Behaviour 9, 931–939. doi:10.1038/s41562-019-0632-4
Seo, H, Xiong, A, Lee, S and Lee, D (2022) If you have a reliable source, say something: Effects of correction comments on COVID-19 misinformation. Proceedings of the Sixteenth International AAAI Conference on Web and Social Media 16, 896–907. doi:10.1609/icwsm.v16i1.19344
Shin, H (2024) How does topical diversity affect source credibility? Fact-checking coverage of politics, science, and popular culture. The International Journal of Press/Politics, 1–24.
Stedtnitz, C (2020) How Motivated Reasoning Leads to Tolerance of False Claims: Three Experimental Tests of Mechanisms. PhD dissertation, University of Essex. Available at: https://repository.essex.ac.uk/27894/1/stedtnitz_dissertation_gov.pdf (accessed 10 March 2025).
Swire-Thompson, B, Berinsky, AJ, Lewandowsky, S and Ecker, UKH (2017) Processing political misinformation: “Comprehending the Trump phenomenon.” Royal Society Open Science 4, 160802. doi:10.1098/rsos.160802
Swire-Thompson, B, Ecker, UKH, Lewandowsky, S and Berinsky, AJ (2020) They might be a liar but they're my liar: Source evaluation and the prevalence of misinformation. Political Psychology 41, 21–34. doi:10.1111/pops.12586
Taber, C and Lodge, M (2006) Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science 50, 755–769. doi:10.1111/j.1540-5907.2006.00214.x
Tajfel, H (1978) Differentiation between Social Groups: Studies in the Social Psychology of Intergroup Relations. London: Academic Press.
Thorson, E (2015) Belief echoes: The persistent effects of corrected misinformation. Political Communication 33, 460–480. doi:10.1080/10584609.2015.1102187
Van Bavel, JJ and Pereira, A (2018) The partisan brain: An identity-based model of political belief. Trends in Cognitive Sciences 22, 213–224. doi:10.1016/j.tics.2018.01.004
Vegetti, F and Mancosu, M (2020) The impact of political sophistication and motivated reasoning on misinformation. Political Communication 37, 678–695. doi:10.1080/10584609.2020.1744778
Vegetti, F and Mancosu, M (2022) Perceived exposure and concern for misinformation in different political contexts: Evidence from 27 European countries. American Behavioral Scientist 69, 131–147. doi:10.1177/00027642221118255
Vraga, EK and Bode, L (2017) Using expert sources to correct health misinformation in social media. Science Communication 39, 621–645. doi:10.1177/1075547017731776
Wack, M, Duskin, K and Hodel, D (2024) Political fact-checking efforts are constrained by deficiencies in coverage, speed, and reach. arXiv preprint arXiv:2412.13280.
Wagner, M, Tarlov, J and Vivyan, N (2014) Partisan bias in opinion formation on episodes of political controversy: Evidence from Great Britain. Political Studies 62, 136–158. doi:10.1111/j.1467-9248.2012.01002.x
Walter, N, Brooks, JJ, Saucier, CJ and Suresh, S (2021) Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis. Health Communication 36, 1776–1784. doi:10.1080/10410236.2020.1794553
Walter, N, Cohen, J, Holbert, RL and Morag, Y (2019) Fact-checking: A meta-analysis of what works and for whom. Political Communication 37, 350–375. doi:10.1080/10584609.2019.1668894
Wood, TJ and Porter, E (2019) The elusive backfire effect: Mass attitudes' steadfast factual adherence. Political Behavior 41, 135–163. doi:10.1007/s11109-018-9443-y
Zaller, JR (1992) The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511818691
Zhou, Y and Shen, L (2024) Processing of misinformation as motivational and cognitive biases. Frontiers in Psychology 15, 1430953. doi:10.3389/fpsyg.2024.1430953