
Audi alteram partem: An experiment on selective exposure to information

Published online by Cambridge University Press:  20 August 2025

Salvatore Nunnari*
Affiliation:
Department of Economics, Bocconi University, Milan, ITALY
Giovanni Montanari
Affiliation:
Department of Economics, New York University, New York, NY, USA
Corresponding author: Salvatore Nunnari; Email: salvatore.nunnari@unibocconi.it

Abstract

We report the results of an experiment on selective exposure to information. A decision-maker interested in learning about an uncertain state of the world can acquire information from one of two sources that have opposite biases: when informed about the state, they report it truthfully; when uninformed, they report their favorite state. A Bayesian decision-maker is better off seeking confirmatory information unless the source biased against the prior is sufficiently more reliable. In line with the theory, subjects are more likely to seek confirmatory information when sources are symmetrically reliable. On the other hand, when sources are asymmetrically reliable, subjects are more likely to consult the more reliable source even when prior beliefs are strongly unbalanced and this source is less informative. Our experiment suggests that base rate neglect and simple heuristics (e.g., listen to the most reliable source) are important drivers of the endogenous acquisition of information.

Information

Type
Original Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Economic Science Association.

1. Introduction

Social scientists have collected ample evidence that people selectively search for and attend to a subset of the available information, ignoring additional evidence (Frey, 1986; Iyengar & Hahn, 2009; Nickerson, 1998). In the words of Berelson and Steiner (1964), ‘people tend to see and hear communications that are favorable or congenial to their predispositions; they are more likely to see and hear congenial communications than neutral or hostile ones.’

This behavior has raised concern: as the availability of media choices has grown, selective exposure to like-minded sources has contributed to a deep partisan divide in news consumption (Barberá et al., 2015; Gentzkow & Shapiro, 2011; Lawrence et al., 2010; Peterson et al., 2021). In turn, this segregation into ‘echo chambers’ has been associated with the observed intensification of partisan sentiment as well as with the recent surge of populist parties in developed democracies (Bakshy et al., 2015; Flaxman et al., 2016; Mann & Ornstein, 2012). Why do we observe this behavior? Recent theoretical work in economics suggests that individuals might have systematic preferences for information consonant with their beliefs (Mullainathan & Shleifer, 2005) or that, being uncertain about the reliability of an information source, they interpret disconfirming evidence as less credible than confirming evidence and turn their attention towards the source they deem more informative (Gentzkow & Shapiro, 2006). Notably, even when individuals have no uncertainty about sources’ reliability and regard all media outlets as equally credible, selective exposure to like-minded sources can be a rational choice for an individual who has limited time or attention and can only access or process a subset of the available evidence.

In this paper, we investigate this last mechanism with a laboratory experiment. In particular, we ask the following research questions: How should an attention-constrained but otherwise rational agent optimally acquire information from multiple potential sources with different biases? What is the ability of this normative model to predict the observed demand for (dis)confirmatory information?

In our model, decision-makers can acquire a signal from one of two information sources in order to reduce their uncertainty about a state of the world. Importantly, decision-makers know the conditional distributions of signals for each information source, ruling out any uncertainty about the reliability of information sources. We also provide decision-makers with an exogenous prior belief on the state of the world and focus on an abstract decision environment that allows us to minimize the confounding effects of motivated beliefs. Once decision-makers observe the signal from the information source of choice, they guess the state of the world they deem more likely and receive a reward only for a correct guess. We manipulate the probability distributions of signals delivered by each information source (in order to control their relative reliability) and the prior belief over the state of the world. As a consequence of our manipulations, it is optimal to follow confirmatory information sources in some treatments but not in others. We verify optimal information acquisition in both environments and test for a confirmatory pattern over and above what can be explained by rational behavior.

Some predictions of our theory align with observed behavior, while others are not supported by the data. When the two information sources are equally reliable, information acquisition displays a confirmatory pattern, as the source supportive of the prior belief is the most consulted one. This is in line with theoretical predictions. On the other hand, when we manipulate the relative reliability of information sources and make the source less supportive of the prior belief the optimal choice, participants display a disconfirmatory pattern of information acquisition, regardless of the strength of the prior. This contrasts with the predictions of the model, suggesting decision-makers pay undue attention to the reliability of information sources and underweight the importance of the ex-ante uncertainty surrounding the phenomenon they are learning about. This suggests that the adoption of simple heuristics – e.g., listen to the more reliable source – is an important driver of the endogenous acquisition of information.

This paper contributes to two strands of literature. First, our paper contributes to a literature in experimental psychology on how people gather evidence to test hypotheses (Baron et al., 1988; Klayman & Ha, 1987; Skov & Sherman, 1986; Slowiaczek et al., 1992).Footnote 1 Second, our paper contributes to a recent literature in experimental economics on the choice over information sources with instrumental value (Ambuehl, 2021; Ambuehl & Li, 2018; Castagnetti & Schmacker, 2022; Chopra et al., 2024; Duffy et al., 2019; Sharma & Castagnetti, 2023). The most closely related work is Charness et al. (2021). Similarly to some of our treatments, they consider experimental conditions (labeled bias by commission) where decision-makers choose between two information sources that are biased towards opposite states and might send an incorrect signal with the same probability (that is, they are symmetrically reliable). In contrast to their setting, we investigate experimental treatments where the two available information sources are biased towards opposite states and might send an incorrect signal with different probabilities (that is, they are asymmetrically reliable).Footnote 2

This distinction is critical because, in many real-life contexts, information sources often differ not only in the direction of their bias but also in the magnitude of their bias. For example, political pollsters with opposing partisan leanings might have symmetric reliability, meaning they are equally likely to make errors when the truth misaligns with their bias, such as overestimating support for their preferred party by the same degree. Similarly, financial advisors with different investment philosophies – one favoring high-risk, high-reward strategies and the other favoring conservative, low-risk options – may have symmetric reliability if they are equally prone to misjudging market conditions that do not align with their preferred strategies. In contrast, media outlets often exhibit asymmetric reliability; for instance, a left-leaning outlet might be highly accurate when reporting on topics aligned with progressive values, like climate change, but less reliable when covering issues that challenge those values, while a right-leaning outlet may show the reverse pattern but at different accuracy levels overall. Medical experts provide another compelling example: a cardiologist might be highly reliable in diagnosing heart-related issues but less so for neurological problems, while a neurologist would display the opposite pattern, with differences in the degree of overall reliability depending on the doctor’s experience. By focusing on asymmetrically reliable sources, this paper introduces an additional layer of complexity that mirrors real-world challenges and broadens our understanding of selective exposure in these nuanced and practically relevant contexts.

Charness et al. (2021) conclude that ‘sub-optimal decision rules […] emerge here because it is difficult to correctly reason through information valuation problems, even in our deliberately simple setting.’ Our complementary experimental design allows us to uncover one simple heuristic that individuals rely on in such a complex decision-making environment: when the available sources have different reliability, choose the most trustworthy. If individuals suffer from base-rate neglect (a well-documented error in probabilistic reasoning; Bar-Hillel, 1980; Esponda et al., 2024; Kahneman & Tversky, 1973) and are not very responsive to the strength of their prior belief, this simple rule of thumb can also appear normatively appealing.

2. Task and theoretical predictions

Consider a decision-maker (DM) who is uncertain about a state of the world, θ ∈ {B, R}, and has to make a guess, a ∈ {B, R}. The DM earns a reward (normalized to 1) only if this guess matches the state of the world. We denote with π the DM’s prior belief that θ = B. We focus on unbalanced priors and, without loss of generality, we assume π ∈ (1/2, 1), that is, the DM’s prior is that the state is more likely to be B. Before making a guess, the DM acquires a piece of information from one of two information sources, Blue and Red. Each information source stochastically maps the state of the world to a signal s ∈ {b, r}, as described in Table 1.

Table 1. Conditional distribution of signals by information sources

Blue source       s = b       s = r
θ = B               1           0
θ = R              λ_B       1 − λ_B

Red source        s = b       s = r
θ = B            1 − λ_R      λ_R
θ = R               0           1

In each panel of Table 1, each cell displays the probability of observing a signal (column) in a specific state of the world (row). We can interpret λ_σ as a measure of bias (or as an inverse measure of reliability) of source σ ∈ {B, R}: Blue is biased towards B and λ_B represents the probability that it signals the state is B when it is, in fact, R; Red is biased towards R and λ_R represents the probability that it signals the state is R when it is, in fact, B. We assume that both sources are somewhat informative but also somewhat biased – that is, λ_B, λ_R ∈ (0, 1). In line with Gentzkow and Shapiro (2006), this simple framework can capture different real-world scenarios: the information source may be uninformed about the state and report a default signal; it may strategically slant its report when the information it holds is against its favorite state; or its intended signal may inadvertently be distorted.
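To fix ideas, the signal structure in Table 1 can be simulated directly. The following is a minimal Python sketch (ours, not part of the experimental software; the function name and arguments are illustrative):

```python
import random

def draw_signal(theta, source, lam_B, lam_R):
    """Draw a signal from a biased source, following Table 1.

    Blue always reports b in state B and mistakenly reports b with
    probability lam_B in state R; Red is the mirror image, reporting r
    in state R and mistakenly reporting r with probability lam_R in state B.
    """
    if source == "Blue":
        if theta == "B":
            return "b"  # Blue never errs in its favored state
        return "b" if random.random() < lam_B else "r"
    if theta == "R":
        return "r"      # Red never errs in its favored state
    return "r" if random.random() < lam_R else "b"
```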

2.1. Optimal guess for given information source

We characterize the DM’s optimal choice of information source by backward induction. First, we investigate the optimal guess for a given signal received by a given source. Second, we investigate what information source the DM prefers to consult, given the distribution of signals induced by each information source and how the DM will use these signals. In what follows, the notation a*(s, σ) denotes the optimal guess after observing signal s from information source σ. All proofs are in Appendix A.

Proposition 1 (Optimal Guess if Signal from Blue Source) The DM always follows the signal received from source Blue, that is, a*(b, Blue) = B and a*(r, Blue) = R.

Proposition 2 (Optimal Guess if Signal from Red Source) The DM always follows a confirmatory signal received from source Red, that is, a*(b, Red) = B. The DM follows a contradictory signal received from source Red if and only if the source is sufficiently reliable, that is, a*(r, Red) = R if λ_R < (1 − π)/π and a*(r, Red) = B otherwise.

Remember that the DM’s prior belief favors B. When she observes a signal confirming her prior from either source, the DM’s posterior belief that θ = B is strictly greater than her prior. Thus, in this case, the DM sticks with her prior belief and guesses accordingly. Receiving a signal that disagrees with the source bias – that is, receiving signal b (r) from the Red (Blue) source – is fully revealing: the DM learns the state with certainty, independently of her prior beliefs and the source reliability. Finally, when she observes signal r from Red, the DM’s posterior belief that θ = B is strictly smaller than her prior. In this case, the optimal guess depends on the model parameters: if Red is sufficiently reliable (i.e., λ_R is sufficiently small), it is optimal to follow its signal. Otherwise, the DM is better off ignoring the signal altogether and sticking with the guess induced by her prior belief. The relative size of λ_R must be gauged against the prior belief: the larger the prior in favor of B, the higher the reliability of Red required by the DM to follow an r signal from this source.
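The updating behind Propositions 1 and 2 is plain Bayes’ rule. A short Python sketch (again ours, with illustrative names) that reproduces the λ_R < (1 − π)/π threshold:

```python
def posterior_B(signal, source, prior, lam_B, lam_R):
    """P(theta = B | signal, source) by Bayes' rule, under Table 1."""
    if source == "Blue":
        like_B = 1.0 if signal == "b" else 0.0            # P(signal | theta = B)
        like_R = lam_B if signal == "b" else 1.0 - lam_B  # P(signal | theta = R)
    else:  # source == "Red"
        like_B = lam_R if signal == "r" else 1.0 - lam_R
        like_R = 1.0 if signal == "r" else 0.0
    joint_B = like_B * prior
    return joint_B / (joint_B + like_R * (1.0 - prior))

def optimal_guess(signal, source, prior, lam_B, lam_R):
    """Guess the state with the higher posterior (ties broken towards B)."""
    return "B" if posterior_B(signal, source, prior, lam_B, lam_R) >= 0.5 else "R"

# With prior = 0.8, an r signal from Red is followed only if
# lam_R < (1 - 0.8) / 0.8 = 0.25 (Proposition 2):
print(optimal_guess("r", "Red", prior=0.8, lam_B=0.5, lam_R=0.2))  # R
print(optimal_guess("r", "Red", prior=0.8, lam_B=0.5, lam_R=0.3))  # B
```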

2.2. Optimal choice of information source

First, consider the expected utility from consulting the source biased in favor of the prior, that is, Blue. Since the DM follows any signal received from Blue, acquiring information from this source always improves the confidence the DM has in her guess with respect to a decision made without collecting any additional information. Second, consider the expected utility from consulting the source biased against the prior, that is, Red. When this source is sufficiently biased – that is, when λ_R ≥ (1 − π)/π – the DM guesses B regardless of the signal. In this case, acquiring information from this source does not change the confidence the DM has in her guess with respect to a decision made without collecting any additional information. When, instead, this source is sufficiently reliable – that is, when λ_R < (1 − π)/π – the DM follows any signal received from Red and, as with Blue, acquiring information from this source always improves the confidence the DM has in her guess.

Since consulting the source biased in favor of the prior is always informative while consulting the source biased against the prior is informative only if λ_R < (1 − π)/π, the DM is better off consulting Blue when λ_R ≥ (1 − π)/π. When λ_R < (1 − π)/π, both sources are informative and the choice involves a trade-off. Intuitively, the DM chooses the source with the smallest probability of misleading signals. If the DM has a perfectly balanced prior, choosing Red over Blue reduces to λ_R < λ_B. When the prior is unbalanced, the DM has an incentive to choose the information source that is biased towards the prior. She prefers to observe a signal from Red only when this information source is sufficiently more reliable than the other. Proposition 3 summarizes this discussion and characterizes this threshold:

Proposition 3 (Optimal Information Source) The DM acquires information from Red if λ_R < ((1 − π)/π) λ_B and acquires information from Blue otherwise.
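Continuing the sketch above, the trade-off can be made explicit by computing each source’s ex-ante probability of inducing a correct guess; Proposition 3’s threshold is exactly the point where these expected accuracies cross (helper names are again ours):

```python
def expected_accuracy(source, prior, lam_B, lam_R):
    """Ex-ante probability of a correct guess when consulting `source`.

    Sums P(theta) * P(signal | theta) over the four (state, signal) pairs
    of Table 1, crediting the cases where the optimal guess (from the
    previous sketch) matches the state.
    """
    cond = ({"B": {"b": 1.0, "r": 0.0}, "R": {"b": lam_B, "r": 1.0 - lam_B}}
            if source == "Blue" else
            {"B": {"b": 1.0 - lam_R, "r": lam_R}, "R": {"b": 0.0, "r": 1.0}})
    return sum(p_theta * cond[theta][s]
               for theta, p_theta in (("B", prior), ("R", 1.0 - prior))
               for s in ("b", "r")
               if optimal_guess(s, source, prior, lam_B, lam_R) == theta)

def best_source(prior, lam_B, lam_R):
    """Proposition 3's rule; equivalent to comparing expected accuracies."""
    return "Red" if lam_R < (1.0 - prior) / prior * lam_B else "Blue"

# Example: prior = 0.6, lam_B = 0.7, lam_R = 0.3 -> Red wins (0.82 vs 0.72).
assert expected_accuracy("Red", 0.6, 0.7, 0.3) > expected_accuracy("Blue", 0.6, 0.7, 0.3)
assert best_source(0.6, 0.7, 0.3) == "Red"
```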

2.3. Summary of testable hypotheses

Below, we summarize the testable hypotheses that we set out to investigate empirically.

  1. H1 When information sources are equally reliable, it is optimal to acquire information from the source biased towards the prior.

  2. H2 When information sources have different reliability and the prior is mildly unbalanced, it is optimal to acquire information from the more reliable source. Conversely, when the prior is strongly unbalanced, it is optimal to acquire information from the source biased towards the prior, even if it is less reliable.

3. Experimental design

The experiment was conducted in 2017 on Prolific with 201 U.S. nationals and residents whose first language was English. Instructions are available in Appendix D.Footnote 3

Setup. The task builds on the classic urn paradigm, which has been extensively used in the experimental literature since Anderson and Holt (1997). Subjects are asked to guess the color of a ball randomly drawn from an urn containing only blue and red balls, for a total of 10 balls. One of our experimental manipulations is participants’ prior belief about the state, which we control by varying the number of blue and red balls in the urn. We model the information sources as imperfectly informed ‘experts.’ Before making their guess, participants have to consult either the Blue Expert or the Red Expert, randomly extracted from two populations of experts. In each population, a certain fraction of experts is informed about the true color of the extracted ball and issues a truthful report revealing such color. The complementary fraction of experts is uninformed about the color of the extracted ball and always issues the same report.Footnote 4 Both experts can be consulted for free, but participants are forced to choose only one of them. We used the strategy method to elicit participants’ guesses about the color of the ball conditional on the expert’s signal. On the same screen, we elicited their confidence in each of these guesses, on a scale between 0 and 100. We used these statements to construct a measure of observed posterior beliefs.Footnote 5

Rounds. The discussion above describes one round of the experiment. The experiment consists of a sequence of five rounds. In each round, the computer draws the state of the world and the messages sent by the two experts from the same distributions and independently from any past action or outcome. At the end of each round, participants learned the expert’s signal, their relevant choice given the signal, the color of the extracted ball, and their payoff in that round.

Payoffs. On top of earning a fixed amount of $1 for taking part in the experiment, subjects are remunerated with $1 for guessing the color of the ball correctly in a randomly selected round. Since recent research shows that complex incentive schemes systematically bias truthful reporting of beliefs (Danz et al., 2022), we stressed the importance of revealing truthful confidence assessments but did not incentivize these statements.

Treatments. We employ a between-subjects design, where we manipulate the prior belief that the ball drawn from the urn is blue, π, and the relative reliability of the two experts, (λ_R, λ_B). We consider both a mildly and a strongly unbalanced prior, respectively π = 0.6 and π = 0.8. Regarding the sources’ bias, we consider the case where the Blue and Red Experts are equally reliable, (λ_R, λ_B) = (0.5, 0.5), and the case where the Red Expert is more reliable, i.e., (λ_R, λ_B) = (0.3, 0.7). This leads to four experimental treatments:

  • E6: equal reliability, prior mildly favors ball being blue;

  • E8: equal reliability, prior strongly favors ball being blue;

  • S6: skewed reliability (Red is more reliable), prior mildly favors ball being blue;

  • S8: skewed reliability (Red is more reliable), prior strongly favors ball being blue.

These four treatments have been designed to test the key predictions of the model, as summarized in Section 2: only when the Red Expert is more reliable and the prior is mildly unbalanced – that is, in treatment S6 – is it optimal to consult the contrarian expert. In all other treatments, it is optimal to consult the supportive expert.
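Applying best_source from the Section 2 sketch to the four parameterizations reproduces this classification (illustrative Python, not the experimental software):

```python
# Treatment parameters: prior pi and reliabilities (lam_R, lam_B).
treatments = {
    "E6": dict(prior=0.6, lam_B=0.5, lam_R=0.5),
    "E8": dict(prior=0.8, lam_B=0.5, lam_R=0.5),
    "S6": dict(prior=0.6, lam_B=0.7, lam_R=0.3),
    "S8": dict(prior=0.8, lam_B=0.7, lam_R=0.3),
}
for name, params in treatments.items():
    print(name, "->", best_source(**params))
# E6 -> Blue, E8 -> Blue, S6 -> Red, S8 -> Blue
```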

4. Experimental results

Figure 1 shows the percentage of decisions where subjects consulted the Blue Expert – that is, the expert biased in favor of the prior – disaggregated by treatment. When information sources are equally reliable, this happens in 66.3% of decisions with mildly unbalanced priors (treatment E6) and in 70.2% of decisions with strongly unbalanced priors (treatment E8). These proportions are statistically different from 50%, according to one-sample tests of proportions (p-values < 0.001). This behavior is in line with hypothesis H1, as the Blue Expert is always the optimal choice in these environments. When the information source biased against the prior (i.e., the Red Expert) is more reliable, the Blue Expert is chosen in 24% of decisions with mildly unbalanced priors (treatment S6) and in 24.3% of decisions with strongly unbalanced priors (treatment S8). These proportions are statistically different from 50%, according to one-sample tests of proportions (p-values < 0.001).

Fig. 1 Information acquisition by treatment: theory vs. observed. Notes: The theoretical probabilities are 100% for E8, S8, and E6, and 0% for S6. The observed probabilities are 70.19% for E8 (N = 265, SE = 2.81), 24.26% for S8 (N = 235, SE = 2.80), 66.27% for E6 (N = 255, SE = 2.96), and 24.00% for S6 (N = 250, SE = 2.70). The black vertical lines represent 95% confidence intervals.

When comparing outcomes across treatments, we use random-effects logistic regressions, which take into account the panel nature of the data (that is, the fact that the same individual contributes more than one observation to the dataset). The estimates from these regressions are presented in Table 2.

Table 2. Random-effects logistic regressions to estimate ATEs

Notes: Standard errors in parentheses.

* p < 0.05, ** p < 0.01, *** p < 0.001.

Keeping the sources’ relative reliability constant (equal or skewed) and manipulating the prior belief about the state from a mildly unbalanced one (0.6) to a strongly unbalanced one (0.8) does not affect the propensity to consult the Blue Expert (the p-value of E6 vs E8 is 0.595; the p-value for S6 vs S8 is 0.763). On the other hand, keeping the prior belief about the state constant (0.6 or 0.8) and manipulating the relative reliability of the sources from equal to being skewed in favor of Red strongly decreases the chance of consulting the Blue Expert: the difference between E6 and S6 (−42.4%) and the difference between E8 and S8 (−45.8%) are both statistically significant at the 1% level (p-values < 0.0001). This highlights that relative reliability trumps the importance of the prior in subjects’ considerations. The regression estimated in the last column of Table 2 confirms that, contrary to the theoretical predictions, subjects are equally sensitive to sources’ reliability in treatments with a mildly unbalanced prior and in treatments with a strongly unbalanced prior.Footnote 6 Findings 1 and 2 below summarize this discussion.

Finding 1. When information sources are equally reliable, subjects are more likely to acquire information from the source biased towards the prior, which is the optimal choice. This behavior is in line with hypothesis H1.

Finding 2. When the source biased against the prior is more reliable, subjects are more likely to acquire information from the more reliable source, regardless of the prior and whether this is the optimal choice. This behavior is in contrast with hypothesis H2.

Even when subjects are more likely to choose the optimal source of information (in treatments E6, E8, and S6), they are prone to mistakes: when information sources are equally reliable, they listen too often to the expert biased against the prior (33.6% of decisions in E6 and 29.8% of decisions in E8); when the Red Expert is more reliable and the uncertainty on the state is sufficiently strong, they listen too often to the expert biased in favor of the prior (24% of decisions in S6). Mistakes are, of course, even more frequent when subjects are more likely to consult the less informative expert (in treatment S8, where this happens in 75.7% of decisions).

Figure 2 presents descriptive statistics for participants’ guesses about the state of the world by treatment and information set. Regardless of the treatment, the vast majority of participants (that is, between 91.7% and 100%) use the available information optimally and guess Blue when either expert says Blue. Participants are more reluctant to guess Blue when either expert says Red, but this is only partially due to Bayesian thinking: when the optimal guess is indeed Red (that is, in all treatments when consulting the Blue expert and in treatments with a mildly unbalanced prior when consulting the Red expert), they guess Blue between 14.5% (E8 and Blue expert) and 48.8% (E6 and Red expert) of the time. When the optimal guess is Blue, they do so only between 32% and 53.2% of the time.

Fig. 2 Guess on the state by treatment and information set: theory vs. observed data.

To quantify the cost of these mistakes (at both the information acquisition and information processing stages), Table 3 reports the average guessing accuracy improvement over the prior – that is, the change in the probability of correctly guessing the state relative to simply following the prior – disaggregated by treatment. We compare this with two benchmarks: the guessing accuracy improvement by hypothetical subjects who choose the same information source as actual subjects but process the information as Bayesian learners; and the guessing accuracy improvement by hypothetical subjects who choose the optimal information source and process the information as Bayesian learners.Footnote 7

Table 3. Average guessing accuracy improvement over prior by treatment

Notes: Since in all treatments the prior is that θ is more likely to be B, the counterfactual probability of correctly guessing θ without any additional information is given by the empirical frequency of θ = B. Thus, in columns 3−5, we compute the average guessing accuracy improvement as the difference between the empirical frequency of a correct guess (in three different scenarios) and the empirical frequency of θ = B. The experimental software generates a state of the world and a signal for each source (independently for each participant and each round) before the participant chooses the source. This allows us to construct the counterfactual in the last column (where, if the participant chose the suboptimal source, we use the signal from the other source, unobserved by the participant but available in our dataset). We report standard errors in parentheses.
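The theoretical benchmark in the last column (footnote 7) follows from the closed-form accuracies in Section 2 and can be checked with a few lines (a self-contained Python sketch under the paper’s parameterization; names are ours):

```python
def optimal_improvement(prior, lam_B, lam_R):
    """Theoretical accuracy gain over guessing the prior state (B).

    Consulting Blue yields accuracy prior + (1 - prior) * (1 - lam_B);
    consulting Red, when optimal, yields prior * (1 - lam_R) + (1 - prior).
    Guessing B with no signal succeeds with probability prior.
    """
    if lam_R < (1.0 - prior) / prior * lam_B:   # Proposition 3: Red is optimal
        return (1.0 - prior) - prior * lam_R
    return (1.0 - prior) * (1.0 - lam_B)        # otherwise Blue is optimal

params = {"E6": (0.6, 0.5, 0.5), "E8": (0.8, 0.5, 0.5),
          "S6": (0.6, 0.7, 0.3), "S8": (0.8, 0.7, 0.3)}
for name, (prior, lam_B, lam_R) in params.items():
    print(f"{name}: +{optimal_improvement(prior, lam_B, lam_R):.0%}")
# E6: +20%, E8: +10%, S6: +22%, S8: +6% -- matching footnote 7
```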

Finding 3. Subjects improve guessing accuracy less than they could in all treatments. Indeed, when experts have asymmetric reliability and the prior is strongly unbalanced, subjects make worse guesses than they would by simply following their priors.

This result is due, in part, to subjects making sub-optimal use of the information provided by experts (regardless of whether the chosen information source was optimal or not): the improvement in average accuracy that could be obtained without changing information source but adopting Bayesian inference ranges from 2.8% (in treatment E6) to 12.4% (in treatment S6). At the same time, choosing a suboptimal information source also has a cost in terms of guessing accuracy, especially in treatments S8 and E6.

In order to shed light on the motives underlying information acquisition, Appendix B investigates how subjects use the advice received from the information source of choice.Footnote 8 We find that, as predicted, subjects are deferential to confirmatory advice. On the other hand, subjects follow contradictory advice sub-optimally: they are excessively skeptical of contradictory advice from the source biased towards the prior and excessively trusting of contradictory advice from the source biased against the prior. This pattern is particularly pronounced in treatment S8, where the most substantial mistakes are observed.

To understand this deviation from the Bayesian benchmark, in the same Appendix, we analyze posterior beliefs about the state of the world. With the exception of the case where learning is easiest (that is, when the prior is strongly unbalanced and the Blue expert says b), observed posterior beliefs are different from those of a Bayesian learner: the change with respect to the prior is excessive when advice is in line with the source bias and insufficient when advice is against the source bias. For example, participants often fail to act on the fully revealing r signal from the Blue source. We conclude that subjects are excessively responsive to information aligned with a source bias and insufficiently responsive to information misaligned with a source bias (which, in fact, perfectly reveals the state of the world).

Overall, these findings suggest that participants’ behavior can be explained by three key drivers: base rate neglect, a reliability heuristic, and a certainty-seeking heuristic. First, participants appear to underweight the prior probability of the state of the world when deciding which source to consult. For example, even in treatments where the prior is strongly unbalanced, participants often choose the more reliable source, ignoring the fact that the prior heavily favors one state. This behavior is consistent with base rate neglect, a well-documented cognitive bias in probabilistic reasoning, where individuals fail to adequately incorporate prior probabilities into their judgments.

Second, participants’ preference for the more reliable source in treatments with asymmetric reliability highlights the use of a simple heuristic: ‘choose the most reliable source.’ While this heuristic simplifies decision-making, it leads to suboptimal information acquisition in scenarios where the less reliable source is optimal given the prior. Third, ambiguity aversion provides another possible explanation for participants’ behavior, particularly in their preference for more reliable sources. Ambiguity-averse individuals might seek to maximize their chances of receiving a signal that identifies the state with certainty, and a more reliable source is indeed more likely to give advice misaligned with its bias, an unambiguous signal that completely removes any uncertainty about the state of the world. The chance that the Red Expert says blue is increasing in both the Red Expert’s reliability and in the prior belief that the state is B (while the chance that the Blue Expert says red is decreasing in this belief), making the Red Expert particularly appealing for certainty-seeking individuals in treatment S8, the treatment where we observe the largest incidence of mistakes. At the same time, ambiguity aversion alone cannot account for all observed patterns: participants’ underreaction to fully revealing information, as revealed both by their guesses and their posterior beliefs about the state of the world, contradicts the certainty-seeking heuristic (‘choose the information structure most likely to give unambiguous signals’) associated with ambiguity aversion.

These behavioral tendencies – base rate neglect and reliance on heuristics – have important implications for understanding how individuals navigate complex environments with conflicting and biased information sources. They suggest that decision-makers prioritize cues like reliability and clarity over statistical optimality, especially when faced with asymmetry in source trustworthiness. Such patterns mirror real-world challenges where individuals often rely on heuristics or biases to simplify decisions, sometimes at the expense of optimal outcomes.

5. Conclusion

This paper formalized a model of selective exposure based on Bayesian updating and tested its predictions through a laboratory experiment. We asked two research questions: when is it rational to seek (dis)confirmatory information? Do people behave according to this rational benchmark, or do we need to impose additional structure? Overall, our experiment suggests that explaining selective exposure to information sources with Bayesian inference has some limitations: in line with Bayesian learning, we do observe confirmatory patterns in the selection of information when sources are equally reliable; at the same time, these trends switch to disconfirmatory attitudes as soon as the source biased against the prior becomes more reliable, with no role for the strength of prior beliefs. We see many possible directions for future research: while we study the simplest possible setup to investigate selective exposure to information sources, it would be interesting to investigate more complex environments where decision-makers have the opportunity to collect multiple pieces of information from sources, or must pay a (possibly heterogeneous) price to receive messages from a source.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/esa.2025.8.

Statements and Declarations

The authors declare that they have no relevant or material financial interests that relate to the research described in this paper.

Footnotes

We are grateful to Gary Charness and audiences at New York University and the 2020 Global Online Meeting of the Economic Science Association for helpful comments. Nunnari gratefully acknowledges financial support from the European Research Council through ERC Grant 852526 (POPULIZATION).

1 Testing a hypothesis means checking whether a statement of the form ‘p implies q’ is true. Logically, one can test the same hypothesis by checking whether a statement of the form ‘not q implies not p’ is true. This means that, in this context, it is difficult to define what it means for information to be confirmatory or contradictory. Our experiment is not designed to test the ability to construct a logical test but rather the endogenous acquisition of an informative signal.

2 Charness et al. (2021) also consider treatments where the two information sources are asymmetrically reliable but biased towards the same state and, thus, there is no trade-off between reliability and direction of the bias (in fact, the two experts can easily be ranked by Blackwell ordering); and treatments where the two information sources are biased towards opposite states and might fail to send a signal (labeled bias by omission). In their design, it is the nature of the bias (commission versus omission) that determines whether it is optimal to consult information sources biased towards or against prior beliefs. In contrast, we achieve this goal by keeping the nature of the bias fixed and varying the sources’ relative reliability.

3 Instructions were followed by three multiple-choice questions to verify that participants understood the details of the experiment. After answering each of these questions, subjects saw a commented feedback page with the correct answers and a further explanation of the reasoning leading to the correct answer. Appendix C reports observed behavior in the subsamples determined by the number of questions answered correctly in the comprehension quiz. In addition, participants were required to spend a minimum amount of time on each page of the instructions and could not continue to the following page until a specified amount of time (ranging from 30 to 60 seconds) had elapsed.

4 While this implementation eases participants’ understanding of random variables, participants essentially face two Blue and two Red sources – for each color, one accurate and one uninformative. This implementation is equivalent to our theoretical framework for expected utility maximizers, but it may not be neutral for individuals who evaluate uncertain prospects differently. These potential effects are discussed in Section 4.

5 We mapped a confidence of 0 – that is, ‘I think it is just as likely that I am right or wrong’ – to a posterior belief of 0.5 (i.e., indifference between guessing blue and guessing red) and a confidence of 100 – that is, ‘I think I am sure my guess is correct’ – to a posterior of 1 (i.e., certainty in the guess). Intermediate levels of confidence were mapped proportionally to intermediate posteriors between 0.5 and 1.
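In other words, a reported confidence c ∈ [0, 100] corresponds to a posterior belief of 0.5 + c/200 on the chosen guess (the explicit linear form of the proportional mapping described above); for example, a confidence of 60 maps to a posterior of 0.8.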

6 The p-value for the coefficient interacting Skewed Reliability with Strongly Unbalanced Prior is 0.585.

7 The theoretical accuracy improvements over the prior when choosing the optimal source and updating beliefs as a Bayesian learner (which coincide with the empirical ones only in the limit as the sample size grows larger) are +10% for E8, +6% for S8, +20% for E6, and +22% for S6.

8 We must note that interpreting these results is complicated, at least in part, by self-selection, as subjects choose their information source.

References

Ambuehl, S. (2021). Can incentives cause harm? Tests of undue inducement. Unpublished manuscript.
Ambuehl, S., & Li, S. (2018). Belief updating and the demand for information. Games and Economic Behavior, 109, 21–39.
Anderson, L. R., & Holt, C. A. (1997). Information cascades in the laboratory. American Economic Review, 87(5), 847–862.
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science, 26(10), 1531–1542.
Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44(3), 211–233.
Baron, J., Beattie, J., & Hershey, J. C. (1988). Heuristics and biases in diagnostic reasoning II: Congruence, information, and certainty. Organizational Behavior and Human Decision Processes, 42(1), 88–110.
Berelson, B., & Steiner, G. A. (1964). Human behavior: An inventory of scientific findings. New York: Harcourt, Brace & World.
Castagnetti, A., & Schmacker, R. (2022). Protecting the ego: Motivated information selection and updating. European Economic Review, 142, 104007.
Charness, G., Oprea, R., & Yuksel, S. (2021). How do people choose between biased information sources? Evidence from a laboratory experiment. Journal of the European Economic Association, 19(3), 1656–1691.
Chopra, F., Haaland, I., & Roth, C. (2024). The demand for news: Accuracy concerns versus belief confirmation motives. The Economic Journal, 134(661), 1806–1834.
Danz, D., Vesterlund, L., & Wilson, A. J. (2022). Belief elicitation and behavioral incentive compatibility. American Economic Review, 112(9), 2851–2883.
Duffy, J., Hopkins, E., Kornienko, T., & Ma, M. (2019). Information choice in a social learning experiment. Games and Economic Behavior, 118, 295–315.
Esponda, I., Vespa, E., & Yuksel, S. (2024). Mental models and learning: The case of base-rate neglect. American Economic Review, 114(3), 752–782.
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1), 298–320.
Frey, D. (1986). Recent research on selective exposure to information. Advances in Experimental Social Psychology, 19, 41–80.
Gentzkow, M., & Shapiro, J. M. (2006). Media bias and reputation. Journal of Political Economy, 114(2), 280–316.
Gentzkow, M., & Shapiro, J. M. (2011). Ideological segregation online and offline. Quarterly Journal of Economics, 126(4), 1799–1839.
Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59(1), 19–39.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.
Klayman, J., & Ha, Y.-W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228.
Lawrence, E., Sides, J., & Farrell, H. (2010). Self-segregation or deliberation? Blog readership, participation, and polarization in American politics. Perspectives on Politics, 8(1), 141–157.
Mann, T. E., & Ornstein, N. J. (2012). It’s even worse than it looks: How the American constitutional system collided with the new politics of extremism. New York, NY: Basic Books.
Mullainathan, S., & Shleifer, A. (2005). The market for news. American Economic Review, 95(4), 1031–1053.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.
Peterson, E., Goel, S., & Iyengar, S. (2021). Partisan selective exposure in online news consumption: Evidence from the 2016 presidential campaign. Political Science Research and Methods, 9(2), 242–258.
Sharma, K., & Castagnetti, A. (2023). Demand for information by gender: An experimental study. Journal of Economic Behavior & Organization, 207, 172–202.
Skov, R. B., & Sherman, S. J. (1986). Information-gathering processes: Diagnosticity, hypothesis-confirmatory strategies, and perceived hypothesis confirmation. Journal of Experimental Social Psychology, 22(2), 93–121.
Slowiaczek, L. M., Klayman, J., Sherman, S. J., & Skov, R. B. (1992). Information selection and use in hypothesis testing: What is a good question, and what is a good answer? Memory & Cognition, 20(4), 392–405.