
Why the marketplace of ideas needs more markets

Published online by Cambridge University Press:  23 January 2025

Bartlomiej Chomanski*
Affiliation:
Department of Philosophy, Adam Mickiewicz University, Poznan, Poland

Abstract

It is frequently argued that false and misleading claims, spread primarily on social media, are a serious problem in need of urgent response. Current strategies to address the problem – relying on fact-checks, source labeling, limits on the visibility of certain claims, and, ultimately, content removals – face two serious shortcomings: they are ineffective and biased. Consequently, it is reasonable to want to seek alternatives. This paper provides one: to address the problems with misinformation, social media platforms should abandon third-party fact-checks and rely instead on user-driven prediction markets. This solution is likely less biased and more effective than currently implemented alternatives and, therefore, constitutes a superior way of tackling misinformation.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

Philosophers, social scientists, political commentators, and policymakers frequently bemoan the prevalence of false and misleading content on social media (we may call it “social media’s information problem” or SMIP)Footnote 1 . There is a serious worry that the flood of falsehoods will undermine the quality of (political) discussion online, enhance polarization, and contribute to harmful real-world outcomes, from tilting election results, through weakening support for public health measures, to precipitating political violence.

This paper aims to outline a new solution to SMIP. I begin by introducing the problem, and some currently popular attempts to solve it, with a special focus on Meta’s anti-misinformation strategy. I then criticize such solutions for their likely bias, ineffectiveness, and failure to address the underlying cause of SMIP: user incentives.

I follow this critique with a positive proposal: alter the incentives by rewarding accuracy through the integration of prediction markets with social media platforms. Prediction markets offer monetary rewards to participants (“traders”) for making correct predictions about some future outcome (unlike in stock markets, the relevant outcomes extend far beyond future stock prices). Insofar as the most reliable way to increase one’s chances of making correct predictions is to collect and rationally assess the relevant evidence, the rewards offered by prediction markets incentivize being well-informed and making reasonable, unbiased inferences about the information one possesses – the sorts of epistemic practices that extant solutions to SMIP aim to encourage. I conclude by arguing that, though it’s far from certain, this approach has enough of a chance of improving the status quo that it is worth testing in the real world.

1.1. The misinformation problem

I start by breaking SMIP down into four schematic theses:

  • OVERSUPPLY: as far as political discourse is concerned, social media is awash with information that is false and misleading (hereafter, “misinformation”); there is simply too much misinformation available online, especially on social media.

  • OVERCONSUMPTION: online misinformation is too popular. Too many people want to, and in fact do, consume it, and too many people believe it.

  • ON-PLATFORM INFLUENCE: false information exerts a concerning degree of influence on the structure of discussion on social media platforms themselves; it destroys political discourse online, by moving it farther away from the deliberative ideal of a public square, e.g. by contributing to growing (online) polarization and the emergence of echo chambers.

  • OFF-PLATFORM INFLUENCE: people seem to make a range of real-world political decisions at least partly on the basis of false claims they see online; this adversely affects real-world political outcomes.

Commitment to SMIP appears to require commitment to the conjunction of these theses. Various authors concerned with SMIP will embrace them to varying degrees, filling out the details and the causal connections in more specific ways. For our present purposes, however, it is enough to assume the truthFootnote 2 of SMIP in the general, vague formulation above for the sake of argument.

To the extent that SMIP is a genuine problem, it should be addressed – we should want to prevent both kinds of harms that misinformation appears to contribute to. This belief is widely shared among scholars, policymakers, and tech leaders, and, consequently, social media platforms have undertaken considerable efforts to combat SMIP.

The usual remedies have so far primarily consisted of attempts to prevent people from accessing misinformation (by means of content removals and restrictions on how much it can be shared, thus targeting OVERSUPPLYFootnote 3 ) or to dissuade them from believing misinformation (by means of adding fact-checking and source reliability labels, and the increasingly popular “prebunking” – providing users with counter-arguments against misinformation before they access questionable content) – thus seeking to combat OVERCONSUMPTION.Footnote 4 Importantly, fact-checkers’ negative verdicts concerning a piece of content frequently form the basis for measures such as restrictions on its spread or even its complete removal from the platform. Thus, fact-checking is one of the main drivers of both prevention and dissuasion. (Of course, there are other bases for content removal, such as content constituting hate speech or harassment, but these fall outside the scope of this paper.)Footnote 5

Regardless of detail, solutions to SMIP that rely on fact-checking to pursue dissuasion and prevention must overcome three challenges: first, the selection challenge: deciding which content is an apt target for a fact-checking investigation; second, the assessment challenge: investigating and pronouncing on the truth/falsity of the alleged misinformation; third, the response challenge: selecting which of the previously identified misinformation to intervene on, and in what way (from complete prevention to mere dissuasion). All of these decisions can (but don’t always have to) involve a substantial degree of discretion.

Conveniently, some social media companies wear their anti-misinformation strategies on their sleeve, explaining in some detail the steps they take to combat SMIP on their platforms.

Meta’s own website (2021), for instance, provides a fairly thorough summary of the company’s stated motivation for relying on “independent third-party fact-checkers” – “[t]o fight the spread of misinformation and provide people with more reliable information” (ibid., np.) – as well as the process through which information is selected, rated, and dealt with on the basis of the fact-checkers’ verdicts.

The process, as Meta outlines it, has three stages: first, the potential targets for a fact-checking investigation are selected by a variety of methods, from crowdsourcing, through algorithmic detection, to the independent decisions by the fact-checkers:

Fact-checkers can identify hoaxes based on their own reporting, and Meta also surfaces potential misinformation to fact-checkers using signals, such as feedback from our community or similarity detection. Our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading (ibid.).

Second, once the content is supplied to them, “[f]act-checkers review and rate the accuracy of stories through original reporting, which may include interviewing primary sources, consulting public data and conducting analyses of media, including photos and video” (ibid.). The company has no involvement in that stage of the process.

Third, Meta retains substantially more discretion over the response: the company decides what will happen to the content identified as misinformation, though the choice seems to be limited to two options – removing it entirely or merely labeling and throttling it. As the company’s website puts it,

Each time a fact-checker rates a piece of content as false, we significantly reduce the content’s distribution so that fewer people see it. We [then] apply a warning label that links to the fact-checker’s article, disproving the claim with original reporting. (ibid.)

Meta thus outsources the assessment challenge entirely to a group of fact-checking organizations, all of which hold appropriate credentials in the form of certification from the Poynter Institute (ibid.), while lending more of a helping hand in selection and reserving the response entirely for itself.

In what follows, I will take Meta’s approach as paradigmatic, in part because it applies to a number of extremely popular social media platforms (including, of course, Facebook), and in part because it is similar to the way other digital giants, such as Alphabet, do things. How Meta conducts fact-checking thus has enormous influence on online discourse globally.

2. Assessing the status quo

I take the evaluation of solutions to SMIP to revolve around two questions: the question of effectiveness (do dissuasion techniques actually change minds? Do prevention techniques actually prevent people from consuming misinformation?); and the question of bias (to what extent is the selection and response influenced by non-epistemic considerations, such as the political slant of some piece of misinformation?).Footnote 6

A solution to SMIP is effective insofar as it reduces belief in misinformation without at the same time reducing belief in true information.

A solution to SMIP is (politically) unbiased when it evaluates both the truth (more specifically, the quality of evidential support) and the importance (i.e. whether it merits selection for investigation in the first place) of a claim in the same way regardless of its political alignment.

The optimal solution to SMIP would combine as high effectiveness as possible with as little bias as possible. It would reliably reduce belief in misinformation, while treating epistemically similar cases alike, regardless of the political ideology they seem to support.
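To make these two desiderata slightly more explicit, here is a schematic formalization of my own (one possible rendering, not a formula drawn from the literature discussed here). Effectiveness can be written as

\[ E = \Delta B_{\mathrm{false}} - \Delta B_{\mathrm{true}}, \]

where \( \Delta B_{\mathrm{false}} \) is the intervention-induced reduction in belief in false claims and \( \Delta B_{\mathrm{true}} \) the corresponding reduction in belief in true claims; higher \( E \) means greater effectiveness. A solution is politically unbiased to the extent that, for epistemically similar claims, the probability of being selected and sanctioned does not depend on the claim’s political coding:

\[ P(\mathrm{sanctioned} \mid \mathrm{false},\ \mathrm{left\text{-}coded}) = P(\mathrm{sanctioned} \mid \mathrm{false},\ \mathrm{right\text{-}coded}). \]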

There is enough evidence of both prevalent bias and limited effectiveness to justify the view that Meta’s solution to SMIP is, at most, second-best. While the evidence does not settle the case, it is suggestive enough to make a search for viable alternative solutions to SMIP reasonable. Generally, a better solution would improve both bias and effectiveness simultaneously.

I will now outline reasons to think that the existing solutions to SMIP, pursued by the likes of Meta, are both biased and ineffective. I do not intend what follows as knockdown arguments for the conclusion that bias and ineffectiveness definitely plague Meta’s solution. Rather, I intend to show that these concerns are serious enough to motivate an exploration of possible alternatives.

2.1. Bias

Two theoretical perspectives support the claim that fact-checking in the real world is likely to be biased. First, as Joseph Uscinski and Ryden Butler (Reference Uscinski and Butler2013) argue, fact-checkers’ political ideology is likely to induce them to view epistemically similar claims differently depending on their political alignment (especially when it comes to the selection challenge). As they put it, fact-checkers

often use political ideology to interpret an otherwise chaotic political world …. The cognitive function of the ideology might make it impossible to produce an ideologically unbiased sample of statements [to be fact-checked], regardless of the intention and professionalism of the fact checker. Even though ideological bias may be unconscious and unintended, it can drive case selection: Issues deemed important by a fact checker’s ideology might receive more attention, and statements incongruent with a fact checker’s view of how the world works may be subject to more scrutiny. This is the case even if the fact checker asks herself which statements her readers might find “interesting.” … the fact checker must rely on her gut feeling about what is interesting, and the cognitive function of ideology is precisely to highlight some facts as more interesting than others—because they represent aspects of reality that are made pertinent by the ideology (166, emphasis added)

On this view, judgments concerning the importance of some statement – judgments driving selection – will necessarily be colored by non-epistemic values. For people for whom political ideology is important, it will constitute a major source of such values.

But it needn’t be the only one. The second perspective that predicts fact-checker bias comes from Roger Koppl’s (Reference Koppl2018) more general “economic theory of experts,” where experts are defined as people paid for their opinion – a definition that fits the purposes of this paper, especially as far as Meta’s fact-checkers are concerned, given that the company spends substantial amounts on grants for fact-checking organizations. Koppl’s theory of experts is a deliberate parallel to a more established public choice model of political behavior, applying the toolbox of economic analysis to understand how experts act.

The theory is based on the assumption that experts seek to maximize utility. The assumption is relatively straightforward, though Koppl is careful to stress that expert utility functions are quite capacious. They include not just the (selfless) pursuit of truth, and not just the (selfish) pursuit of material goods, but a variety of other motivations, such as identification with the profession, sympathy for the client, the desire not to antagonize others, and so on.

Combining this with the observation that expert cognition is limited, and so error is inevitable, Koppl argues that expert errors (both honest mistakes and deliberate deception) will be skewed by experts’ incentives, rather than distributed randomly. They will favor some parties and perspectives at the expense of others, irrespective of actual truthfulness.

To substantiate this claim, Koppl surveys a wide-ranging literature on cognitive biases, highlighting findings of observer effects (what people notice is a partial function of their expectations and hopes), role effects (what people notice is a partial function of whose perspective – role – they adopt when solving problems), conformity effects (what people notice is a partial function of what they believe others are noticing), and the better-known biases such as anchoring. He argues that such biases influence how the experts process information, leading them not only to make different decisions than what a purely impartial estimation of evidence would warrant but decisions that systematically favor certain viewpoints and perspectives more than others, regardless of epistemic merit.

For example, experts’ sympathy with their clients could lead them to view things from the client’s perspective, which could precipitate role effects and consequently cause the experts to miss, ignore, or underweigh aspects of the situation detrimental to the client’s interests. An expert’s identification with the standards and ethos of their profession could lead to conformity effects, which, in turn, could induce them to give less credence than is warranted to claims they think many of their peers would disagree with. An expert’s expectation that her political opponents are unscrupulous liars could lead her to interpret ambiguities in the opponents’ claims to their disfavor, while she would not do the same for people on her side.

Applying Koppl’s framework specifically to fact-checkers, we arrive at the following picture: fact-checkers, qua experts, are motivated not just by an uncompromising pursuit of truth, but also by a variety of more and less selfish motives; fact-checkers are fallible and make errors; their motivations can (consciously and unconsciously) affect which content they fact-check, how they fact-check it, and how they determine the final rating. The ratings thus depend not just on the epistemic credentials of the content (how well is it supported by the evidence?), but also on who the fact-checkers’ clients are (and what the experts believe their clients’ preferences are), who the fact-checkers identify with in terms of their professional and other associations, and who they seek approbation from.

Placing Uscinski and Butler’s insights within Koppl’s framework amounts to adding the explicit consideration of ideology as a further component of the fact-checkers’ utility function. The ideology espoused by fact-checkers could then, by the mechanisms outlined by Koppl, skew the distribution of honest errors in fact-checking verdicts in a way that favors claims and perspectives ideologically aligned with the fact-checkers.

2.2. Monopoly and monopsony

If platforms employed a variety of ideologically diverse fact-checkers, we would have some reason to take their necessarily biased nature more lightly: competition disciplines, and fact-checkers with superior track records would grow in reputation and following. On the other hand, the concern with bias acquires more urgency when we realize that one feature of social media platforms’ attempts to solve their misinformation problem is (what one might call) the monopolization of epistemic authority. Only a small number of organizations (“epistemic monopolists”) are invested with the official power to determine whether any piece of content should be considered misinformation. They alone decide what merits the official sanctioning as false, misleading, etc., and they face no opposition equipped with similar power.

Even if, as in Meta’s case, the platform employs multiple organizations involved in fact-checking, they share the same client, the same or similar identity (as fact-checkers, journalists, etc.), and the same desires for approbation, especially within their peer group. They also seem ideologically similar, in that the majority of fact-checking organizations exhibit left-wing bias (AllSides, 2023). More importantly, they don’t seem to compete among themselves (e.g. by providing different fact-checks of the same information and leaving the decision on which determination to accept to the client); rather, each tends to respond to different kinds of content, serving different languages and world regions. Consequently, it is still appropriate to label them epistemic monopolists, though admittedly it stretches the concept a little. (Perhaps they would be better described as an epistemic cartel.)

As Koppl (Reference Koppl2018; Reference Koppl2023) finds, monopoly expertise, particularly in fields where evidence is often equivocal, is especially susceptible to influences that extend beyond the analysis of the evidential basis for any particular claim, including the expert’s non-epistemic preferences and biases. In a monopoly, such biases will not be counteracted by any competing experts; thus, the costs of indulging them are much reduced. This is likely to lead to the information ecosystem of the whole platform being skewed in directions favorable to the fact-checkers’ non-epistemic values. Thus, fact-checker bias, under conditions of cartelization or monopoly, results in the entire within-platform information environment being biased.

A further problem for Meta’s solution to SMIP is that the company is the only buyer of fact-checker services for its platforms. Thus, in addition to problems arising from the epistemic monopoly, we must also be wary of Meta’s monopsony power. As Koppl puts it,

monopsony increases the likelihood of expert failure [i.e. the provision of incorrect expert opinions]. Monopsony … makes even nominally competing experts dependent on the monopsonist and correspondingly unwilling to provide opinions that might be contrary to the monopsonist’s interests or wishes (Reference Koppl2023, 109).

Similarly, fact-checking organizations used by Meta are likely to be reluctant to issue verdicts they will see as opposed to what their client wants and expects, regardless of the purely epistemic merits of any given claim. After all, they depend on the company for a significant portion of their funding. As is the case with other major tech firms, Meta’s employees (especially its top-level executives) tend overwhelmingly to support Democratic candidates (Levy, Reference Levy2020). Since this is public knowledge, fact-checkers employed by Meta might be unwilling to treat pro- and anti-DemocraticFootnote 7 perspectives in the same way, given that they know which direction the buyer of the services is biased in.

Though I have struggled to find relevant hard data, the predictions generated from Uscinski & Butler’s and Koppl’s models bear out insofar as plentiful anecdotal evidence is concerned (for first- and secondhand accounts see e.g. Magness & Yang, Reference Magness and Yang2022; Fritts & Cabrera, Reference Fritts and Cabrera2022; Friedman, Reference Friedman2023; Houghton, Reference Houghton2024; for a general discussion, see Gable et al., Reference Gable, Brechter, DePoy, Mastrine and Weinzierl2022).

In line with Koppl’s complaints about expert monopoly and client monopsony in general, the lack of competition among fact-checkers, and their all working for Meta, creates an institutional structure where bias is harder to counteract and more influential on political discussions on the entire platform. The prevalence of such whole-of-platform skew, absent contrary influences, has serious consequences.

For example, if (as is reasonable to suppose) viewpoint diversity has epistemic benefits (see McBrayer Reference McBrayer2024), then the institution of epistemic monopolists and the consequent imposition of some form of viewpoint uniformity (or at least a move towards more viewpoint uniformity on the margin) will prevent (or make more difficult on the margin) the acquisition of these benefits. In other words, since an expert monopoly will exclude some perspectives for non-epistemic reasons, it will hinder the search for truth.

Consequently, it is highly probable that real-world anti-misinformation strategies instituted by some of the largest social media platforms will fail in systematic ways, yielding to the epistemic monopolists’ (and monopsonists’) biases in assessing the accuracy of various claims, and generating epistemic costs.

Overall, the above considerations make worries about fact-checker bias, especially under monopoly/monopsony conditions, eminently reasonable.

2.3. Ineffectiveness

Substantial empirical research suggests that labeling content as false or misleading – the main strategy in fact pursued by Meta to combat misinformation – has only limited real-world effectiveness. The intervention appears largely unable to change people’s minds.

A meta-analysis of research on fact-checking offers this sobering summary:

the effects of fact-checking on beliefs are quite weak and gradually become negligible the more the study design resembles a real-world scenario of exposure to fact-checking. For instance, though fact-checking can be used to strengthen preexisting convictions, its credentials as a method to correct misinformation (i.e., counterattitudinal fact-checking) are significantly limited. (Walter et al., Reference Walter, Cohen, Holbert and Morag2019, 18).Footnote 8

In fairness, other researchers find robust positive effects of fact-checking on belief in false information (Wood & Porter, Reference Wood and Porter2019). And, to complicate matters, some studies seem to show that interventions such as fact-checks decrease belief in false and true information alike (Hoes et al., Reference Hoes, Aitken, Zhang, Gackowski and Wojcieszak2024). Yet, though the jury is still out, the balance of the evidence, for now, tilts toward ineffectiveness (insofar as meta-analyses should count for more than individual papers).

Thus, it is reasonable to conclude that interventions that stop short of removing offending content (i.e. interventions such as fact-checking and labeling) seem hardly able to reduce belief in misinformation.

Nor do outright bans and content removals significantly help with reducing access to misinformation. This is true for several reasons: first, in general, when the supply of a good is artificially restricted while demand remains unchanged, consumers will tend to seek alternative ways to obtain the good, for instance by patronizing less scrupulous competitors. By the same token, when the quantity of misinformation is restricted, but demand for it doesn’t change,Footnote 9 people can be expected to turn to other websites, with less stringent moderation policies, for the missing part of their information diets. Absent some totalitarian control over all information on the internet, such sites will be fairly easy to find.

Nevertheless, transaction costs of seeking out content that has been banned on mainstream social media are not negligible (e.g. it takes some time and effort to set up a new account, familiarize oneself with the new user interface, seek out the relevant profiles, etc.), and it’s possible to find substitute goods on the dominant platforms instead (e.g. those accounts that push the boundary of controversial discourse without crossing the threshold for bannable offenses). Still, while not everyone will actually seek out alternative information sources on other platforms, we can expect at least some users to do so.

Michael Huemer (Reference Huemer2021) makes a similar point. In his estimation, attempts to suppress ideas, when carried out by “factions” (ideological groups) that lack the complete control over discourse, are likely to be futile:

[In] the case of speech suppression by a faction that does not control the overall society, [when] they are opposed by at least one other faction with comparable power… the suppression does not succeed in what I assume is its main aim, to have more people accept [the faction’s] ideology. Rather, it has the effect of polarizing society. The people already aligned with your faction become more extreme, and they fail to hear or engage with opposing ideas. But the other faction also becomes more extreme. They aren’t prevented from hearing criticisms of your views, because they have their own content sources …. These sources not only publicize the challenges to your views that you’re trying to suppress, but they also publicize how you’re trying to suppress those challenges. Your side doesn’t hear this, because they don’t tune in to the enemy information sources, but the other side hears it. They are especially likely to draw the conclusion that your side is wrong …, and then that you are actually evil. All this pushes them further into their own bubble, as they start distrusting anything that comes from the institutions dominated by your faction. (2021, np.)

A telling illustration of this phenomenon – again, anecdotal – is Twitter’s and Facebook’s bans on sharing and linking to anything related to the New York Post’s reporting on Hunter Biden’s laptop in the run-up to the 2020 US presidential election, which appear to have increased people’s interest in the story and its protagonist. As columnist Philip Bump puts it,

there’s no evidence that the restriction imposed by Twitter (or Facebook) actually kept interested people from learning about the story. Consider one metric by which we can evaluate interest in Hunter Biden: Google search interest. … It was only after the social media companies moved to restrict sharing of the story that search interest began to climb. The peak came in the early evening, at which point searches in the U.S. were five times what they’d been in the morning. … For the next few weeks, the New York Post’s reporting about Hunter Biden — with new reports not being restricted and the original story similarly unlocked — was available for consumption, now with the distinction of having been deemed unacceptably dangerous. Through the rest of October, search interest in Hunter Biden remained well above where it had been before the Post’s first report. (Reference Bump2022, np., emphasis in original)

The point about the ineffectiveness of social media content bans is strengthened when we look at governmental efforts to censor information as a comparison. In Eric Berkowitz’s blunt summary of his history of the topic, government “censorship doesn’t work. The ideas animating suppressed speech remain in circulation and, in the end, can become more effective for being forbidden. Will Durant’s conclusion about the First Emperor’s destruction of unapproved texts applies to most censorship: ‘The only permanent result was to lend an aroma of sanctity to the proscribed literature’” (Reference Berkowitz2021, 8, emphasis in original).

Berkowitz’s points are echoed in other works examining the history of censorship (see e.g. Strossen, Reference Strossen2018; Mchangama, Reference Mchangama2022; Kosseff, Reference Kosseff2023).

If state censorship does not succeed in actually suppressing ideas, its corporate facsimiles, such as content removals, seem to stand even less of a chance of doing so. The reason is simple: corporate suppression lacks both the scope and the means that states can deploy to stamp out information. The penalties for flouting government censors are typically much more severe than those for flouting corporate ones (Messina, Reference Messina2023). It is therefore no surprise that it’s more difficult to find information that’s illegal to distribute than it is to find information that’s banned on Facebook.

2.4. Why the interventions don’t work

Empirical psychology supplies a credible explanation for the ineffectiveness of the existing solutions to SMIP. We know from countless studies that people don’t reason about politics in truth-conducive ways; specifically, they tend not to change their minds in proportion to new evidence they encounter but assess the credibility of the evidence through the lens of their political ideology (if it’s congruent with the ideology, it’s accepted; if incongruent, it’s dismissed and/or underweighted). Fact-checks and labels operate by simply supplying new evidence; so, it is difficult to expect that they will significantly move people to change their beliefs. The reason for this epistemically irrational (but instrumentally rational) behavior lies in people’s incentives. Hrishikesh Joshi puts the point well:

the cost-benefit calculus shifts drastically when it comes to many political or ideological beliefs. In those cases, individuals do not pay the costs for being wrong. The costs of being wrong about these things are largely externalized; they accrue to the collective. Being wrong about climate change or the causes of crime is a nearly costless mistake for the individual, for two reasons. First, the individual usually does not have enough influence to change society’s course of action on matters like these. Second, insofar as society gets things wrong, the individual only pays a fraction of the total cost. The opposite is true when it comes to buying a house, for example. Thus, we should expect our vigilance to kick in most in these latter sorts of cases.

On the other hand, tribal beliefs are often the subject of intense scrutiny. Those around us often care about what sorts of religious, political, or ideological beliefs we have. In many cases, they are willing to reward or punish us based on what we believe about these matters. It then pays to have beliefs that are congruent with those of the communities most important to our prospects. In past times, this would have been people physically closest to us—the village or town, for example. In modern times, the reference network often comprises… our professional or social circles, members of whom may be dispersed geographically. (Reference Joshi2024, 7)

There is no reason to think social media improves upon this dynamic in any meaningful way. The opportunity to discuss politics online doesn’t seem to affect the pre-existing incentive structure for most people, at least not in the desired direction. Instead, it may exacerbate incentives for partisanship: partisan signaling might be rewarded more in online contexts than in offline discussions. So, to the extent that political misinformation flatters and truth hurts, we can expect that people will sacrifice accuracy for partisan identity in political debates on social media.Footnote 10

However, in experimental conditions, manipulating people’s incentives by financial means reliably improves their ability to sort truth from falsehood. Paying people money for getting things right significantly reduces partisan bias (Bullock et al., Reference Bullock, Gerber, Hill and Huber2015; Prior et al. Reference Prior, Sood and Khanna2015) and significantly increases the ability to evaluate the accuracy of a news item (Panizza et al., Reference Panizza, Ronzani, Martini, Mattavelli, Morisseau and Motterlini2022; Ronzani et al., Reference Ronzani, Panizza, Morisseau, Mattavelli and Martini2024). The aim of political belief seems malleable. People can be incentivized to seek out the truth, even in politics.

As observed by Adam Gibbons (Reference Gibbons2023a), solutions such as fact-checks, like many other methods seeking to improve the accuracy of political belief, do not target incentives (at least not strongly enough). They seek instead to prevent and to dissuade. This is why they fail.

3. Towards solving SMIP

In virtue of both their bias and their debatable effectiveness, extant fact-checking-based counter-misinformation strategies are far from optimal solutions to SMIP. It is, of course, possible that they are the best we can realistically hope for. Nevertheless, their defects should prompt a search for alternatives.

Generally, an alternative solution to SMIP constitutes an improvement over the status quo to the extent that it both reduces bias and increases effectiveness. It seems to me that reductions in bias alone, without accompanying increases in effectiveness, should also count as improvements (although perhaps merely symbolic ones). It is less clear whether increases in effectiveness, without accompanying bias reductions, should be considered improvements as well.

Imagine, for instance, a rather unlikely scenario where Meta decides to appoint the National Rifle Association (NRA) as the sole fact-checker on all matters gun-related (gun crimes, defensive gun use, gun laws, etc.). Suppose also that, in their role as fact-checkers, the NRA deploys a new science-based technique of labeling misinformation that changes minds much more effectively than any previous method. Even supposing that the NRA generally only labels genuine misinformation as false, and never mislabels true claims, it is far from obvious whether, all things considered, the new solution (more effective fact-checks, same bias) is superior to the status quo with the less effective fact-checks – at least as far as proximity to the public square ideal is concerned. This is because the bias in addressing the selection challenge alone would nevertheless preserve the tilt of the whole platform in certain directions and away from others, for non-epistemic reasons aligned with the fact-checker’s values. In our scenario, more anti-gun-rights misinformation would be successfully debunked, but many of the pro-gun falsehoods would remain unchallenged. This is no recipe for improving the marketplace of ideas.

This also means that very small reductions in bias, when coupled with significant increases in effectiveness, need not constitute an all-things-considered improvement over the status quo, in situations where the status quo included a significant amount of bias already. (Suppose that, in conjunction with employing the new labeling technique, the NRA’s bias is reduced by 0.1%; this is not relevantly different from the previous scenario.) So, it’s not always the case that improvements along both dimensions simultaneously constitute an overall improvement.

I leave these concerns to one side, however. It is ultimately an empirical matter whether the solution I am proposing will overall improve the status quo. But the chances of it being so, in virtue of significantly improving along both dimensions simultaneously, are, I argue, reasonably high.

3.1. The marketplace of ideas needs more markets

In what follows, I develop a modest proposal for an alternative solution to SMIP. I argue that it would be desirable to explore, in practice (for example, through small-scale experiments), whether replacing fact-checks sourced from monopolistic experts with user-driven prediction markets can improve upon the status quo. The proposal is modest because I am not claiming that we should, all things considered, implement this solution across the board right now. Rather, I argue that the evidence justifies thinking that it’s a promising approach with a significant chance of improving the relevant outcomes. As a result, it is worth trying.

Empirical evidence mentioned above (and common sense) shows that when people are adequately rewarded for being accurate, they will strive to be (and succeed in being) more accurate. This is why monetary incentives reduce partisanship and make people better informed. Consequently, using monetary incentives that reward accuracy in order to improve discourse on social media would make people less partisan and better informed.

A “solution” that would consist of simply giving people money for, say, sharing accurate news stories is obviously a nonstarter. Such a system could be abused at will by users (not to mention bots) relentlessly sharing the stories officially declared accurate, merely in order to gain the rewards. That aside, the question remains of who would have the authority to make official determinations of accuracy for the purposes of issuing the rewards. Reposing this authority in any single corporation (or a coterie of fact-checking organizations) replicates the problems associated with the epistemic monopolists and monopsonists I already discussed, and possibly exacerbates them.

However, there is a different way of bringing monetary incentives into the marketplace of ideas that appears to avoid these problems. What I have in mind is an institution enabling individual users to create and participate in betting markets concerning an essentially unrestricted array of topics. I call this the prediction market-based solution (PMBS) to the problem of social media misinformation. If individual users are empowered to take part in bets concerning the future (or indeed even the present or the past), where they obtain financial rewards for getting the facts right, they will ipso facto be incentivized to become better informed and aim for accuracy rather than partisan acclaim.

Per Robin Hanson (Reference Hanson2013), prediction markets are

speculative markets trading assets designed to let people speculate on particular matters of fact, such as which horse will win a race. Final bet asset values are defined in terms of later official judgments about the facts in question. By construction, such assets are durable, identical, and can be created in unlimited supply. Betting and other speculative markets have been around for centuries, and for decades academics have studied their infoFootnote 11 properties. The main robust and consistent finding is that it is usually quite hard, though not impossible, to find info not yet incorporated in speculative market prices. In laboratory experiments, speculative markets usually aggregate info well, even with four ignorant traders trading $4 over four minutes (155)

How, specifically, do prediction markets translate financial incentives into probability estimates? Hanson offers one example:

Consider bets on the event D, that is, the Democratic Party wins the 2016 US presidential election. A “bank” (i.e., financial firm) could without risk accept $1 in payment for the pair of contingent assets, “Pays $1 if D” and “Pays $1 if not D.” This transaction carries no financial risk because exactly one of this pair will be worth $1 in the end. The expected dollar value of the asset “Pays $1 if D” is $p(D), that is, the probability of the event D. So if someone is willing to buy this asset for $0.60, we can interpret this roughly as their saying the chance the Democrats will win is at least 60%. And a market price of $0.60 can be interpreted as a consensus among traders that p(D) ≈ 60%. After averaging in their minds over plausible scenarios, traders would have judged that Democrats win in about 60% of such scenarios. (158)
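To make the arithmetic concrete, here is a minimal sketch, in Python, of the contingent-asset logic in the example above; the function names and figures are mine, chosen only to mirror Hanson’s illustration, and do not correspond to any real prediction-market API.

```python
# Illustrative sketch of the "Pays $1 if D" contingent asset from Hanson's example.
# All names and numbers are hypothetical; this is not any platform's real API.

def implied_probability(market_price: float) -> float:
    """Read the market price of 'Pays $1 if D' as the traders' consensus p(D)."""
    return market_price

def expected_profit(market_price: float, believed_p: float) -> float:
    """Expected profit per share for a trader whose own estimate of p(D) is believed_p.
    Positive exactly when the trader thinks the market underprices the event."""
    return believed_p * 1.0 - market_price

def settle(price_paid: float, d_happened: bool) -> float:
    """Realized profit from one share bought at price_paid once the outcome is known."""
    payoff = 1.0 if d_happened else 0.0
    return payoff - price_paid

if __name__ == "__main__":
    price = 0.60                                     # the price in Hanson's example
    print(implied_probability(price))                # -> 0.6, a consensus p(D) of about 60%
    print(expected_profit(price, believed_p=0.75))   # -> 0.15, a better-informed trader's edge
    print(settle(price, d_happened=True))            # -> 0.40 profit if D in fact occurs
    print(settle(price, d_happened=False))           # -> -0.60 loss if it does not
```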

In existing digital prediction markets, such as the online platform Manifold, users can “[b]et on politics, tech, sports, and more [o]r create [their] own play-money betting market on any topic [they] care about” (Manifold, nd., np.). Topics are unrestricted (within limits of the law) and no real money is involved (users bet non-redeemable virtual currency instead). My proposal is neutral on what the rewards should be: it seems that using real money would be the most effective way of affecting incentives, whereas virtual non-redeemable currencies raise the fewest legal issues.

The basic idea behind PMBS is to enable (perhaps encourage) social media users to engage in betting market creation and betting market participation without leaving the social media platform. They could also have the option of challenging others to a bet, on the basis of the statements others make (perhaps, in addition to “like” and “repost” buttons, platforms could add a “wanna bet?” button). This would introduce a new set of incentives to the social media ecosystem, rewarding accuracy and precision, and punishing error and exaggeration, in ways that traditional fact-checking does not.
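A minimal sketch of what such an in-platform betting feature might involve is given below. The data structures and method names are hypothetical illustrations of the PMBS idea, not a description of any existing platform’s implementation; in particular, the resolution_source field anticipates the point, discussed below, that bet creators must specify in advance how a bet gets settled.

```python
# Hypothetical sketch of a PMBS-style market attached to a post or claim.
# Market, Bet, place_bet, resolve, etc. are illustrative names only.

from dataclasses import dataclass, field

@dataclass
class Bet:
    user: str
    side: bool      # True = "the claim will prove correct", False = the opposite
    stake: float    # platform currency (play money or real money, per the platform's choice)

@dataclass
class Market:
    claim: str                 # the statement being bet on, e.g. via a "wanna bet?" button
    resolution_source: str     # the source both sides agree will settle the bet
    bets: list[Bet] = field(default_factory=list)

    def place_bet(self, user: str, side: bool, stake: float) -> None:
        self.bets.append(Bet(user, side, stake))

    def implied_probability(self) -> float:
        """Share of total stakes on the 'yes' side, as a rough consensus estimate."""
        yes = sum(b.stake for b in self.bets if b.side)
        total = sum(b.stake for b in self.bets)
        return yes / total if total else 0.5

    def resolve(self, outcome: bool) -> dict[str, float]:
        """Pay the losing side's stakes to winners, pro rata to their stakes."""
        winners = [b for b in self.bets if b.side == outcome]
        losers_pot = sum(b.stake for b in self.bets if b.side != outcome)
        winners_pot = sum(b.stake for b in winners)
        payouts = {}
        for b in winners:
            share = b.stake / winners_pot if winners_pot else 0.0
            payouts[b.user] = b.stake + share * losers_pot
        return payouts

if __name__ == "__main__":
    m = Market(claim="Unemployment will fall below 4% this year",
               resolution_source="official national statistics office release")
    m.place_bet("alice", side=True, stake=10)
    m.place_bet("bob", side=False, stake=30)
    print(m.implied_probability())   # -> 0.25
    print(m.resolve(outcome=True))   # -> {'alice': 40.0}
```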

Empirically, prediction markets boast a superior track record over other methods of knowledge aggregation with respect to accuracy. Specifically, they are superior to the kinds of information exchanges associated with an idealized conception of social media as a public square (such as deliberation). Consequently, it is reasonable to suspect that a social media platform inclusive of an integrated prediction markets functionality would outperform epistemically monopolized information ecosystems on the relevant benchmarks (such as political knowledge and belief in misinformation), just as prediction markets outperform both expert predictions (Gardner, Reference Gardner2010) and deliberation (Sunstein, Reference Sunstein2006) on group decision quality (provided, of course, that a significant number of users take advantage of this new functionality).

The same mechanisms that make prediction markets outperform more deliberative and expert-driven methods of information aggregation also explain why they would improve political discourse on social media to a greater degree than fact-checking does.

3.2. PMBS reduces biases and improves accuracy

The user who creates a betting market alone decides what the winning conditions of the bet are. In many cases, established “official judgments of the fact in question” would be available and their sources uncontroversial (who wins the Premier League; what the GDP numbers are for a given year; how many female directors are among the Academy Awards nominees).

In cases where official confirmation is more difficult to come by (was the inauguration of president X better attended than that of president Y?), most bet conditions would presumably include the sources according to which the result will be adjudicated. Any other user can accept the bet, and thus, implicitly, accept the epistemic authority of such sources. However, there would be no externally imposed epistemic monopolist granted the ultimate decision-making power to declare what the truth is.

If betting market creators were rewarded, e.g. by the platform, in proportion to the number of people that accept the bet, they would be incentivized to select those conditions for settling bets that would be acceptable to as many other users as possible; hence, they would tend toward choosing the suppliers of judgments with widely endorsed epistemic credentials. Since bets are, in a sense, adversarial, traders with very different expectations as to the final outcome would have to agree to the bet-settling conditions, including the procedure by which to determine which facts obtain. Consequently, reputation for impartiality and objectivity would be an important consideration in selecting the sources on whose determinations the winning conditions rest, even on highly controversial topics. Market creators would tend to rely on sources that come closest to commanding universal respect in matters epistemic, because that would secure the biggest possible pool of traders.

So, one way prediction markets have of reducing bias is by incentivizing the search for impartial sources – epistemic authorities without monopoly – to settle individual questions. A good proxy for source impartiality is the willingness (the consensus) of adversaries to voluntarily abide by the source’s determinations when it comes to settling bets.

Secondly, and more importantly, in speculative markets, traders are straightforwardly penalized for their biases, while people who are perceptive enough to suspect biased estimates and set aside their own wishful thinking stand to profit (recall the influence of monetary incentives on increased judgment accuracy).

Strong partisans and ideologues are more likely to allow their biases to cloud their judgment and, consequently, forgo a dispassionate analysis of the available data. This can lead to poor decision-making and, ultimately, financial losses. Unlike in typical political discussions, partisan bias does not pay when forecasting the future.

Instead, prediction markets incentivize traders to seek out the most accurate and up-to-date information, since this is the most reliable method of earning profits. This motivation can help overcome personal biases, as traders prioritize gains over retaining incorrect beliefs, however comforting those beliefs may be. The competitive nature of prediction markets disciplines traders to continuously update their beliefs in response to new data. Those who make biased decisions are more likely to lose their bets.

Theoretical models of speculative markets predict that both attempts at market manipulation and the presence of biased traders encourage informed participation by others. Empirical data seem to bear this out (Hanson & Oprea, Reference Hanson and Oprea2009; Lott, Reference Lott2022).
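A toy model, entirely my own and far simpler than the models cited above, can illustrate the logic: biased traders sell the “Pays $1 if D” asset whenever its price exceeds their belief, informed traders buy more shares the further the price falls below the true probability, and the market clears near the truth while the biased side loses money in expectation.

```python
# Toy illustration (not Hanson & Oprea's model) of biased sellers being counteracted
# by informed buyers in a market for a "Pays $1 if D" asset. Numbers are made up.

def clearing_price(true_p: float, biased_supply: float, informed_k: float) -> float:
    # Informed demand is informed_k * (true_p - price) shares; it clears against the
    # fixed number of shares the biased traders insist on selling.
    return true_p - biased_supply / informed_k

def expected_profits(true_p=0.7, biased_supply=50.0, informed_k=1000.0):
    price = clearing_price(true_p, biased_supply, informed_k)
    informed_ev = biased_supply * (true_p - price)   # buyers gain (true_p - price) per share
    biased_ev = biased_supply * (price - true_p)     # sellers lose the same amount
    return price, informed_ev, biased_ev

print(expected_profits())
# -> (0.65, 2.5, -2.5): the price sits near the true 0.7 (and moves closer as more
#    informed capital, informed_k, enters), while biased traders subsidize informed ones.
```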

It is reasonable to expect that, for a large number of people, rewards for partisan signaling of the sort described earlier by Joshi will be worth less than the monetary rewards for getting facts right. So, PMBS incentivizes good epistemic behavior better than the systems relying on monopolist fact-checkers.

Further, awareness of potential exposure to a betting challenge could encourage precision and care in formulating judgments. Knowing that my assertions can be subjected to a quick reality test where people stand to gain/lose real assets may impose a greater degree of rigor and moderation on what I say than simply knowing that I might get fact-checked.

As Bryan Caplan puts it,

Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.

How does this process work? Your default is to believe what makes you feel best. But an offer to bet triggers standby rationality. Two facts then come into focus. First, being wrong endangers your net worth. Second, your belief received little scrutiny before it was adopted. Now you have to ask yourself which is worse: Financial loss in a bet, or psychological loss of self-worth? A few prefer financial loss, but most covertly rethink their views. Almost no one “bets the farm” even if—pre-wager—he felt sure. (Reference Caplan2007, 130).

In contrast, under the present-day incentive structure, there is little (internalized) cost, and much reward, for ignoring evidence at the expense of political points-scoring. Being wrong on the facts is worth it, provided our political allies applaud what we say. Indeed, users may treat fact-checks on their posts and removals of their content “as badges of honor, earned by bravely standing up to the biased verdicts of overly powerful and influential social media organizations” (Gibbons Reference Gibbons2023a, np.). It is much harder to portray a betting loss in a positive light.

Betting markets, in virtue of their built-in incentives, will thus likely impose more rigor, and more moderation, on what people say. In our polarized age, this is a considerable benefit.

In sum, PMBS has the potential to improve on the fact-checking-based solutions to SMIP. The hypothesis that PMBS will be an improvement generates testable empirical predictions. An experiment comparing people’s misinformation susceptibility and political knowledge after exposure to traditional fact-checks versus PMBS does seem feasible. If I’m right, people exposed to PMBS would tend to score higher on these metrics.
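As a rough sketch of how the primary comparison in such an experiment might be analyzed (all scores below are invented placeholders rather than data), one could compare average misinformation-susceptibility scores across the two exposure conditions:

```python
# Hypothetical analysis sketch: two groups rate the same claims after exposure to
# traditional fact-checks vs. PMBS; higher score = more susceptible to misinformation.
# The numbers are placeholders, not results.

from statistics import mean, stdev
from math import sqrt

fact_check_group = [0.62, 0.55, 0.70, 0.58, 0.66, 0.61, 0.64, 0.59]
pmbs_group       = [0.48, 0.52, 0.45, 0.50, 0.55, 0.47, 0.51, 0.44]

def welch_t(a, b):
    """Welch's t statistic for the difference in group means (unequal variances)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print(mean(fact_check_group) - mean(pmbs_group))  # the effect of interest
print(welch_t(fact_check_group, pmbs_group))      # to be compared against a t distribution
```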

4. Incentives, againFootnote 12

4.1. Participation

One could raise some worries. On the whole, people don’t seem especially interested in prediction markets. Manifold has 1,700 daily active users. That’s less than 0.001% of Facebook’s numbers (2,000,000,000 daily active users). Why think that substantial numbers of social media users would want to bother with betting if they aren’t interested already?

This is a very serious challenge facing my proposal; of course, it is an empirical question of whether platform-based prediction markets will attract users, and most people might remain indifferent to this sort of activity. There is some reason to think, however, that prediction markets could generate significant interest.

For starters, people respond to incentives, and monetary rewards are well-known to be effective motivators. If the rewards offered are high enough to outweigh whatever “psychic profit” one earns from partisan acclaim, people will be likely to at least try their hand at prediction markets.

Second, social media platforms have the potential ability to design user interfaces in such a way as to make betting a seamless and attractive part of the user experience, thus eliminating, as much as possible, the transaction costs of participating in speculative markets. Outstanding user interface design is, arguably, one of the main reasons for the success of social media platforms. If anyone can make online forecasting a mainstream pastime, it’s them. In general, we should take seriously the idea that making participation not just easy, but also fun, could help induce greater popularity of prediction markets among ordinary users.

Total success is not necessary. Even if only a substantial minority took advantage of the new features, some epistemic improvements over the status quo could be expected. These are not to be scoffed at. After all, small improvements to the effectiveness of other methods of fighting misinformation are widely celebrated.

4.2. Belief

But would people rely on the information provided by prediction markets in their deliberations and decision-making? This question, too, ultimately, admits of empirical determination. I will, however, venture to offer some armchair speculation as to why it’s not unreasonable to think that at least some people would.

As far as active traders are concerned, it seems to me that it would in general be psychologically much easier for them to ignore (and take steps to avoid) testimonial evidence from outside sources (e.g. in the form of fact-checks) than to ignore evidence they have gathered themselves and already believe, especially evidence gathered with what they take to be epistemically sound methods. Successful participation in prediction markets requires acquiring evidence in an epistemically respectable way. Thus, for active traders, that kind of evidence would be that much harder to disregard than the evidence they encounter when looking at fact-checking labels. Consequently, betting encourages good epistemic practices as well as belief in, and practical commitment to, their results. Even if such results cut against our ideological priors, we can’t ignore them as easily as we can ignore inconvenient testimony.

On the other hand, when it comes to passive users who, despite platforms’ efforts, would not partake in prediction markets (while remaining aware of their outcomes), PMBS does not offer a straightforward path to epistemic improvement. Such people are likely to retain their old habits of thought, processing the outputs of prediction markets in the same epistemically deficient way that they process the content of the fact-checks they encounter (rather like they process the outputs of existent prediction markets under the status quo). As a result, it appears that PMBS’s potential effectiveness is likely to be proportional to the number of people becoming active traders.

5. Conclusion

PMBS is an attractive alternative to currently dominant methods of fighting misinformation. Both the latter’s shortcomings and the former’s promise strongly suggest it would be a good idea to investigate the possibility of realizing the promise. Thus, PMBS is worth trying.Footnote 13

Funding statement

Research for this work was funded by National Science Center, Poland, grant number 2022/47/D/HS1/01110.

Footnotes

1 See for example: Fritts & Cabrera, Reference Fritts and Cabrera2022; van der Linden, Reference van der Linden2023, and Thagard, Reference Thagard2024 for concerns raised by philosophers and scientists; McQuade, Reference McQuade2024, and Zadrozny, Reference Zadrozny2024 for concerns expressed by journalists; U.S. Dept. of State (2024) and European Commission (2024) for worries articulated by official bodies.

2 It’s worth noting that there is substantial debate on whether SMIP is correct and each of its constitutive theses has been thrown into question. For largely skeptical (of SMIP) reviews of the relevant evidence (and references to those who take the threats seriously), see Altay et al., Reference Altay, Berriche and Acerbi2023; Williams, Reference Williams2024. For recent worries about harms of misinformation, see Ecker et al., Reference Ecker, Roozenbeek, van der Linden, Tay, Cook, Oreskes and Lewandowsky2024.

3 Philosophical defenders of such approaches to what they term “fake news” include Fritts & Cabrera, Reference Fritts and Cabrera2022 and Castro & Pham, Reference Castro and Pham2020.

4 Proponents of such methods are many. For notable examples, see Lewandowsky et al., Reference Ecker, Roozenbeek, van der Linden, Tay, Cook, Oreskes and Lewandowsky2024; van der Linden, Reference van der Linden2023.

5 Notably, these remedies are typically employed by social media platforms seemingly of their own volition, rather than due to explicit laws and legislation, though the legal landscape is very fluid at the moment, especially in jurisdictions where free speech enjoys less robust statutory protections. Still, the extent of the involvement of government officials in tactics ranging from encouragement to jawboning the companies into doing more to fight misinformation is a matter of significant debate. Though important for a variety of normative questions surrounding content moderation on social media, this is a side issue for my purposes.

6 See Gibbons Reference Gibbons2023a for similar ideas.

7 This is not to say that they would be harsher only toward right-wing and right-leaning content. After all, some heterodox, explicitly left-wing publications would also advocate views that contradict the mainstream Democratic Party line.

8 It is a matter of contention whether other interventions, such as “prebunking” and accuracy nudges, work better in this regard; but the effect size of even the most effective interventions is low to moderate. The question, as of now, appears moot, however, because most social media giants don’t seem to rely on interventions such as prebunking in their anti-misinformation efforts.

9 Of course people don’t demand misinformation qua misinformation: rather, people are in the market for news that is “unfiltered,” “real,” “uncensored,” “unbiased,” and that “they don’t want you to know,” among other things.

10 For additional in-depth discussions of functions of political belief, roughly congenial to the points made in this paper, see Williams 2022a, Williams 2022b, Gibbons 2023b.

11 “Info” is a somewhat technical term for Hanson, meaning “clues and analysis that should change our beliefs” (2013, 151).

12 For answers to objections about prediction markets in general, see e.g. Hanson (2013), Brennan and Jaworski (2015), and Lott (2024).

13 I extend my gratitude to Roger Koppl, who provided invaluable feedback on an earlier version of this paper, and to the reviewer for this journal, who has offered generous advice and comments. Parts of the paper were presented at conferences in Manchester, Leuven, and Wroclaw. I am indebted to the audience members for fruitful discussion.

References

AllSides. (2023). Fact check bias chart & ratings. AllSides.com. https://www.allsides.com/media-bias/fact-check-bias-chart
Altay, S., Berriche, M., & Acerbi, A. (2023). ‘Misinformation on Misinformation: Conceptual and Methodological Challenges.’ Social Media + Society, 9(1), 1–13. https://doi.org/10.1177/20563051221150412
Berkowitz, E. (2021). Dangerous Ideas: A Brief History of Censorship in the West, from the Ancients to Fake News. Boston, MA: Beacon Press.
Brennan, J.F., & Jaworski, P. (2015). Markets Without Limits: Moral Virtues and Commercial Interests. New York: Routledge.
Bullock, J.G., Gerber, A.S., Hill, S.J., & Huber, G.A. (2015). ‘Partisan Bias in Factual Beliefs about Politics.’ Quarterly Journal of Political Science, 10(4), 519–578. https://doi.org/10.1561/100.00014074
Bump, P. (2022). No, Limiting the Hunter Biden Laptop Story Didn’t Cost Trump the Election. Washington Post. https://www.washingtonpost.com/politics/2022/12/05/trump-2020-election-hunter-biden-laptop/
Caplan, B.D. (2007). The Myth of the Rational Voter: Why Democracies Choose Bad Policies. Princeton, NJ: Princeton University Press.
Castro, C., & Pham, A. (2020). ‘Is the Attention Economy Noxious?’ Philosophers’ Imprint 20(17), 1–13.
Ecker, U., Roozenbeek, J., van der Linden, S., Tay, L.Q., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). ‘Misinformation Poses a Bigger Threat to Democracy than You Might Think.’ Nature 630, 29–32. https://doi.org/10.1038/d41586-024-01587-3
European Commission. (2024). Tackling online disinformation. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/online-disinformation
Friedman, D. (2023). Checking the fact checkers. David Friedman’s Substack. https://daviddfriedman.substack.com/p/checking-the-fact-checkers
Fritts, M., & Cabrera, F. (2022). ‘Fake News and Epistemic Vice: Combating a Uniquely Noxious Market.’ Journal of the American Philosophical Association 8(3), 454–75. https://doi.org/10.1017/apa.2021.11
Gable, J., Brechter, H., DePoy, A., Mastrine, J., & Weinzierl, A. (2022). 6 ways fact checkers are biased. AllSides.com. https://www.allsides.com/blog/6-ways-fact-checkers-are-biased
Gardner, D. (2010). Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway. Toronto, ON: McClelland & Stewart.
Gibbons, A.F. (2023a). ‘Bullshit in Politics Pays.’ Episteme, 1–21. https://doi.org/10.1017/epi.2023.3
Gibbons, A.F. (2023b). ‘Bad Language Makes Good Politics.’ Inquiry, 1–30. https://doi.org/10.1080/0020174x.2023.2203164
Hanson, R. (2013). ‘Shall We Vote on Values, but Bet on Beliefs?’ The Journal of Political Philosophy 21(2), 151–78. https://doi.org/10.1111/jopp.12008
Hanson, R., & Oprea, R. (2009). ‘A Manipulator can Aid Prediction Market Accuracy.’ Economica 76(302), 304–14. https://doi.org/10.1111/j.1468-0335.2008.00734.x
Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). ‘Prominent Misinformation Interventions Reduce Misperceptions but Increase Scepticism.’ Nature Human Behaviour 8, 1545–1553. https://doi.org/10.1038/s41562-024-01600-4
Huemer, M. (2021). The costs of suppressing speech. Fake Noûs. https://fakenous.substack.com/p/the-cost-of-speech-suppression
Joshi, H. (2024). Socially motivated belief and its epistemic discontents. Philosophic Exchange. https://philarchive.org/archive/JOSSMB
Koppl, R. (2018). Expert Failure. Cambridge, UK: Cambridge University Press.
Koppl, R. (2023). ‘Public Health and Expert Failure.’ Public Choice, 195, 101–24. https://doi.org/10.1007/s11127-021-00928-4
Kosseff, J. (2023). Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation. Baltimore, MD: JHU Press.
Levy, A. (2020). The most liberal and conservative tech companies, ranked by employees’ political donations. CNBC. https://www.cnbc.com/2020/07/02/most-liberal-tech-companies-ranked-by-employee-donations.html
Lott, M. (2022). Track record. Election Betting Odds. https://electionbettingodds.com/TrackRecord.html
Lott, M. (2024). Government to ban all US election betting. Maximum Truth. https://www.maximumtruth.org/p/government-to-ban-all-us-election
Magness, P.W., & Yang, E. (2022). Who fact checks the fact checkers? A report on media censorship. AIER. https://www.aier.org/article/who-fact-checks-the-fact-checkers-a-report-on-media-censorship/
Manifold. (n.d.). About. Manifold.markets. https://manifold.markets/about
McBrayer, J.P. (2024). ‘The Epistemic Benefits of Ideological Diversity.’ Acta Analytica. https://doi.org/10.1007/s12136-023-00582-z
Mchangama, J. (2022). Free Speech: A History from Socrates to Social Media. New York: Basic Books.
McQuade, B. (2024). Disinformation is tearing America apart. Time. https://time.com/6837548/disinformation-america-election/
Messina, J.P. (2023). Private Censorship. New York: Oxford University Press.
Meta. (2021). How Meta’s third-party fact-checking program works. Facebook.com. https://www.facebook.com/formedia/blog/third-party-fact-checking-how-it-works
Panizza, F., Ronzani, P., Martini, C., Mattavelli, S., Morisseau, T., & Motterlini, M. (2022). ‘Lateral Reading and Monetary Incentives to Spot Disinformation about Science.’ Scientific Reports 12(1), 5678. https://doi.org/10.1038/s41598-022-09168-y
Prior, M., Sood, G., & Khanna, K.N. (2015). ‘You Cannot be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions.’ Quarterly Journal of Political Science 10(4), 489–518. https://doi.org/10.1561/100.00014127
Ronzani, P., Panizza, F., Morisseau, T., Mattavelli, S., & Martini, C. (2024). ‘How Different Incentives Reduce Scientific Misinformation Online.’ Harvard Kennedy School Misinformation Review 5(1), 1–13. https://doi.org/10.37016/mr-2020-131
Strossen, N. (2018). Hate: Why We Should Resist It with Free Speech, Not Censorship. New York: Oxford University Press.
Sunstein, C.R. (2006). ‘Deliberating Groups Versus Prediction Markets (or Hayek’s Challenge to Habermas).’ Episteme 3(3), 192–213. https://doi.org/10.3366/epi.2006.3.3.192
Thagard, P. (2024). Falsehoods Fly: Why Misinformation Spreads and How to Stop It. New York: Columbia University Press.
U.S. Department of State. (2024). Disarming disinformation: Our shared responsibility. Global Engagement Center. https://www.state.gov/disarming-disinformation-our-shared-responsibility/
Uscinski, J.E., & Butler, R.W. (2013). ‘The Epistemology of Fact Checking.’ Critical Review 25(2), 162–80. https://doi.org/10.1080/08913811.2013.843872
van der Linden, S. (2023). Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. New York: W.W. Norton & Company.
Walter, N., Cohen, J., Holbert, R.L., & Morag, Y. (2019). ‘Fact-Checking: A Meta-Analysis of What Works and for Whom.’ Political Communication 37(3), 350–75. https://doi.org/10.1080/10584609.2019.1668894
Williams, D. (2022a). ‘Identity-Defining Beliefs on Social Media.’ Philosophical Topics, 50(2), 41–64. https://doi.org/10.5840/philtopics202250216
Williams, D. (2022b). ‘The Marketplace of Rationalizations.’ Economics and Philosophy, 39(1), 99–123. https://doi.org/10.1017/s0266267121000389
Williams, D. (2024). ‘Debunking Disinformation Myths, Part 3: The Prevalence and Impact of Fake News.’ Conspicuous Cognition. https://www.conspicuouscognition.com/p/debunking-disinformation-myths-part-c7f
Wood, T., & Porter, E. (2019). ‘The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence.’ Political Behavior, 41(1), 135–163. https://doi.org/10.1007/s11109-018-9443-y
Zadrozny, B. (2024). ‘Disinformation Poses an Unprecedented Threat in 2024 – and the U.S. is Less Ready than Ever.’ NBC News. https://www.nbcnews.com/tech/misinformation/disinformation-unprecedented-threat-2024-election-rcna134290