
Being Rational about Radical Environmentalism: A Response to Simpson and Handfield

Published online by Cambridge University Press:  07 August 2025

Neil Levy*
Affiliation:
Department of Philosophy, Macquarie University, Sydney, NSW, Australia; Uehiro Oxford Institute, University of Oxford, UK

Abstract

Robert Simpson and Toby Handfield recently argued in this journal that my epistemic environmentalism is too radical. It implausibly collapses the distinction between rational response to evidence and group epistemic success and – on the mistaken assumption that this best conduces to epistemic success – requires uncritical deference to apparent experts. In this response, I argue that Simpson and Handfield badly mischaracterize my view. I neither collapse the distinction between ecological and epistemic rationality, nor do I countenance uncritical deference. I argue that environmentalism has the resources to give the right answers in the cases that Simpson and Handfield urge against my view.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Epistemic environmentalism is the thesis that improvements in epistemic functioning are best promoted by focusing on the epistemic environment in which we live, rather than on education in critical thinking, the inculcation of epistemic virtues, or the like. Simpson and Handfield (2025) are friends of environmentalism; it is due to this commitment that they take aim at what they see as the fundamental flaws in Bad Beliefs (Levy 2021). The version of environmentalism I defend in that book threatens to bring the movement into disrepute; defending environmentalism requires showing that it is not committed to the implausibilities of my radical environmentalism.

Simpson and Handfield find two basic flaws in my approach. First, it is implausibly revisionist in its view of rationality, collapsing “the conceptual gap between what’s rational for the individual to believe, and what conduces to a group’s collective epistemic success” (6). Second, it lacks the resources to distinguish groupthink, or other sorts of “overly uncritical, chauvinistic, or parochial, forms of deference” (7), from appropriately critical deference to genuine epistemic authorities. These problems are connected via an implausible empirical claim: that uncritical deference always, or nearly always, conduces to group epistemic success. The reality, as Simpson and Handfield argue, is much messier and more complex. Successful epistemic groups are made up of individuals employing a variety of strategies; success arises from some mix of deferential and critical strategies.

My environmentalism may well be more radical than Simpson and Handfield are comfortable with. In this response, though, I will show that it neither collapses the distinction between individual rationality and collective epistemic success nor is it committed to uncritical deference, as Simpson and Handfield conceive of it. It is able to accommodate – indeed, it calls for – the diversity of epistemic approaches that they rightly laud.

1. Being rational and being right

The fundamental flaw that Simpson and Handfield attribute to my environmentalism arises from its alleged insensitivity to the distinction between ecological rationality and what we might call epistemic rationality. The distinction is best explained via a brief discussion of the locus classicus of ecological rationality: Gerd Gigerenzer’s response to the heuristics and biases program. Kahneman and Tversky’s hugely influential work on heuristics and biases provided compelling evidence that human beings systematically deploy reasoning strategies that are epistemically irrational (Kahneman et al. 1982). They overlook base rates, give vivid or recent tokens of a phenomenon excessive weight in deliberation, are sensitive to irrelevant information, and so on. Gigerenzer does not deny that these reasoning strategies are epistemically irrational (roughly, that they do not involve Bayesian update on evidence). But, he argues, they are ecologically rational nevertheless: they are reasoning strategies that do an excellent job of getting things right under time and resource constraints (Gigerenzer 2002).

As Simpson and Handfield see it, I identify ecological rationality with rationality tout court. They need not deny Gigerenzer’s basic claim, that heuristics and biases are ecologically rational, to make their point that it is a mistake to conflate ecological and epistemic rationality. Gigerenzer himself is not committed to the claim that ecological rationality is all there is to rationality, nor to denying that epistemic rationality – reasoning that carefully controls for or avoids biases – is essential to epistemic success. While we might do well to employ heuristics and biases most of the time, there are contexts in which we should set them aside. We might find ourselves in a hostile epistemic environment (Stanovich 2018), for example, or we may face a particularly high-stakes decision. We continue to need the notion of, and recourse to, epistemic rationality even if heuristics and biases conduce to epistemic success in most contexts.

I am overimpressed by ecological rationality, as Simpson and Handfield see things, because I take it to underwrite cumulative culture. Drawing on work in cultural evolution (Richerson and Boyd 2008; Henrich 2015), I argue that uncritical deference has allowed the human species to flourish even in harsh environments and to develop complex and counterintuitive technologies that promote that flourishing. Copying prestigious individuals and conforming to the group, rather than carefully weighing evidence or assessing informants, are required for this success. Simpson and Handfield do not deny that (what looks to be) uncritical deference plays an important role in cumulative culture. Again, though, they point to evidence (primarily from modeling work) that strongly suggests that uncritical deference cannot be the whole picture.

Indeed, it is surely intuitive that uncritical deference can’t explain cumulative culture on its own. Cumulative culture requires innovation, or there would be nothing to accumulate. Deference must be critical if it is to achieve agents’ epistemic goals; if my environmentalism lacks the resources to distinguish appropriate from excessive deference, it cannot succeed on its own terms: it can’t explain epistemic success.

Simpson and Handfield point to real difficulties that arise for my account. Indeed, they arise for all accounts that aim to set down the conditions under which it is rational for agents to defer. These difficulties are inherent in the topic. It is plain, for example, that novices should not unthinkingly trust apparent experts just because they are apparent experts. The recent hostile takeover of the Department of Health and Human Services in the United States by people who claim expertise but are far out of step with the scientific consensus illustrates the folly of uncritical deference to those who possess the markers of epistemic authority. On the other hand, it is equally plain that novices cannot responsibly assess the first-order evidence (e.g., on vaccines) for themselves. We cannot avoid deferring to experts on these topics. We must – somehow – choose which experts to defer to, compensating for our inability to judge reliably on the topics themselves. It would be little surprise if my account of how we ought to balance deference and criticism was not satisfactory, given the difficulty of the question.

If my account is unsatisfactory, however, it is not for the reasons Simpson and Handfield give. They have my account badly wrong. It is, I strongly suspect, at least partly my fault: my presentation was confusing. Nevertheless, the reading they give of Bad Beliefs is inconsistent with the clear claims of the text.

2. The collapse of ecological and epistemic rationality

The claim that I collapse ecological and epistemic rationality is a surprising one. In Bad Beliefs, I explicitly argue that true and false beliefs alike, including the bad beliefs I aim to explain, typically arise from cognitive processes that are not merely ecologically rational but also (what I called) directly rational; that is, they manifest (roughly) Bayesian belief update. I explicitly contrasted the account I offered with one that makes ecological rationality central. On that view, as I characterize it, bad beliefs are caused by biases or dispositions that are “irrational or arational, though having them might itself be rational” (xi, emphasis in original). I reject that view in favor of an account on which bad beliefs are “the product of genuinely and wholly rational processes. These processes are rational in the sense that they respond appropriately to evidence, as evidence” (xii). These processes are not rational because they get things right; rather, they tend to get things right because they are rational, and they remain rational even when they lead agents astray. Bad beliefs are explained by deference to testimony that the agent wrongly takes to be reliable; their mistake lies not in how they process this testimony, but in their false (but nonculpable) background beliefs about who is reliable.

In this light, Simpson and Handfield’s point that deference might not conduce to epistemic success is true but irrelevant. We should all countenance the possibility of a dissociation between rational belief update and epistemic success. Evidence can be misleading. When evidence is misleading, it is an all too familiar fact that rational agents may get things wrong through faultless belief updating. In many societies and for most of human history, for example, deference to medical authorities did not lead to a closer approximation to truth about appropriate treatments (Sterelny 2007; de Barra 2023). Yet agents were almost certainly epistemically rational in such updating.

Simpson and Handfield ignore clear textual evidence in accusing me of collapsing the distinction between ecological and epistemic rationality. But it is at least partly my fault that they could make this mistake. They are misled by the prominence of the discussion of cultural evolution in Bad Beliefs and its place in the unfolding of the argument. Together, that prominence and place might suggest that I believe that we ought to defer because deference allows for cumulative culture. Cultural evolution provides striking examples of how deference outperforms the careful assessment of claims for plausibility, but these examples – the entire discussion of cultural evolution – are not essential to my argument. The argument is, in essence, a conceptual one: in assessing the rationality of belief update on implicit and explicit testimony, we have overlooked the importance of higher-order evidence. Once we take higher-order evidence into proper account, we see that many parade examples of supposedly irrational belief updates should be understood as ordinary Bayesian reasoning. People should, and do, update on expert testimony (to take a central instance) because it provides strong evidence for them.

There is, of course, a nonaccidental association between epistemic rationality and epistemic success: updating on evidence actually tends to conduce to true beliefs, because the world is not systematically misleading (no world compatible with life could be). But they dissociate often enough for us to have become sensitive to both, and also to be sensitive to the need to distinguish them.

3. Overly uncritical deference

Simpson and Handfield emphasize the need to distinguish ecological and epistemic rationality because my alleged insensitivity to this distinction is central to their criticisms. According to them, I defend an “uncritical deference norm” (3). Simpson and Handfield accept that we ought not critically examine every issue for ourselves, but “nor is uncritical deference the answer – that runs a high risk of stagnation and dogmatism” (13).

The point is well taken. Once again, however, they have my view wrong. I do not defend uncritical deference. Indeed, Bad Beliefs (and much of my other work) highlights the need for mechanisms of epistemic vigilance. These mechanisms are species-typical, plausibly evolved mechanisms for assessing testimony which human beings automatically deploy (Sperber et al. 2010; Harris et al. 2018). As I repeatedly emphasize, we do not defer uncritically; instead, we are sensitive to a variety of features of the testifier (whether they are likely to be competent on the question on which they testify, whether they have an interest in misleading us, and so forth) and of the content of the testimony (centrally, how plausible it antecedently is). These are just the properties that a Bayesian rational agent ought to be sensitive to.

Nudges provide a good example of how deference to testimony (in this case, implicit testimony) is never uncritical. John Doris argues that nudges and related influences on cognition threaten agential autonomy, because they are “deeply unintelligent” (Doris 2018: 50). Were our responses to nudges uncritical, this complaint would be fully justified. But – I argue – we do not respond to testimony, implicit or explicit, uncritically. Rather, we filter it for credibility. The making of options salient (say by framing how they are presented) is a communication: it conveys the framer’s sense of what matters, and their audience (implicitly) understands the communicative intention. Audiences treat framings and other ways of making options salient as the recommendations they are. And, like all recommendations, their effect on cognitive processing is sensitive to their credibility. We integrate them with our other information, and we assess them for plausibility. Doris cites the ballot order effect as a principal example of the processes he describes as “deeply unintelligent,” but in fact, the ballot order effect is significant only for one group of voters: those who have nothing else to go on in assessing candidates (Pasek et al. 2014). That’s what we should expect: ballot order provides a recommendation, but since we know little about the values or competence of those who provide it, it is easily outweighed by other sources of information.

What holds for nudges holds generally, I argue. Discussing rival schools of cultural evolution, I applauded the (so-called) Parisian school (e.g., Sperber 1996; Morin 2016) for correcting the perception that the mechanisms underlying the transmission of culture are reflexive and unintelligent. But, I argued, it is only the perception they correct: the mechanisms the Parisians emphasize are intelligent, but so are the mechanisms emphasized by the (so-called) California school that I leaned more heavily on. “Imitation, California-style, is not reflexive and automatic. Instead, it manifests a great deal of intelligence […] In fact, even our apparently automatic imitation itself manifests intelligence (it’s a major aim of this book to show that’s true),” I wrote (Levy 2021: 47–8).

Whether I’m appropriately described as a radical environmentalist is a judgment call. Whatever we call my view, it is simply false that I collapse the distinction between ecological and epistemic rationality, and false that I call for uncritical deference.

4. More rhetorical than radical?

That brings us to a final criticism of my view that one can extract from Simpson and Handfield. In effect, they confront me with a dilemma. Either I am a radical environmentalist, espousing implausibly uncritical deference and collapsing epistemic rationality into ecological rationality, or I am really just another of the virtue epistemologists I claim to criticize, differing from them only in “subtleties of emphasis” (11). If I don’t accept uncritical deference, what can distinguish my view from theirs?

I don’t think my view is well described as a virtue epistemology. There’s a stark difference. Virtue epistemology argues that the epistemic virtues are hard won. They may indeed be rare; at any rate, they take years of work to inculcate. I argue that the sensitivity to the markers of reliability we exhibit – the epistemic competencies that ensure that we are not, in fact, uncritical in our deference – emerges in the course of normal development. That’s why there’s little to be gained in attempting to inculcate the epistemic virtues. At the very best, we may be able, with great effort, to make very marginal improvements in people’s discernment. For the most part, they’re doing very well as it is.

It might be helpful to conclude by working through how my account would apply to the case that Simpson and Handfield put forward to illustrate its limitations. The case involves a follower of a doomsday cult. The leader of the cult is clever and charismatic; on that basis, followers regard them as an epistemic authority (we can easily build in any other markers of reliability we please at this point: possession of academic credentials, a good track record with predictions, and so on). The leader has prophesied that the world will end on a certain date. When it fails to do so, the leader concocts “an after-the-fact rationalization” for why the world didn’t end on that date.

Simpson and Handfield argue that this case illustrates what is wrong with my view: It is an effective counterexample to the view that we ought to defer uncritically. Agents ought to be sensitive to signs that testifiers are unreliable – in this case, evidence that they’re a “liar or a bullshitter” (10) – and set aside their testimony in such cases. As a matter of fact, Simpson and Handfield argue, people regularly do exhibit sensitivity to this sort of property. Of course, I don’t disagree that we should be critical in our deference: we generally are, I claim.

Perhaps Simpson and Handfield recognize that I want an account on which agents are epistemically rational and appropriately critical, but they take their case, and plentiful actual cases of bad belief formation, to illustrate that no such account can render these bad beliefs epistemically rational. Given how such beliefs are actually formed, if there’s a sense in which the agents who form such beliefs do so rationally, it can’t be epistemic rationality that’s at issue. Those who form bad beliefs about climate change (one of my central examples) don’t exhibit the kind of appropriate care in deference that would qualify them as epistemically rational. Indeed, there are plenty of even more egregious cases, like the bizarre conspiracy theories that seem to have flourished in recent years. For example, when QAnon was at its most fevered pitch, a full 50% of Americans reported that they believed it was true or were unsure whether it was true (Ipsos 2020). But QAnon is a truly bizarre, and plainly ad hoc, conspiracy theory. Surely this is proof of widespread uncritical deference, and belies the claim that ordinary agents deploy mechanisms of epistemic vigilance well.

I have several lines of reply. First, I point to evidence that sometimes true claims are highly counterintuitive. From cultural technologies (e.g., the detoxification techniques employed by cultures around the world) to contemporary science, what seems plausible at first glance might be a bad guide to what is true. It is a mistake to point to the apparent implausibility of a claim as evidence that the agents who believe it are necessarily (epistemically) irrational. It is central to my account that though we should (and do) integrate testimony with other sources of evidence in coming to a view, the higher-order evidence that stems from testimony from those who have markers of high credibility (by the lights of those who are in receipt of the testimony) is appropriately given a special weight, one that is often sufficient to outweigh antecedent implausibility.
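
To put the point about evidential weights in a toy Bayesian form (a minimal illustration; the numbers are purely hypothetical and not drawn from Bad Beliefs): suppose a claim H has a low prior of 0.05, and a testifier whom the agent reasonably regards as highly reliable asserts it, so that the testimony T is 99 times more likely if H is true than if it is false. Then

$$P(H \mid T) = \frac{P(T \mid H)\,P(H)}{P(T \mid H)\,P(H) + P(T \mid \neg H)\,P(\neg H)} = \frac{0.99 \times 0.05}{0.99 \times 0.05 + 0.01 \times 0.95} \approx 0.84.$$

On these hypothetical numbers, faultless updating carries the agent from near-disbelief to fairly confident belief; whether that movement tracks the truth depends entirely on whether the testifier really is as reliable as the agent’s background beliefs suggest.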

Simpson and Handfield do not emphasize antecedent implausibility, however, but the evidence of unreliability that stems from blatantly post hoc rationalizations of failure. Surely we should be sensitive to this sort of evidence? If that means we should abide by the rule “heavily discount testimony whenever you take yourself to detect post hoc rationalizations,” then the answer is no: we shouldn’t exhibit such sensitivity. In fact, climate change skeptics frequently accuse climate scientists of just such post hoc rationalizations, on the basis (for example) that they now refer to “climate change” allegedly because frequent and often extreme cold snaps belie predictions of “global warming.” In most cases, when we take ourselves to detect problems with the content or sources of expert testimony sufficient to throw it into doubt, we should think we’ve made a mistake. We can come to see that expert testimony is flawed, even when we ourselves are not experts. But we should be very, very slow to think it is they who have made an error, and not us, and we should never make such judgments without enlisting others who do have expertise (Levy 2022b).

For all that, people should be able to see that in many cases (including the case of QAnon), testimony from conspiracy theorists is laughably thin. Its sources do not have the kind or degree of expertise or other markers of insight that would provide their claims with significant warrant. The major claims made (e.g., about widespread satanic rituals involving the sexual abuse of children at the highest levels of government) are often highly implausible on the face of it. Moreover, the evidence provided for these claims lends them little support (consider claims that the COVID-19 pandemic was planned, citing evidence that “delta omicron” is an anagram of “media control”). How could rational agents believe that, on this sort of basis? My answer is that they often don’t. People regularly report believing things they don’t in fact believe, because belief reports play multiple roles besides sincere expression of mental states. People report beliefs to troll others, to express support for one side of politics, or for sheer entertainment (Levy 2022a, 2024; Ross and Levy 2023).

Finally, I agree with Simpson and Handfield that mixed epistemic strategies outperform pure deference when it comes to generating significant truths. Everyone (experts included) ought to give very significant weight to the testimony of those who are apparently experts in spheres in which they themselves lack a high degree of competence. But they should (and do) combine this deference with a willingness to dissent in the narrow sphere in which they’re genuinely expert, where they have the capacity to grapple responsibly with the first-order evidence. Good dissent is local dissent; it takes place against the background of claims accepted on the word of well-placed others (Levy and Varley 2024). Epistemic communities must and do manifest a variety of epistemic strategies, with every agent deferring pervasively, but most also willing to dissent when they are able to do so responsibly.

Is the result an environmentalism that Simpson and Handfield can live with? At the very least, it is not the caricature they attribute to me. In my actual view, there is little to be gained by inculcating epistemic virtue, not because deference should be uncritical, but because it is already critical. We do not need to educate people in the features of testimony or its sources that render it reliable. Rather, we need to ensure that the detection of these features is better correlated with their actual possession, and we can only do that by cleaning up our epistemic environment.1

Footnotes

1 I am grateful to a reviewer for Episteme for illuminating comments. This research was funded in whole, or in part, by the John Templeton Foundation (grant #62631) and the Arts and Humanities Research Council (AH/W005077/1). The funders had no role in the preparation of this manuscript or the decision to submit for publication. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.

References

de Barra, M. (2023). ‘Cultural Evolution of Ineffective Medicine.’ In Tehrani, J.J., Kendal, J. and Kendal, R. (eds), The Oxford Handbook of Cultural Evolution. Oxford: Oxford University Press.
Doris, J.M. (2018). ‘Précis of Talking to Our Selves: Reflection, Ignorance, and Agency.’ Behavioral and Brain Sciences 41, 112. doi: 10.1017/S0140525X16002016.
Gigerenzer, G. (2002). Adaptive Thinking: Rationality in the Real World. Oxford: Oxford University Press.
Harris, P.L., Koenig, M.A., Corriveau, K.H. and Jaswal, V.K. (2018). ‘Cognitive Foundations of Learning from Testimony.’ Annual Review of Psychology 69(1), 251–73. doi: 10.1146/annurev-psych-122216-011710.
Henrich, J. (2015). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton: Princeton University Press.
Ipsos. (2020). ‘More than 1 in 3 Americans believe a “deep state” is working to undermine Trump.’ Available at: https://www.ipsos.com/en-us/news-polls/npr-misinformation-123020 (accessed 27 August 2021).
Kahneman, D., Slovic, P. and Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Levy, N. (2021). Bad Beliefs: Why They Happen to Good People. Oxford: Oxford University Press.
Levy, N. (2022a). ‘Conspiracy Theories as Serious Play.’ Philosophical Topics 50(2), 119. doi: 10.5840/philtopics202250214.
Levy, N. (2022b). ‘Do Your Own Research!’ Synthese 200(5), 356. doi: 10.1007/s11229-022-03793-w.
Levy, N. (2024). ‘Believing in Shmeliefs.’ Ergo, an Open Access Journal of Philosophy 11, 18. doi: 10.3998/ergo.6158.
Levy, N. and Varley, R. (2024). ‘Mind the Guardrails: Epistemic Trespassing and Apt Deference.’ Social Epistemology 0(0), 117. doi: 10.1080/02691728.2024.2400560.
Morin, O. (2016). How Traditions Live and Die. London; New York: Oxford University Press.
Pasek, J., Schneider, D., Krosnick, J.A., Tahk, A., Ophir, E. and Milligan, C. (2014). ‘Prevalence and Moderators of the Candidate Name-Order Effect: Evidence from Statewide General Elections in California.’ Public Opinion Quarterly 78(2), 416–39. doi: 10.1093/poq/nfu013.
Richerson, P.J. and Boyd, R. (2008). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Ross, R.M. and Levy, N. (2023). ‘Expressive Responding in Support of Donald Trump: An Extended Replication of Schaffner and Luks (2018).’ Collabra: Psychology 9(1), 68054. doi: 10.1525/collabra.68054.
Simpson, R.M. and Handfield, T. (2025). ‘Against Radical Epistemic Environmentalism (or Why Uncritically Deferring to Authority is Still Irrational).’ Episteme, 115. doi: 10.1017/epi.2025.23.
Sperber, D. (1996). Explaining Culture. 1st edn. Oxford, UK; Cambridge, MA: Blackwell Publishers.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G. and Wilson, D. (2010). ‘Epistemic Vigilance.’ Mind & Language 25(4), 359–93. doi: 10.1111/j.1468-0017.2010.01394.x.
Stanovich, K.E. (2018). ‘Miserliness in Human Cognition: The Interaction of Detection, Override and Mindware.’ Thinking & Reasoning 24(4), 423–44. doi: 10.1080/13546783.2018.1459314.
Sterelny, K. (2007). ‘SNAFUS: An Evolutionary Perspective.’ Biological Theory 2, 317–28. doi: 10.1162/biot.2007.2.3.317.