1 Introduction
In February 2020, when it was recognized that Covid-19 was developing into an international epidemic (the World Health Organization (WHO) declared it a pandemic on March 11, 2020), people in California were preparing for a major event. Journalists and documentary filmmakers followed a 64-year-old American limousine driver who was about to take off in a self-built steam rocket. The endeavor was dangerous, but not entirely amateurish: “Mad” Mike Hughes had reached a height of 500 meters with a similar rocket a year earlier, and now he sought even higher altitudes to get a more objective picture of the shape of the Earth. Hughes had recently improved his rocket, learning from previous mistakes, and many people watched the launch live on site and on television. Unfortunately, the parachute detached from the rocket a few moments after liftoff, and Mad Mike crashed to the ground.
Mike Hughes died a terrible death, but quite possibly it would not have been in the news for weeks had he financed his adventure himself. Although he did not subscribe to their beliefs, his attempts were supported by an association of flat-Earth believers, a group that holds that the Earth is actually flat and that scientists and politicians have kept people in ignorance for centuries. It is unclear how many people believe in flat-Earth theory (according to one survey, it could be up to 10 percent of the American population),Footnote 1 but the theory itself is taken to be representative of contemporary pseudoscience.
While pseudoscience is on the rise, its detractors are also making themselves heard, trying to unmask its various iterations, and supplying explanations for this troubling ascent of charlatans, impostors, swindlers, and even rogue practitioners of genuine science. For every pseudoscientific book, there is at least one debunking article, monograph, or podcast. Nonetheless, people still drift off to the fringes and gray zones.
When it happens, it is hard to reverse the process, as people prefer ideas that somehow reinforce their preconceived beliefs: We like to cherry-pick our news and facts. It’s not just that we consciously and explicitly accept only those reports and scientific forums that support what we believed in the first place, or that present information and data we always wanted to hear (say, that a vegan diet can be beneficial if properly managed). Naturally, this happens all the time.Footnote 2 But we also tend to regard information that contradicts our views, or is less pleasing to us, with greater suspicion (such as the fact that food from genetically modified organisms (GMOs) is considered safe by most currently available research, despite widespread public suspicion).Footnote 3 Studies have shown that our political preferences also affect our thinking and weighing of evidence: Naomi Oreskes and Erik M. Conway have argued in detail (based on the existing literature and their own research into the history of science) that many late twentieth-century anti-science campaigns had their roots in political partisanship and polarization, rather than in a lack of education or understanding. “The sources of science rejection lay not in the science itself, but in prior political and ideological beliefs and commitments” (Oreskes and Conway 2022, 100).
During the second half of the twentieth century, historians, philosophers, sociologists, psychologists, and communication experts did significant work to better understand, evaluate, and correct the pseudosciences. Countless studies have been devoted to the nature and method of science (from professional academic articles to children’s books) to convey a clear picture to their targeted audiences of the differences between science and pseudoscience. One might think, or hope, that we’re long past gross oversimplifications about what science is, how it works, and how theories are accepted and rejected by the scientific community. To a degree, that is certainly true. But as Oreskes (2019, 882) has noted recently, despite the “tremendous amounts of detailed description,” it “has had the effect of making it more difficult, rather than less, to come up with a general characterization of scientific knowledge and therefore to articulate what distinguishes it from other forms of knowledge.” That is, the deeper we see into what science is, the harder it gets to come up with once-and-for-all universal characterizations. Apparently, greater familiarity with all the muddle, fraud, approximations, probabilities, changing methodologies, individual biases, and collective abuses involved in science has not helped much in the public’s eye. Since the outbreak of the pandemic, articles and newspaper notes about public trust in science are still among the hot topics in philosophy of science (Goldenberg 2023).
The problem is not new, of course; sociologists of science have devoted decades of (field) work to understanding the machinery and workings of science and individual scientists. Their writings, often replete with constructivist and relativist images of scientific knowledge (to varying degrees and with different scopes), have caused much disturbance among scientists and the public (see Labinger and Collins 2001). Nevertheless, sociologists and historians have given philosophers invaluable perspectives and new concepts for approaching their own concerns, and pseudoscience is no different. This Element will cite and mobilize old and new literature that defines what is at stake for philosophers when they try to understand, debunk, and defeat pseudoscience. But that is a long journey, and we are not yet ready to declare the winner. The road is full of obstacles, and in the forthcoming sections, we aim to address some of these, centered around the so-called demarcation problem.
For many philosophers of science, the problem of demarcation concerns how we can distinguish science from pseudoscience; for others, it is about how we can demarcate justified or warranted knowledge from unjustified and fake beliefs. Justified and warranted beliefs and their relation to legitimate and illegitimate (scientific) dissent are important topics in philosophy of science, and the literature on them is just about to explode. The problem also bears obviously on good and bad, strong and weak science, which is in turn related to science and pseudoscience as well. Nonetheless, as this Element aims to survey and discuss the state of the art of the history and philosophy of the demarcation problem, we will be concerned here with more traditional approaches, with occasional remarks about that literature.
Besides intellectual and cognitive hygiene, however, the demarcation problem bears on our everyday life as well. If a theory is debunked or labeled pseudoscientific, then scientists and certain policymakers would argue that it should not be taught in the classroom (like creationism against evolutionary theory), should not be permitted for use in hospitals (like acupuncture), and should not be recommended as an official remedy within mental health services; and even where some such practices are allowed, it may be debated whether social insurance should cover them (as it partially does for homeopathy in Germany, but does not in Hungary).
There is science policy, and then there is economics. Often, scientists label others as pseudoscientific to attain an epistemologically superior position over them and to secure social legitimacy and resources. If you end up on the pseudoscientific side of the debate, your chances of getting any recognition or funding from the scientific community are basically zero. When chemists Martin Fleischmann and Stanley Pons announced at a huge press conference in 1989 that they had been able to produce cold fusion (a special form of nuclear reaction that would allegedly enable scientists to generate energy cheaply at room temperature), millions of dollars in funding were redistributed and offered to the University of Utah, where they worked. Their announcement was considered so important that even major national institutes were willing to provide funding. In the weeks that followed, after countless unsuccessful attempts by other chemists and physicists to reproduce Fleischmann and Pons’s experiments and results, which revealed the flaws (whether intentional or not) in their work, mainstream physicists concluded that cold fusion was an unattainable idea, and the money taps were turned off.Footnote 4
Besides money, however, lives are also frequently at stake. Anti-vaxxers did considerable damage to human lives, the medical system, and those around them when they spread vaccine rejection during the Covid-19 pandemic – a recent article has argued that more than 200,000 Covid-related deaths could have been prevented in the United States with vaccination (Jia et al. 2023). Similarly, AIDS denialism resulted in deaths across the globe, but especially in South Africa, where it prompted the country’s president to deny access to medication in the early 2000s; one study has put the number at more than 300,000 (Chigwedere et al. 2008). And that is just the tip of the iceberg that has become visible in the last few decades.
Our aim here is not to solve age-old puzzles and settle disputes, but rather to provide the reader with a picture of where we are now, where the debate stands, and what is considered the state of the art from a broad philosophy of science perspective.
“Pseudoscience” is a strange term, as no one declares that they are engaged in pseudoscience. Michael Gordin has recently emphasized that pseudoscience
is a negative category, always ascribed to somebody else’s belief, not to characterize a doctrine one holds dear oneself. People who espouse fringe ideas never think of themselves as “pseudoscientists” […] In that sense, there is no such thing as pseudoscience, just disagreements about what the right science is.
Larry Laudan (1983, 119) also warns us that most demarcation projects about pseudoscience are out to get their targets and embarrass rivals: “In every case, they used a demarcation criterion of their own devising as the discrediting device.” Roger Cooter (1990, 156) goes so far as to claim that “at least since the beginning of the consolidation and ossification of the capitalist order in the seventeenth century, the label ‘pseudoscience’ (or the appropriate synonym) has played an ideologically conservative and morally prescriptive social role in the interests of that order.”
The term “pseudoscience” is not new – its Latin equivalent was used already in the first half of the seventeenth century, while its English cognate emerged in the late eighteenth century.Footnote 5 The discussion about which beliefs are worth accepting because of their trustworthy, warranted, and scientific character – and in parallel, which beliefs are illegitimate and unfounded – dates back to Ancient Greece (Laudan 1983). Since the nineteenth century, however, with the increasing institutionalization and professionalization of science, the rise of pseudoscientific theories and beliefs (especially concerning medical issues – see, for example, Wrobel 1987) has posed new challenges for education, public policy, and the social status of science.
It is not enough to divide beliefs and theories into scientific and nonscientific, because there are numerous things that no one would call pseudoscientific, even though they are evidently, and admittedly, not scientific.Footnote 6 Works of art and literature, religion, and many other everyday activities do not follow the path and methods of science and are thus nonscientific or unscientific. But usually, this poses no problem; complications arise when a nonscientific field tries to be something else. As “pseudo” means “false,” pseudoscience is a form of activity that pretends to be science or scientific, while in fact it is not.Footnote 7 But there are various ways for pseudoscientific activities to pretend and to fail: Sometimes, pseudoscientists intentionally fake and forge data and thus deceive colleagues and the targeted audience. Other times, the evidence for a pseudoscientific theory is simply too shaky, uncertain, and possibly even motivated by nonscientific factors, but its adherents refuse to see this and pretend that what they are dealing with is pure, factual, hard scientific evidence. Of course, one might argue that this is just “bad science” and not pseudoscience, but presumably the line between these categories is less than strict, and it matters how, actually and locally, one becomes a bad scientist or a pseudoscientist. Less surprisingly, then, Ben Goldacre (2009) titled his book on traditional pseudoscientific activities Bad Science, mashing up these categories.
But things are even more complicated: When one is faced with a false and rejected scientific theory, it would be quite natural to say that a false scientific theory is still a scientific theory. But, as we will see in Section 2, Karl Popper thought, for example, that one can end up among pseudoscientists even by clinging to rejected scientific theories. There are many ways, then, for something to end up on the pseudo-side of scholarly activity.
However, there is a link between these various iterations of pseudoscience. Martin Gardner (1957, 3) has noted that after the Second World War, “the prestige of science in the United States has mushroomed like an atomic cloud.” As evidenced by the growth in science funding, packed lecture halls, and the increasing number of students earning science degrees, the public’s estimation of science underwent a major upswing. Besides that, however, there was the “rise of the promoter of new and strange ‘scientific’ theories. He is riding into prominence, so to speak, on the coat-tails of reputable investigators” (Gardner 1957, 3). Whenever science is on the rise, pseudoscientists follow, eager for a seat at the table and a piece of the pie. Gardner’s book, from which the previous quote is taken, even bears the title Fads and Fallacies in the Name of Science.
As Michael Gordin (2023, 104) has written, “pseudoscience is the shadow of science […]. The higher the status of science, the sharper the shadow and the more robust the fringe.” Because science and pseudoscience are so closely, if not intimately, related, Gordin (2023, 104) has drawn a quite pessimistic conclusion: “The only way to eliminate pseudoscience is to get rid of science, and nobody wants that.”
Things are trickier than that, however. Sometimes, people pursue an activity that is deemed “pseudoscientific” by the community of scientists, although its advocates are not pretending to be scientific and may even condemn scientific practice altogether. Nonetheless, on certain occasions, these ideas and their presentations fill the role that science has in our society – that is, they propose an ontology of what exists (e.g., certain psychic and spiritual beings beyond the physical ones), of how our mind and body work (like Scientology), or of how to behave, what to eat, and how to treat others. Although alternative and natural healing practices reject the requirement of being scientific, they still claim to provide accurate, reliable, and warranted beliefs and theories, and thus their function and role are akin to those of science, thereby justifying their inclusion among the pseudosciences (perhaps a better term for them would be “charlatanry” or “quackery”).
And then there is science denialism, an activity that intentionally seeks to deny specific scientific theories, either by pointing out supposed mistakes or by claiming that they are biased because of their alleged political and social origins. Science deniers usually reject not just a certain theory but also the facts it aims to explain; examples include denial of climate change, the Holocaust, relativity theory, AIDS, vaccination, and tobacco-related disease. Denialists frequently do not promote any specific alternative theory, but as Hansson (2017) has argued, science denialism and pseudoscience share so many features that the former can be seen as a form of the latter.Footnote 8
That being said, all these fakes, misuses, abuses, errors, and quackery are of some value to science. As Philip Kitcher argued long ago in his book on scientific creationism,
Ironically, philosophers of science owe the Creationists a debt. For the scientific Creationists have constructed a glorious fake, which we can use to illustrate the differences between science and pseudoscience. By examining their scientific pretensions, I have tried to convey a sense of the nature and methods of science.
Recently, Gordin (2023) has described four types of activities and theories that are not scientific, and occasionally not entirely pseudoscientific, but lurk in a gray area called the “fringe.” He writes about “vestigial sciences,” those issues that “once counted as science but were rejected, so that they have morphed today into being classed as pseudosciences” (Gordin 2023, 17). Examples include astrology and alchemy.Footnote 9 The second category is the “hyperpoliticized sciences,” namely those theories that are upheld mainly or “purely as arms of a particular political ideology” (Gordin 2023, 31), including Aryan Physics during the Nazi era, Lysenkoism in the Soviet Union, eugenics across the globe, and HIV/AIDS denialism in South Africa during the presidency of Thabo Mbeki. Gordin (2023, 44) calls “counterestablishment sciences” those sciences that “by definition replicate – or imitate, or copy, or counterfeit – the establishment they oppose.” Phrenology, scientific creationism, cryptozoology, and cosmic catastrophism are among his examples of such dubious but systematic takes on science. The fourth and last category does not have a specific name besides “mind over matter,” and it includes those approaches that attribute mystic or extraordinary powers to the human mind; parapsychology, mesmerism, and spiritualism all endow the human mind with such scientifically intractable capacities.
In twentieth-century philosophy of science, the discussion on pseudoscience usually centered on the so-called demarcation problem. Starting mainly with Karl Popper, philosophers sought to find a suitable, single criterion that would demarcate all scientific theories and beliefs from pseudoscientific ones. Using the demarcation criterion, one was supposed to be able to judge and decide on each and every occasion whether a purported theory was indeed scientific or just pretended to be. In the latter case, if something was judged to be merely masquerading as science, the negative label of “pseudoscience” meant the end of it within the scientific community.
This issue is of practical importance because it directly affects various aspects of society, including education, policymaking, and the public understanding of scientific matters. Clear distinctions between scientifically verified treatments and those based on pseudoscience could be crucial for patient safety and effective healthcare. In education, students must be taught reliable and up-to-date scientific theories and principles. In policymaking, demarcation helps legislators and regulators make informed decisions based on reliable scientific evidence, crucial for addressing issues like climate change, public health, and technological innovation. And it enables the public to distinguish credible scientific information from misleading or false claims, thus promoting informed decision-making and safeguarding against the potential harms of pseudoscientific practices. Demarcating science from pseudoscience is therefore essential for maintaining societal trust in science, ensuring that the advancements and benefits of research are realized and appreciated.
A quick overview of the literature shows that most approaches to demarcation start with science, defining it by some characteristic or method, and then base their definition of pseudoscience on that. Science-first strategies are usually laborious because – as we shall see – it is quite demanding to come up with global, universal, and strict criteria for defining science. Recently, Ilmari Hirvonen and Janne Karisto (2022, 703–704) have identified six assumptions that characterize most science-first strategies about demarcating science from pseudoscience: (1) they define only science, and move to pseudoscience from there; (2) they look for necessary and sufficient conditions for their criteria; (3) their criteria apply universally to all sciences; (4) their criteria focus on the end products of science, instead of scientific processes themselves; (5) they often work with only a few criteria (most of them with only one); (6) their criteria refer to formal features of the logic of discovery.Footnote 10
Even a more sophisticated approach, such as Sven Ove Hansson’s, starts with a definition of science like the following:
Science (in the broad sense) is the practice that provides us with the most reliable (i.e., epistemically most warranted) statements that can be made, at the time being, on subject matter covered by the community of knowledge disciplines (i.e., on nature, ourselves as human beings, our societies, our physical constructions, and our thought constructions).
This definition emphasizes the reliability and epistemic warrant of scientific statements, highlighting the importance of empirical evidence and sound reasoning in scientific practice. Hansson (2009, 238) poses what he calls “the puzzle” of demarcation: “How can there be so much agreement in particular issues of demarcation in spite of almost complete disagreement on the general criteria that these judgments should presumably be based upon?” Despite the widespread consensus on specific instances of what constitutes pseudoscience (such as astrology, homeopathy, or creationism), there is significant disagreement about the underlying criteria that should be used to make these particular judgments. According to Hansson, this paradox highlights the complexity and challenge of defining clear, universal criteria for distinguishing science from pseudoscience. Based on the earlier definition of science, he goes on to designate a statement as pseudoscientific if and only if it satisfies three criteria:
1. It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).
2. It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).
3. It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of deviant doctrine). (Hansson 2013, 70–71)
The second point, the notion of epistemic warrant, is crucial in Hansson’s criteria for demarcating pseudoscience. Epistemic warrant refers to the degree to which a belief or statement is justified by reliable evidence and sound reasoning within the context of the available knowledge at the time. This concept underpins his three-part criteria for identifying pseudoscience: A claim is pseudoscientific if it pertains to scientific domains, lacks epistemic warrant, and is promoted as though it possesses such warrant. Hansson’s criteria address the demarcation problem by emphasizing the quality and justification of knowledge claims, rather than merely their subject matter or the intentions of their proponents.
Maarten Boudry has developed an approach that goes in the opposite direction. He criticizes the notion of epistemic warrant by arguing that it is too general and fails to capture the vast diversity and varying degrees of epistemic shortcomings present in pseudoscientific doctrines: “If we have no general account of epistemic warrant in the sciences, then a fortiori we can have no such account for the absence of epistemic warrant” (Boudry 2022, 96). While Hansson provides a structured framework for distinguishing pseudoscience based on the quality of evidence and reasoning, Boudry proposes focusing on the deceptive strategies that pseudoscience uses to appear legitimate, advocating for a more context-dependent and flexible approach to demarcation. He calls for a more flexible and empirical analysis, reflecting the complexities and varieties of pseudoscientific practices, rather than a one-size-fits-all solution to demarcation (see further Section 4).
2 Popper’s Falsification as Demarcation
What are the criteria that can help us separate science from pseudoscience – and, for that matter, from everything else? To this day, the most important attempt to draw a sharp demarcation line was made by the Austrian philosopher Sir Karl Popper. This notorious proposal first appeared in his 1934 book, Logik der Forschung, and resurfaced in a new context in the 1959 revised English translation, The Logic of Scientific Discovery, as well as in later works, such as Conjectures and Refutations.
2.1 Popper’s Idea
When Popper (1963/1969, 255) first thought of the “demarcation problem” in 1919, it concerned the line between empirical-scientific statements and those of metaphysics. This question did not really change in Erkenntnis (Popper 1933/2002, 314) or later, in 1934, when Logik der Forschung appeared. In those works, he posed questions regarding “the barriers which separate science from metaphysical speculation” to identify “a suitable distinguishing mark of the empirical, non-metaphysical, character of a theoretical system” (1959/2002, 11, original emphasis), or a “criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other” (1959/2002, 11).
Consequently, Popper’s original project was an epistemological one that originated with Hume and Kant (Popper 1959/2002, 11), and took center stage with the logical positivists, who proposed a detailed criticism of metaphysics and thus its demarcation from empirical science. The logical positivists in Vienna presented an inductive method for separating empirical science from metaphysics, namely by inferring from individual statements (representing individual observations, like “This swan is white”) to general statements (“All swans are white”). By deriving general statements from empirical observations, theories were confirmed inductively, which was not possible in the case of metaphysical statements, due to their nonempirical character (no individual observations were relevant to their truth).
As all statements were either analytic or synthetic for the positivists, the principle of induction that enabled the move from particular to general or universal statements was itself either analytic or synthetic. But Popper noted that if the principle were logical (analytic), then one could question the informativeness of such inductive inferences, as logical inferences are devoid of new empirical content (and there were no systematically worked-out inductive logics in the 1930s to capture that new content logically). On the other hand, if one takes the principle of induction to be synthetic, Popper (1959/2002, 5) asked why “such a principle should be accepted at all, and how we can justify its acceptance on rational grounds” – a question to which an inductively motivated reply might at first glance seem the best answer, though it would be circular.
To overcome the problem of inductively demarcating empirical science from metaphysics, Popper (1959/2002, 7) thus suggested a new approach, called “the deductive method of testing.” After distinguishing separate means of testing scientific theories (like testing inner consistency, checking the empirical and tautological character of the statements involved, or comparing different theories to see whether new ones indeed represent genuine progress), Popper (1959/2002, 9) focused on the fourth option, namely, on “testing of the theory by way of empirical applications of the conclusions which can be derived from it.” By making predictions, scientists come up with statements that can be checked and compared with the results of experimentation. If the prediction proves adequate (what happened conforms to what the prediction, derived from the theory, said would happen), then “the theory has, for the time being, passed its test: no reason has been found to discard it. But if the decision is negative,” and these obviously provide the most interesting cases for historians and philosophers, “if the conclusions have been falsified, then their falsification also falsifies the theory from which they were logically deduced” (1959/2002, 10).
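The deductive backbone of this testing procedure is the classical modus tollens inference; the following schema is our reconstruction of the logic, not Popper’s own notation:

```latex
% Falsification as modus tollens: theory t entails prediction p;
% observation refutes p; hence t is refuted.
\[
  \bigl( (t \rightarrow p) \land \neg p \bigr) \;\models\; \neg t
\]
% By contrast, (t -> p) together with p does NOT entail t -- inferring t
% would be the fallacy of affirming the consequent. This asymmetry is
% why, for Popper, a passed test can only corroborate a theory, never
% verify it, while a failed test refutes it deductively.
```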
Popper contrasted falsification with the positivists’ verificationist program. While the latter aimed to arrive, inductively, at general statements from individual observations, Popper (1959/2002, 18) suggested the criterion of testability, specifically through predictions that can be falsified: “it must be possible for an empirical scientific system to be refuted by experience” (emphasis in the original). While universal statements (hallmarks of scientific theory that express laws and regularities) cannot be verified through individual experiences, they can be falsified. Thus, science is marked by exposure to falsification “in every conceivable way” (Popper 1959/2002, 20).
Popper considered events such as the confirmation of Einstein’s theory of relativity via Arthur Eddington’s famous solar eclipse expedition to be excellent demonstrations of how science works (Kennefick 2021): It follows from Einstein’s theory that massive objects such as stars, due to their gravitational attraction, bend space-time around them, thereby also deflecting the path of light traveling between the observer and the light source. Einstein’s theory enabled scientists to generate predictions about the extent of this deflection. That is why, in 1919, British astronomer Sir Arthur Eddington organized an expedition to the west coast of Africa during a total solar eclipse to compare Einstein’s predictions with empirical observations. An eclipse was necessary because stars appearing close to the Sun cannot ordinarily be observed on account of its brightness; during a total solar eclipse, however, these stars become visible, and the curvature of the light’s path becomes measurable. The observations of the 1919 expeditions were consistent with the theory of relativity: The Sun did indeed bend the path of light from more distant stars to the extent predicted by Einstein.
Thus, according to Popper, hypotheses yield predictions that can be compared to our actual observations – and if things do not work out, the hypotheses are falsified (Popper called such a decisive test an experimentum crucis, a crucial experiment able to decide the fate of a given theory). Not everything can be falsified, obviously. Works of art or literature cannot be falsified, but your favorite Harry Potter novel or True Detective episode does not even claim to be scientific in the grand sense. Problems arise when a certain idea pretends to be scientific while its relevant claims cannot be falsified experientially, as there are no observations, measurements, or experiments that would count as falsifications of or counterexamples to the theory. To be scientific, says Popper (1974b, 981), there must be “the risk of being tested, and refuted; the risk of clashing with reality.”
Popper’s famous and notorious examples of how falsification works were Marxism and psychoanalysis, as practiced by his Viennese contemporary Sigmund Freud. Freud’s theory was often described as another Copernican revolution, this time concerning the study of the human mind, aimed at uncovering the unconscious and completely reshaping our understanding of how the psyche works. The problem, according to Popper, was that the explanations of psychoanalysts are essentially empty, as they could “explain” any kind of human behavior by referring to some unconscious motives. Generally, these explanations involve some kind of sexual frustration, repression, or abuse dating back to early childhood that cannot be directly accessed, resulting in merely circumstantial evidence and contradictory consequences. This allowed psychoanalysts to evade the test of experience and thus the possibility of falsification.
While the predictions of true science can be risky, as they may be falsified by empirical results, Freud’s hypotheses avoided this danger zone, experientially speaking. They explain different behaviors using very similar ideas drawn from the unconscious, not leaving much, or for that matter, any space for possible counterexamples from someone’s personal history. And if new memories are uncovered from the past, a psychoanalyst would simply consider them another instance of the behavior in question, thereby justifying the theory. Thus, while psychoanalysts act like genuine scientists, Popper thought that psychoanalysis was a paradigmatic example of pseudoscience because its theories are basically unfalsifiable – there are no circumstances that would count as falsifying the theory, since all examples could be used as confirmation either way. Or, as Popper (Reference Popper1957/1969, 37) put it, “there was no conceivable human behavior which would contradict them” (though the theory might still be useful; see also the beginning of Section 5).
Marxism was a somewhat different issue. Marx’s original theory of history had something to say about when and at which point in industrial and economic development a revolution of the proletariat would happen (Marx and Engels thought that the revolution would take place where capitalism was fully developed, such as in Western Europe). This was a testable prediction, and it was falsified, as no revolution occurred in a mature capitalist system (only in Soviet Russia, where capitalism was less developed, if developed at all). Because the Marxist theory was falsifiable, it was a full-blooded scientific theory. “Yet instead of accepting the refutations,” wrote Popper (Reference Popper1957/1969, 37), “the followers of Marx re-interpreted both the theory and the evidence in order to make them agree.” The theory has been rescued and maintained by its followers ever since, but “they did so at the price of adopting a device which made it irrefutable.” Hence, whereas psychoanalysis was unfalsifiable from the very beginning, Marxism was a falsifiable and, in fact, a falsified scientific theory, but it became “a metaphysical dream … married to a cruel reality,” said Popper (Reference Popper and Schilpp1974b, 985). While this marks a difference between them, it also points to an important feature of Popper’s project of demarcation: Pseudosciences are either theories that are unfalsifiable (like psychoanalysis) or theories that have been falsified and are still upheld by the community as scientific truths (like the Marxist theory of history).
Falsification as demarcation had genuine appeal across scientific communities for decades. Even after its demise among philosophers (see Section 3), most scientists continued to endorse it as a settled result of the philosophy of science.Footnote 11
2.2 Falsification in the Courtroom: Evolution vs. Creationism
While the demarcation problem originally concerned science and metaphysics, it became associated with pseudoscience in 1953, when Popper (Reference Popper1957/1969, 33) noted retrospectively that his problem was to “distinguish between science and pseudo-science” (original emphasis). Logik der Forschung did not contain any mention of “pseudoscience” or any typical examples that people would now consider pseudoscientific (like parapsychology, or homeopathy, which were already around in the 1920s). Nonetheless, even a brief glance at the literature reveals that Popper’s concern with demarcation was regularly taken to be about pseudoscience and not just metaphysics.
One of the most notable instances where Popper’s philosophy was applied to challenge pseudoscience occurred in legal debates concerning creationism. In general, creationism denotes the view that God created the world; by this definition, all followers of Abrahamic religions – Jews, Christians, and Muslims alike – are creationists. In practice, however, the label usually describes more radical views: The most extreme stance is taken by the so-called “young Earth creationists,” who think that the story of the creation of the world described in the Book of Genesis is literally accurate, meaning the whole cosmos, including the Earth and everything else, was created in six days about 6,000 years ago. A weaker standpoint claims that this should not be taken literally, and that “one day” does not mean 24 hours but a more extended period. These views about creation are widespread in the United States, especially in the politically conservative, strongly religious southern states known as the Bible Belt.
A branch of young Earth creationism that became influential in the 1970s and 80s, called scientific creationism or creation science, attempted to give the appearance of scientific credibility by using scientific language and selectively employing astronomical, geological, and paleontological data. It was taught at many private religious colleges, which established research facilities, journals, think tanks, and even museums for the interested public. Proponents of scientific creationism, such as Henry M. Morris and Duane Gish, sought to magnify debates within science over the details of evolution, and to position “creation science” as a legitimate alternative to “evolutionary science” (their neologism) (Numbers Reference Numbers2006, 268–285).Footnote 12
The issue centers on what should be taught in taxpayer-funded public schools, raising broader concerns about the separation of state and religion and citizens’ freedom of conscience. From this angle, it becomes clear why creationists stress that they are promoting “science.” By framing Darwinism as “only a theory,” rather than a fact, and presenting creationism as a scientific concept rather than a religious one, they aim to position both as equally valid perspectives. This would require – under a purportedly neutral approach to public education – that both theories receive equal time and treatment, falsely implying that they hold comparable scientific credibility.
During the twentieth century, the representatives of creationism had a chance to assert their educational policy ideas in numerous lawsuits. One of the most famous of these was the 1981 case of McLean v. Arkansas, which challenged the state’s Equal Time Act 590 on the grounds that it violated the US Constitution’s First Amendment concerning the separation of church and state. The stakes were high: Can creation science be considered a scientific idea or not? If it is, then it can be taught alongside evolutionary theory. Famous experts such as the paleontologist Stephen Jay Gould, the evolutionary biologist Francisco Ayala, and the philosopher of science Michael Ruse testified in the legal proceedings, receiving significant media coverage. There, speaking about the nature of science, Ruse emphasized the search for unchanging natural regularities (against the supernatural elements of creation science), the ability to explain via laws, and the properties of being testable, falsifiable, and tentative. When asked about falsifiability, Ruse relied on Popper’s name and idea, testifying that “unless something was subject to falsification, it was not scientific” (in Ruse Reference Ruse1996, 302). At the end of the trial, Judge William Overton quoted from Ruse’s testimony, noting that creation science is not science because it lacks these scientific features (in Ruse Reference Ruse1996, 318–319); namely, it is not falsifiable and thus appears to be a religious idea, so it cannot be taught in public schools. At the end of the day, Popper and Popperian philosophers of science seemed to have saved public education and the prestige of science from religion and pseudoscience.Footnote 13
Somewhat ironically, defenders of creationism frequently refer to Popper in arguing that no scientific theory – evolution included – can be finalized and confirmed once and for all. In the words of Gish (Reference Gish1995, 6), “the architects of the modern synthetic theory of evolution have so skillfully constructed their theory that it is not capable of falsification. The theory is so plastic that it is capable of explaining anything.” It is worth mentioning that Popper (Reference Popper1974a) also unintentionally aided the creationists when he argued in the early 1970s that Darwinism is a “metaphysical research program.” What was the basis for this claim? Known as the tautology problem, the argument is that natural selection is not based on empirical observation but on a priori reasoning, given that the “survival of the fittest” is a tautology (Popper Reference Popper1972, 241–242). Who survives? The fittest. And who are the fittest? Those who survive. Thus, the principle of natural selection says nothing about the empirical world and is consequently untestable and ultimately unscientific, like psychoanalysis in being immune to refutation. Of course, this is not to say that Popper became anti-evolutionary; his own epistemological approach was also grounded in evolutionary principles (see Hull Reference Hull1999; Elgin and Sober Reference Elgin and Sober2017).
Although Popper (Reference Popper1978, 339–355) had revised his earlier statements by the end of the 1970s, and philosophers of biology had provided a satisfactory solution to the tautology problem (Ruse Reference Ruse1977),Footnote 14 creationists eagerly used Popper’s former views to legitimize their own. If the theory of evolution is not scientific according to Popperian criteria,Footnote 15 yet it is still taught in schools, then on what basis should creationism not be part of the curriculum? Therefore, the rhetorical function of falsificationism is twofold: It both challenges the theory of evolution and legitimizes creationism.
2.3 Initial Critiques of Falsificationism
Not that there was nothing to criticize within the falsificationist idea itself. Does science indeed work the way Popper described? Do scientists give up a theory after one instance of falsification (after an experimentum crucis), or do they try to tweak their theories and experiments to accommodate the new data?
One of the early critics of Popper’s demarcation within the philosophy of science was Thomas Kuhn (Reference Kuhn1962, 77), who argued that “no process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification.” Both Kuhn (Reference Kuhn, Lakatos and Musgrave1970, 4) and Popper shared an interest in the history of science and in “the way science has actually been practiced,” though they diverged in the conclusions they drew from historical data. Popper focused on moments of revolution and crucial experiments, offering an account of how science works and what characterizes it (like testability and falsification, either to explore the limitations of a theory or to maximally “strain” them (Kuhn Reference Kuhn, Lakatos and Musgrave1970, 5)). Kuhn thought that “episodes like these are very rare in the development of science,” and that what characterizes science are its “normal” periods. “Normal science” consists in the usual work of a scientist who accepts a given paradigm and aims to solve its puzzles, following all the explicit and tacit norms and routines of the paradigm into which she was socialized. In normal science, there are no revolutions, no major tests, or crucial experiments, and when there is a failure, it is the researchers’ fault (as they were unable to solve a puzzle) and not the theory’s (the rightness of the paradigm prevails). If someone is unable to find a solution in due time, then an anomaly will emerge that could later lead to a crisis and to a revolution, but that is a long process (Kuhn Reference Kuhn1962).
Since, according to Kuhn (Reference Kuhn, Lakatos and Musgrave1970, 6), falsification does not occur regularly in the sciences, Popper “has characterized the entire scientific enterprise in terms that apply only to its occasional revolutionary parts.”Footnote 16 “If a demarcation criterion exists,” concluded Kuhn (Reference Kuhn, Lakatos and Musgrave1970, 6), “it may lie just in that part of science which Sir Karl ignores,” though he hastened to add that “we must not, I think, seek a sharp or decisive” criterion. To be scientific means to have a shared paradigm (that is why Kuhn (Reference Kuhn1962, 15) said that the social sciences are still in a “pre-paradigmatic” stage), to pursue normal scientific work, and to solve paradigm-induced internal puzzles; “to turn Sir Karl’s view on its head, it is precisely the abandonment of critical discourse that marks the transition to a science” (Reference Kuhn, Lakatos and Musgrave1970, 6).Footnote 17
This applies to pseudoscience as follows. When Kuhn (Reference Kuhn, Lakatos and Musgrave1970, 8) discussed the case of astrology, he noted that while Popper rejected astrology because its followers make statements so vague that no falsification is possible, astrologers actually made “many predictions that categorically failed,” and thus their unscientific character is not due to a failure to make falsifiable predictions. Popper might argue that even though their predictions are falsified, the adherents of astrology explain away every counterexample and failed test. Kuhn argued, however, following the work of historians of science like Lynn Thorndike, that astrologers had good reasons to explain their failures, since astrological predictions are, for example, extremely complex and highly sensitive to (biographical) data that people very often supply erroneously. “There was nothing unscientific about the astrologer’s explanation of failure” (Kuhn Reference Kuhn, Lakatos and Musgrave1970, 8). Nevertheless, astrology was not a science for Kuhn (Reference Kuhn, Lakatos and Musgrave1970, 9), mainly because “there was never an astrological equivalent of the puzzle-solving astronomical tradition.”
Another critic of both Popper and Kuhn was the internationally renowned Hungarian philosopher of science Imre Lakatos. He argued that instead of talking about individual hypotheses or theories, and their crucial tests, we should divide scientific theories into a so-called “hard core,” that is, the basic tenets or ideas that are treated as irrefutable by fiat, and a “protective belt of auxiliary hypotheses” that are under constant change and revision (see Lakatos Reference Lakatos, Lakatos and Musgrave1970).
Take his famous example of a partially imagined scenario regarding Newtonian gravitational theory (Lakatos Reference Lakatos1968, 169–170). Newton’s mechanics and the law of gravitation are the hard core, surrounded by other issues (the protective belt), such as the initial conditions of the planetary system and various observational theories, including those concerning the experimental machinery. When scientists used Newton’s celestial mechanics to predict the position of Uranus in the sky, they found that the actual position deviated slightly from that predicted by the theory. They then had to reconcile their observations with the hard core of the theory because they were reluctant to give it up. Consequently, they started to propose various alternative hypotheses about the initial conditions. (As Lakatos (Reference Lakatos, Worrall and Currie1974/1978, 149–150) wrote, “Nature may shout NO, but human ingenuity – contrary to Weyl and Popper – may always be able to shout louder.”) They posited that there was a hitherto unknown planet whose gravitational force affected the movement of the planet in question. Using a new experimental setting, scientists looked for that new planet to (dis)prove the idea, leaving the theory’s hard core untouched. This was how Urbain Le Verrier explained some deviations in Uranus’ movements: He suggested the existence of another planet (Neptune), which was confirmed in 1846. But if no such planet were to be found, still further revisions could be introduced to the auxiliary hypotheses. To explain disturbances in the motion of Mercury as well, and thus to save the Newtonian theory, Le Verrier introduced another planet, called Vulcan, but that planet never materialized. Scientists can go on and on, coming up with considerations of how lenses, optics, and telescopes could be adjusted, none of which would have any bearing on the fundamental ideas of planetary motion.
Similar to Kuhn, and unlike Popper, Lakatos accepted the important role of anomalies and did not conceive of them as elements that would necessarily falsify a theory. He noted: “This methodological attitude of treating as anomalies what Popper would regard as counterexamples is commonly accepted by the best scientists. Some of the research programmes now held in highest esteem by the scientific community progressed in an ocean of anomalies” (Lakatos Reference Lakatos, Worrall and Currie1974/1978, 147). Negative instances, counterexamples, or failed tests do not discount or falsify anything; they are often turned into cases, examples of the theory. Lakatos thought that theories and their variations (a solid hard core with constantly changing auxiliary hypotheses), that is, sequences of theories, add up to research programs, and it is rational to stick to such a program until it has been superseded.
For Lakatos, the demarcation line therefore fell between good (progressive) and bad (degenerative) research programs. A research program is theoretically progressive when a new theory has more empirical content than the previous one, that is, “if it predicts some novel, hitherto unexpected fact” (Reference Lakatos, Lakatos and Musgrave1970, 118). Besides that, a program is called empirically progressive when this new empirical content becomes corroborated, that is, if the new theory “leads us to the actual discovery of some new fact” (Reference Lakatos, Lakatos and Musgrave1970, 118, original emphasis). A research program is called degenerative if it is neither theoretically nor empirically progressive (Reference Lakatos, Lakatos and Musgrave1970, 118).Footnote 18 This idea was quite flexible, in contrast to Popper’s: Research programs could tolerate falsification (falsifiability is not equated with being scientific) and even occasional inconsistencies by putting them into “quarantine” for a while (Lakatos Reference Lakatos, Lakatos and Musgrave1970, 143).
Although Lakatos thought that the hard cores of theories are immune to falsification, during the twentieth century, philosophers of science (like Rudolf Carnap, Otto Neurath, and W.V.O. Quine) came up with the idea that nothing is immune to revision, and that even logic and mathematics, the tools to make predictions and deductions, could also be modified, though this rarely happens. Any parts of the predictive/experimental process can be replaced, retained or modified, and much of the debate within philosophy of science in the 1970s and 1980s focused on the rationality of scientific development, namely, where to introduce modifications, up to which point it is rational to stick to an idea in the face of falsification, and so on (see Rowbottom Reference Rowbottom2023). Lakatos said, for example, that there is no falsification “before the emergence of a better theory” (Reference Lakatos, Lakatos and Musgrave1970, 119), and one is entitled to stick to the old one until it is “at least theoretically progressive” (Reference Lakatos, Lakatos and Musgrave1970, 119). Thus, even if pseudoscientists are able to immunize their theories from anomalies, counterexamples and problems, they do not generate new empirical content (i.e., predictions about new and unexpected facts).Footnote 19 Although the rationality debate is not directly related to the issue of pseudoscience, it still involves demarcation, since in many cases, we find pseudoscientists clinging to old, refuted ideas, for which they are condemned by representatives of the mainstream (see, for example, Gordin Reference Gordin2023).
Recently, Sabine Hossenfelder has argued that fundamental theories of physics, especially string theory, have suffered major setbacks in the last three decades, as experimental confirmations (or even disconfirmations!) have become rare, if not nonexistent. Nonetheless, defenders of the fundamental doctrines are still advocating their pet theories, very often for aesthetic values (such as their supposed elegance). After a while, “facts get sparse and your onward journey is blocked by theoreticians arguing whose theory is prettier” (Hossenfelder Reference Hossenfelder2018, 20). This would seemingly constitute a typical degenerative research program in Lakatos’ sense, although the experimental set-ups of particle physics may also have important engineering and other practical offshoots. How long can one cling to a theory if actual, concrete confirmations (or disconfirmations, for that matter) are sparse? And what radical changes need to be introduced, turning a whole respected field upside down, to (re)initiate progress?
2.4 Taking Philosophical Stock
These are important and substantial philosophical questions that are, by virtue of their philosophical nature, not settled. And by being open to challenges, suggestions, personal opinions, or biased preconceptions, they act as motivation for the pseudoscientific camp, who exploit this space of revision for their own benefit. How many auxiliary hypotheses have to be, or should be, introduced to provide explanations for all the emerging (dis)confirming evidence? Is it a logical or rather, as many philosophers and historians of science would think, a pragmatic decision? If “the imaginative scientist [can] save his pet theory by suitable lucky alterations in some odd corner of the theoretical maze,” and there is no “mechanical, semi-mechanical or at least fast-acting method of showing up falsehood” (Lakatos Reference Lakatos, Worrall and Currie1974/1978, 147, 149), the practice of pseudoscientists immunizing themselves against criticism is encouraged.Footnote 20
It is also far from settled how to understand falsification exactly. Kirsten Walsh (Reference Walsh2009, 29) has distinguished three “distinct principles” in Popper’s view that she calls “Falsificationism.”
1. Logical falsifiability: A theory must make predictions that could be proven false;
2. Methodological falsifiability: Proponents of a logically falsifiable theory must be willing to test those predictions, and reject the theory if they prove to be false;
3. Corroboration: A theory that has survived many attempts at falsification is “corroborated,” and it is rational to prefer corroborated theories to theories that are uncorroborated.
In the sense of (1), most pseudoscientific theories are perhaps scientific after all, in that they have testable hypotheses, while some supposedly scientific theories are not (see the examples in the next section). But it is a further question how one responds to a falsified theory; if its advocates do not give it up in the face of counterexamples and disconfirming data, then they end up on the side of pseudoscience (as was the case with the falsified theory of Marxism). If a theory survives the test, then it is corroborated, and “it is rational to prefer corroborated theories to theories that are uncorroborated” (Walsh Reference Walsh2009, 29). Walsh thinks that Laudan (and, one might add, many others as well) “focuses only on (1), and seems not to recognize that (2) and (3) are just as crucial to Falsificationism” and to Popper, after all.
As we have seen, Popper thought, or at least he is usually interpreted to have thought, that a crucial experiment could decide the fate of a theory: If it is disproven, it becomes falsified, and thus it is not rational from a scientific point of view to adhere to it. But others, like Kuhn and Lakatos, argued that commitment to a theory does not work that way. This naturally leads to the following question: Up to which point is it rational to adhere to a disconfirmed theory? Lakatos’ suggestion about progressiveness provided a significant and substantial answer, but given its internal problems (see, for example, Radder Reference Radder1982), the Lakatosian idea of a research program has not caught on among philosophers and historians of science. Even if a program (or sequence of theories) is theoretically progressive, and thus scientific, it is not guaranteed that it will be empirically progressive as well, and there are no algorithmic ways to calculate the chances of theories. There are scientific theories with existential import that were not confirmed empirically for decades (think of gravitational waves or the Higgs boson), and confidence in them did depend on many factors besides their alleged truth and occasional progressiveness.
In the history of the demarcation problem, none of this really mattered, as it turns out that falsification and the larger project of searching for strict demarcation criteria are both nonstarters. And some scholars have thought so all along.
3 Laudan and the Demise of Demarcation
What are we to do with Popper’s idea of falsification as demarcation if science does not work as the ideal suggests? To put it more generally, is the demarcation program in need of some revision, or should it be rejected altogether? And, most importantly, is there any reason to uphold the search for a final, necessary, and sufficient condition that would demarcate all science(s) from all pseudoscience once and for all? Already some Popperians and quasi-Popperians, like Lakatos, recognized that Popper’s criterion needed significant adjustment to encompass the actual, historically, and sociologically embedded practices of scientists. Rooted as it was in philosophy, falsification as the unique feature of science thus started to fall apart, and by the time sociologists of science entered the scene in the 1970s, much bigger questions were at stake in scientific laboratories. Can and should philosophers demarcate science at all?
In an influential 1983 article, the American philosopher of science Larry Laudan (a well-known critic of positivist, sociological, and relativist approaches) argued that the situation is, in fact, worse – there is no need to revise falsification, or the project of demarcation, since all of it should be consigned to the dustbin of history. Laudan’s paper, titled “The Demise of the Demarcation Problem,” deserves our close attention; in it, he offers a general, supposedly fundamental and fatal critique of the demarcation question.
What then are the characteristic features of the demarcation criterion that is supposed to divide science from pseudoscience? Laudan (Reference Laudan, Cohen and Laudan1983, 117) breaks down the problem into three sub-questions, also reflecting on the metaphilosophical stakes of the problem: (1) What conditions of adequacy should a proposed demarcation criterion satisfy? (2) Is the criterion under consideration necessary or sufficient, or necessary and sufficient for scientific status? Finally, (3) What actions or judgments are implied by the claim that a certain belief or activity is “scientific” or “unscientific”? Let’s take a closer look.
3.1 Conditions of Adequacy for Demarcation
When considering the demarcation criterion, one has to rely on received norms and cannot always go completely against expectations and established procedures. If there is a well-confirmed scientific practice, philosophers should accommodate the common view. Few people would find a criterion acceptable or enlightening that does not count particle physics and germ theory among the sciences; and similarly, if flat-Earth theory, psychic phenomena, the orgone, Lysenkoism, feng shui, or ufology were to end up on the list of sciences, one would immediately search for the source of the philosophers’ biased mistake. If an armchair-bound philosopher were to come up with an idea that defines Hamiltonian mechanics as pseudoscience (despite all previous confirmations) and in turn elevated magnet therapy and psychic surgery to the status of science, it would be surprising if any physicists were kicked out of the university for practicing pseudoscience. (Note, however, that Popper, as we have seen in Section 2.2, had a considerable influence in the courtroom, and thus on education and public policy.) As Laudan (Reference Laudan, Cohen and Laudan1983, 117) notes, one must “render explicit those shared but largely implicit sorting mechanisms whereby most of us can agree about paradigmatic cases of the scientific and non-scientific.”
This is obviously just a guiding principle, a rule of thumb, not a strict dividing line. If physicists were to find evidence (whatever that might be) against quantum field theory, they would be able to disqualify it from the ranks of scientific theories. A number of practitioners (like defenders of the old theory) would probably oppose this change, and many would certainly dispute the new take on the subject (requiring more evidence, or simply caricaturing the new view as yet unsupported). But the point is not that major and fundamental scientific theories cannot end up among the pseudosciences, and vice versa, but that in searching for a workable criterion, our only hope, at least at first, is the “existing pattern of usage” (Laudan Reference Laudan, Cohen and Laudan1983, 117). And that is exactly the problem for many: Even if we are able to come up with a mainly consensual list of what falls on the scientific and what falls on the pseudoscientific side of the debate, the most interesting cases – at least from a philosophical point of view – are those that can change categories or are as yet undecided. It is quite well-documented how so-called fringe theories (often simply called pseudosciences) ended up in the mainstream after a while. Though meteorites, the giant squid, milky seas, germ theory, and the Garcia effect are consensually, and warrantedly, accepted nowadays, things were rather different a few decades or a century ago (see Gradowski (ms.) about these examples). There is thus a well-established and historically proven path from the fringe to the mainstream.
To give another example: As children in the early 1990s, we learned in school that the surface of the Earth is the product of moving tectonic plates, stemming from the idea that all continents drifted away from a single original landmass. Nonetheless, just three decades before that, “plate tectonics” and “continental drift” were still much debated and even ridiculed ideas (originating in the 1910s with Alfred Wegener), not consensually accepted by the community of respected geologists. It remained a fringe theory until the scientific landscape changed in the 1960s: As more evidence was gathered and geologists’ attitudes toward the theory/empirical evidence question shifted, continental drift slowly became the mainstream view. Scientific theories come and go, and what counts as paradigmatic – plate tectonics is one such theory now – is also relative to time and place, and not just to method (see Oreskes Reference Oreskes1999). The problem is often caused by those interesting cases in the here and now that are on the fringe (e.g., certain parts of cryptozoology that were scientifically rejected but later accepted (France Reference France2020); entities attested at first only by traditional ecological knowledge but later found and recognized by mainstream science (Rossi, Gippoliti, and Angelici Reference Rossi, Gippoliti and Maria Angelici2018); or specific interpretations and developments in quantum mechanics that were originally ridiculed and rejected but later led to significant physical discoveries and insights (Kaiser Reference Kaiser2011)). They are not obviously pseudoscientific but are nonetheless considered fringe by many for a reason. For example, they violate certain basic rules of science or ethics; or they lack something, be it scientific rigor, objectivity, systematicity, or integrity, and that makes them suspicious. This raises the question: How do we make up our minds about them?
However, for philosophy of science, continues Laudan, it is not enough to come up with a quasi-consensual list, leaving all the dubious cases in the waiting room (or in Lakatos’ quarantine). Even if one can separate many obviously scientific ideas from conventionally pseudoscientific ones, what one needs to do as a philosopher is to identify epistemic or methodological features that back up this distinction – put differently, philosophers require principled reasons, not sociological descriptions of received patterns. A “surer epistemic warrant” is needed to ensure that this project is even of minimal philosophical interest, and if nothing like that can be secured, then “the demarcation between science and non-science would turn out to be of little or no philosophic significance” (Laudan Reference Laudan, Cohen and Laudan1983, 118). This does not mean, as a matter of fact, that demarcation would not be important from an institutional, sociological, or even psychological point of view. Although philosophers are revisiting the problem, it has been the natural and medical scientists, along with social psychologists, who have devoted themselves to studying pseudoscience in recent decades. The question thus remains: What can philosophers contribute?
3.2 Necessary Conditions of Demarcation
From a more formal perspective, it also needs to be determined whether the criterion, or criteria, should be necessary or sufficient, or both. The ideal outcome would be necessary and sufficient conditions, as this would give us the strongest possible definition (something is a science if and only if …). Laudan is pessimistic about such strictness. He reviewed various demarcation projects from Aristotle to Mach and found that none of them achieved a lasting consensus among philosophers and scientists. There was quite a significant divergence of views, and the absence of a plausible and enduring candidate for the demarcation principle led Laudan (Reference Laudan, Cohen and Laudan1983, 211–215, 218–222) to conclude that there could not be any.Footnote 21 Any necessary and sufficient condition would be a genuinely unique feature and would sharply and unambiguously divide science from pseudoscience. However, as the sciences are undergoing constant change and development, and have their own gray zones (which usually turn out, after some time, to be either valid or invalid), such unique features would not do the job adequately. Although he does not spend much time discussing the heterogeneity of the sciences (which would back up his pessimistic induction), Laudan recurrently hints at the fact that a necessary and sufficient condition would violate the diversity of what we call science.
Therefore, Laudan spends more time on other possibilities. Let’s imagine a necessary but not sufficient condition. For example, to develop a viral infection, it is necessary to come into contact with the virus, but it is not sufficient, because further conditions must apply – for example, being nonresistant and having a weakened immune system. In our context, given a certain criterion and a certain theory, if the theory is scientific, then it necessarily has the characteristics specified by the criterion, and if the theory does not have them, then the theory is not scientific. With such a criterion, one would definitely know if something were unscientific (it lacks a necessary characteristic that all sciences share), but in itself, the criterion does not tell us whether everything that meets the necessary condition is actually scientific.
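The logical structure of such a criterion can be stated schematically (a sketch in first-order notation; “Sci” and “C” are labels introduced here for illustration, not Laudan’s own symbols):

```latex
% C as a necessary (but not sufficient) condition of scientific status:
\forall T \,\bigl(\mathrm{Sci}(T) \rightarrow C(T)\bigr)
% By contraposition, failing C licenses a negative verdict:
\forall T \,\bigl(\lnot C(T) \rightarrow \lnot \mathrm{Sci}(T)\bigr)
% But C(T) alone does not entail Sci(T); the converse inference
% is simply not available.
```

A necessary condition thus works only as a filter: it can exclude candidates from science, but it can never certify anything as scientific.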
Let’s assume that making testable and thus falsifiable statements is a necessary but not sufficient condition of being scientific, that is, only those theories can be scientific that make testable statements. Flat-Earth theory and Erich von Däniken’s conception of ancient astronauts – the view that aliens played a major role in the ancient history of the Earth, and that this can be proven by historical and geological evidence – satisfy this necessary criterion. Both make claims about the empirical world, either about the actual surface and shape of the Earth or about the origins of enormous and mysterious structures (like the pyramids), and very often they indeed address empirical and experimental questions, contrary to most religious theories, which make only supernatural (i.e., nonempirical, non-testable) claims, almost by definition. Nonetheless, neither flat-Earth theory nor ancient astronauts (or the idea of ancient aliens) count among the sciences.
Or to give a more recent example, take systematicity: Paul Hoyningen-Huene (Reference Hoyningen-Huene2013, 14) has argued that “scientific knowledge differs from other kinds of knowledge, in particular from everyday knowledge, primarily by being more systematic.” It is admittedly hard to define “systematicity,” especially in the context of science. Surely it has an epistemic, knowledge-oriented component; otherwise the systematic collection of stamps or the systematic loading of the dishwasher would count. But the production of knowledge does not suffice either, because a teacher systematically counting all the children after a picnic aims at knowing the number of students, though hardly anyone would consider it a scientific practice (Hoyningen-Huene Reference Hoyningen-Huene2019, 908, 909). Hoyningen-Huene goes over several dimensions of scientific knowledge (the descriptions it provides; the explanations and predictions it makes; the ways it defends knowledge claims, engages in critical discourse, exemplifies epistemic connectedness, approaches the ideal of completeness, and generates and finally represents knowledge) and argues that what is usually deemed science shows a greater degree of systematicity in all or some of these dimensions than everyday knowledge.
Hoyningen-Huene (Reference Hoyningen-Huene2019) admitted that his book was quite ambiguous about the status of systematicity as a criterion. If taken as a necessary feature of science, it is quite evident that there are many systematic things that are obviously not scientific, although they aim at knowledge in a specific way.Footnote 22 Naomi Oreskes has shown, for example, that in certain cases, fields of inquiry that are consensually taken to be pseudoscientific still manage to be very systematic. Her examples are homeopathy, creation science, and climate change denialism. Advocates of all three fields publish in peer-reviewed journals, construct arguments based on skeptical premises, collect evidence and evaluate it, and come up with hypotheses. These are organized forms of research, with textbooks, educational syllabi, conferences, proceedings, and research facilities. Even climate change denialism “lays out an argument, offers evidence for it, provides counter-arguments to mainstream science, and supports its claims with tables, graphs, and illustrations” (Oreskes Reference Oreskes2019, 897). In short, all of them are very systematic in a sense, and surely more systematic than everyday knowledge-seeking practices. Nonetheless, all three are typical examples of pseudoscientific endeavors.Footnote 23
As we saw, there are indeed many things that satisfy the proposed necessary criteria but are not scientific. But things can get even worse, as one could find examples that are not systematic under certain renderings of the concept but would still count as scientific. Take speculation, which is anything but a systematic form of scientific research. In his recent book on speculation in science, Peter Achinstein identified three traditional views on the role of speculation, which he described with the following catchy phrases: “don’t speculate,” “speculate, but test,” and “speculate like mad even if you can’t test” (Reference Achinstein2019, xi). Speculation, a term that easily evades attempts at precise definition, lies at the core of countless major events in the history of science. One could argue that speculation is reserved for the so-called “context of discovery,” and that what demarcates science from pseudoscience is exactly the possibility of systematically testing those speculations in the “context of justification” (“speculate, but test”). But if one accepts the blurring of the distinction between these “contexts” (see Schickore and Steinle Reference Schickore and Steinle2006), then it becomes much harder to separate speculation and testing. This means, in our context, that there would be scientific practices that are more speculative and less systematic (like the investigation of rare astronomical events, or the interpretative understanding of unique zoological extinctions), and indeed, systematicity might not even be necessary.Footnote 24
Perhaps there are more obvious examples of cases where something does not meet the proposed necessary criterion but nonetheless counts among the sciences, and again, for good reasons. Take testability: The most obvious example would be logic and mathematics, which are not directly testable, although they are substantially used within empirical theories that allow for precise predictions and the collection of evidence. In such cases, however, a successful match between prediction and data is taken to confirm the empirical theory, not the mathematics or the logic, which are deemed true in themselves. It would hardly make sense to consider logic and mathematics pseudoscientific just because they do not exemplify the necessary condition of being empirically testable, and thus falsifiable.
Two notes are in order here: First, Popper would presumably have no problem with this, as he wanted to demarcate mathematics and logic from the empirical sciences anyway. He wrote that the problem of demarcation was the “problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other” (Reference Popper1959/2002, 11). This indicates that mathematics and logic do not fall on the side of empirical science, but it would be hard to assume that they were exactly like metaphysics or pseudoscience. Rather, one might assume that mathematics and logic are analytic in the fashion of logical positivism, or that they have a different function than empirical science, so that falsifiability does not even arise in their case. And secondly, it is not as if mathematics were immune to pseudoscientific and fringe ideas, like squaring the circle or doubling the cube; Augustus De Morgan (Reference De Morgan1915, 338) even coined the term “pseudomath” for those whose practices elude the virtues of traditional mathematics. In the case of applied mathematical theories like statistics, social interests and agendas can significantly shape the development and form of mathematical theorizing (see Mackenzie Reference Mackenzie1981) and thus open the door to fraud, junk science, overinterpretation, and overconfident misapplication.
In the case of climate science, the testability of models is a multidimensional, complex question. If one considers only the physical, biological, and chemical data at hand, one is moving on very uncertain and shaky grounds; but in the predictions of climate science, human agency (which is a social factor beyond the physical, biological, or chemical) is of greater importance, along with interpretations of probability and modes of modeling (see the studies in Lloyd and Winsberg Reference Winsberg2018). Climate science predictions can be tested up to a point and usually via models, but if one has in mind a complete and detailed actual test of theoretical predictions and empirical observations, then climate science is certainly far from meeting the criterion of testability in full.Footnote 25
But it is not just climate science that struggles with predictions and testing. String theory, one of the main concepts in physics, is not doing much better – in fact, it is in rather worse shape. According to Richard Dawid (Reference Dawid, Dardashti, Dawid and Thébault2019, 99), “Fundamental physics today faces the problem that empirical testing of its core hypotheses is very difficult to achieve and even more difficult to be made conclusive” (see also Dawid Reference Dawid2013). One could be forgiven for thinking that we are again witnessing a degenerative research program à la Lakatos, but what happens is that empirical confirmation and testability are often replaced by other theoretical virtues, such as contrastive-selective reasoning (a given theory faces fewer difficulties than its alternatives), and thus the notion of trust has become valorized. As Dawid (Reference Dawid, Dardashti, Dawid and Thébault2019, 100) notes, string theory is the leading physical approach now “and is trusted to a high degree by many of its exponents in the absence of either empirical confirmation or even a full understanding of what the theory amounts to.” Similar issues of trust and belief plague cosmic inflation, multiverse theories and many other ideas within the physical sciences. Giving credit to “non-empirical theory assessment” (based on trust, viability, and even aesthetics) “has turned from a fringe topic in physics into a question at the core of the field’s self-definition” (Dawid Reference Dawid, Dardashti, Dawid and Thébault2019, 100).
Another interesting example could be the case of the historical sciences, such as paleontology and geology (Currie and Turner Reference Currie and Turner2016). The problem, in a nutshell, is that historical scientists do not have direct, repeatable access to the events or entities they investigate (e.g., extinct species, geological events). Instead, they rely on indirect evidence – fossils, isotopic data, or traces of past climates – to make inferences about past phenomena, which cannot be observed or replicated in the present. This epistemic disadvantage is what Derek Turner (Reference Turner2007) calls the asymmetry of manipulability. For example, how can one know anything about the color of dinosaurs? Paleontologists must rely on highly incomplete traces, like fragments of fossilized skin or feathers, and use various advanced technologies based on various background assumptions (so-called midrange theories) to extract information and formulate their hypotheses.Footnote 26 According to Turner, the roles of these background theories are different in historical and experimental sciences. He summarizes this difference, what he calls the role asymmetry of background theories:
In historical science, background theories all too often tell us how historical processes destroy evidence over time, almost like a criminal removing potential clues from a crime scene. […] In experimental science, by contrast, background theories more often suggest ways of creating new empirical evidence.
As he concludes: “The asymmetry of manipulability and the role asymmetry of background theories really do place historical researchers at a relative epistemic disadvantage” (Turner Reference Turner2007, 6).
Similarly, in the context of social sciences like ethnography, it is hardly possible to infer the actual meaning and contemporary significance of partial archeological findings (like shreds of cloth with colorful figures and splinters of vessels with fragmentary paintings) dating back thousands of years. Although many of these motifs may surface later in tales, folk songs, and proverbs that are only a few centuries old, the causal linkage between them is often absent, and there are hardly any bulletproof methods of validating historical data. Ethnographic analogies, often used to draw connections between ancient artifacts and more recent cultural practices, face significant challenges. This is because human societies and behaviors are “path-dependent, interdependent, and highly context-sensitive,” which makes straightforward analogical inferences problematic (Currie Reference Currie2016, 7).
3.3 A Sufficient Condition of Demarcation
Moving on, one could propose a sufficient (but not at all necessary) condition; in everyday life, it is sufficient for your car to break down if it runs out of gasoline, but it is not necessary at all since motorcars can break down for a variety of reasons even with a full tank of gasoline. In our context, if being testable is sufficient for being scientific, then whatever is testable is scientific. But meeting a sufficient condition would only tell us whether a theory is scientific, without being able to declare anything as unscientific, since there might be other ways of being a science. Not meeting a sufficient condition would leave us in “a kind of epistemic, twilight zone” (Laudan Reference Laudan, Cohen and Laudan1983, 119), unable to determine much about the future of the theory, and would certainly not help philosophers, scientists, policymakers, and the public in their project of demarcation.
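The contrast with a necessary condition can again be put schematically (a sketch in first-order notation; “Sci” and “C” are labels introduced here for illustration, not Laudan’s own symbols):

```latex
% C as a sufficient (but not necessary) condition of scientific status:
\forall T \,\bigl(C(T) \rightarrow \mathrm{Sci}(T)\bigr)
% Meeting C certifies T as scientific; failing C settles nothing:
% \lnot C(T) entails neither \mathrm{Sci}(T) nor \lnot\mathrm{Sci}(T).
```

A sufficient condition can thus only admit, never exclude – the mirror image of the filter provided by a necessary condition, and the source of Laudan’s “twilight zone.”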
And with conditions that are too weak, such as falsifiability, even larger problems loom. As Laudan famously put it, perhaps a bit overinterpreting Popper,
[Popper’s criterion] has the untoward consequence of countenancing as “scientific” every crank claim that makes ascertainably false assertions. Thus flat Earthers, biblical creationists, proponents of laetrile or orgone boxes, Uri Geller devotees, Bermuda Triangulators, circle squarers, Lysenkoists, charioteers of the gods, perpetuum mobile builders, Big Foot searchers, Loch Nessians, faith healers, polywater dabblers, Rosicrucians, the-world-is-about-to-enders, primal screamers, water diviners, magicians, and astrologers all turn out to be scientific on Popper’s criterion – just so long as they are prepared to indicate some observation, however improbable, which (if it came to pass) would cause them to change their minds.
If a theory makes any testable and falsifiable prediction, then it is scientific. In the 1930s, Wilhelm Reich purported to have developed a special artifact, a so-called orgone box. The box was supposed to enclose and contain a universal life force or special energy (the orgone) that had healing effects. By every indication, the theory and the box were nothing but nonsense. However, it seems to make the following falsifiable prediction: If you put yourself in an orgone box, you will experience a significant improvement in whatever health issue you were facing before, due to the immaterial, omnipresent, living orgone energy that it captures and strengthens. This is a promise and thus a prediction. If things turn out to be just as promised, then the prediction has been fulfilled, the theory has been tested, and scientific status has been earned. If things do not change at all, then the theory may be falsified, and thus also earn scientific status. Obviously, since orgone boxes do not work, not many people within the scientific community and even among the public would like to see them gain this status.
The problem with testability (and thus with falsifiability) is not only that any theory that is at least in principle testable, or has some degree of testability, becomes scientific, but also that under this criterion, the scientific status of theories is not “a matter of evidential support or belief-worthiness, for all sorts of ill-founded claims are testable and thus scientific” (Laudan Reference Laudan, Cohen and Laudan1983, 122). Testability thus fails to ground demarcation in a principled, well-established idea that yields proper and genuine reasons to opt for science over pseudoscience. For philosophers interested in more principled and reasoned ways of normatively settling basic problems – that is, what it is reasonable to believe and what does not meet the best criteria for being knowledge – such a Popperian demarcation that declares basically anything scientific is worthless.
3.4 The Practical Context of Demarcation
Answering the third question, “What actions or judgments are implied by the claim that a certain belief or activity is ‘scientific’ or ‘unscientific’,” requires an examination of the labels that scientists use in the relevant debates. When people evaluate a theory and judge it to be (pseudo)scientific, they rarely do so for purely abstract, theoretical reasons (see Sections 1 and 4 for further assessments).
It is a trivial, but true, observation that science is a field of debates, and occasionally of battles. Scientists vehemently criticize each other’s theories and ideas, and as Popper argued, the essence of science is to test everything, many times, and with cruel honesty and detail – your own hypotheses included.Footnote 27 But while first-order scientific debates are conducted on the pages of scholarly journals and in conference rooms that are inaccessible to the public (very often simply due to the jargon used), the fights over the boundaries of science play out in public forums, with significant media coverage. The sociologist Thomas F. Gieryn (Reference Gieryn1999) has called the practice of scientists trying to convince the public about the (il)legitimacy of certain ideas boundary-work. These battles are primarily not theoretical, but have a broader dimension and impact: If someone is able to demonstrate the religious character of creation science, then it will be forbidden in the classroom; if someone can prove the ineffectiveness and dangers of a proposed medical practice (like homeopathy, bioelectromagnetic therapy, chiropractic or the rejection of vaccinations), then it will be banned from hospitals, and even legal procedures could be initiated against practicing swindlers and charlatans.
In the words of Laudan (Reference Laudan, Cohen and Laudan1983, 120), “the value-loaded character of the term ‘science’ (and its cognates) in our culture should make us realize that the labelling of a certain activity as ‘scientific’ or ‘unscientific’ has social and political ramifications which go well beyond the taxonomic task of sorting beliefs into two piles.”
Despite agreeing that creationism should not be taught in public schools, Laudan was critical of Ruse and Overton’s argument against it. If one accepts that falsifiability is the criterion of being scientific, he argues, then one must also accept that creationism is scientific, since it makes falsifiable, albeit – as most scientists would say – false claims. Laudan believes that according to this criterion, creationism is not pseudoscience, but bad science, and this is precisely what highlights the untenability of demarcation criteria and the meaninglessness of the demarcation problem itself. It is not surprising then that Popper is often cited by proponents of creationism when they argue that a scientific idea, such as the theory of evolution, can never be definitively proven (as noted in the previous section).
3.5 The Demise of Demarcation
The conclusion is evident for Laudan: He thinks that because of all the philosophical, methodological, historical, and other problems of demarcation, it should be abandoned once and for all. Closing his paper with a very strong verdict, Laudan (Reference Laudan, Cohen and Laudan1983, 125) notes that “If we would stand up and be counted on the side of reason, we ought to drop terms like ‘pseudo-science’ and ‘unscientific’ from our vocabulary; they are just hollow phrases which do only emotive work for us.” His conclusion about “the demise of demarcation” influenced and shaped the philosophy of science for decades. One survey of 176 members of the American Philosophy of Science Association showed that 89 percent thought that science does not have a universal demarcation criterion (Alters Reference Alters1997). It became the new received view among philosophers that while historical and sociological projects should continue to describe the boundaries of science as it has been conceived across space and time, normative and evaluative philosophical analysis does not have much to contribute. As Martin Mahner (Reference Mahner, Pigliucci and Boudry2013, 29) has written, summarizing the situation, “we find that the topic of demarcation has long gone out of fashion.”Footnote 28
If the history, philosophy, and sociology of science have shown that it is hardly possible to draw a strict and fundamental line between science and pseudoscience, one might ask, is there nothing else that can be done? The question also has practical consequences, beyond its theoretical piquancy – what kind of drugs should be supported and delivered to patients; what therapies can get public funding; what can be taught in the classroom; which theories and experts should be involved in governmental decision-making; who should be vaccinated and when; and should face masks be worn or not. Alas, this inability to clearly separate science that should be followed from charlatanry and hocus-pocus can have a major impact, for instance, on the body count during a pandemic.Footnote 29 But perhaps there are other ways for philosophers to pursue demarcation criteria in a principled and normative way, enabling them to guide anyone interested in knowing the difference.
4 Multi-criteria Demarcation
For a long time, Laudan’s diagnosis of the death of the demarcation problem seemed final. By the last quarter of the twentieth century, philosophers had lost interest in the question of the demarcation of science and the problem of pseudoscience in general. This does not mean, of course, that the question of pseudoscience as a public issue of practical relevance was no longer of interest; with the widespread use of the Internet and social media and new ecological and social problems, the visibility of science denialism and pseudoscience has arguably increased (Specter Reference Specter2009).
In social life and policymaking, many decisions necessitate considerations based not only on the epistemic reliability of a specific belief, but also on the tangible consequences involved, often measurable in human lives. Should the state endorse treatments like acupuncture, deemed unscientific by Western biomedicine? Can courts consider evidence provided by a psychic, for instance? Is it justifiable for the state to support research programs considered pseudoscience by the scientific community, such as parapsychology? Similar dilemmas arise on an individual level. Whom should I consult about my health issues – a doctor, a naturopath, or an aura healer? How should I invest my money? Should I buy a water purification system, and if so, which one? In such everyday scenarios, we base our decisions on our – often implicit – commitments to science and pseudoscience. As Mahner puts it,
[these] questions are not just questions of public policy, but also ethical and legal ones, as they may involve fraud or even negligent homicide, for example, if a patient dies because he was treated with a quack remedy. Thus, a case can be made for the need to distinguish science from pseudoscience.
The demarcation problem is also of central importance in education, as demonstrated by the series of lawsuits, mainly in the United States, related to the teaching of creationism. Notably, proponents of “scientific creationism” and “intelligent design” have sought to legitimize their views as scientific because scientific theories, unlike religious ones, can be taught in public schools (see Section 2).
It is perhaps not surprising then that after many years of neglect, the demarcation problem has gained new momentum, with more sophisticated multi-criteria approaches to demarcation being developed. Perhaps the most significant milestone in this process has been Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, edited by Massimo Pigliucci and Maarten Boudry (Reference Boudry, Pigliucci and Boudry2013). This volume, produced in collaboration with philosophers of science, historians of science, sociologists of science, and practicing scientists, contains several important contributions.
4.1 Science as a Cluster Concept: Pigliucci’s Response to Laudan
As discussed in Section 3, Laudan believes that since philosophical attempts at demarcation break down, it is fundamentally wrong to seek any non-sociological demarcation criterion separating science from pseudoscience; and as descriptive approaches are not of much help for principled and reasoned decisions, they should also be abandoned. Pigliucci disagrees with Laudan: He avoids looking for any general necessary and sufficient conditions in the first place, because they inevitably simplify the diversity of scientific practices and pseudoscientific ideas (a point Laudan also admitted). He rejects Laudan’s argument that no principled and reasoned classification of science versus pseudoscience can be provided, particularly because he challenges Laudan’s insistence on establishing such classifications through necessary and sufficient conditions (a similar point was made by Walsh (Reference Walsh2009)). This insistence, according to Pigliucci, risks ignoring the nuanced realities of how these practices evolve and operate within different contexts and disciplines – a complexity that Laudan himself noted in his critiques. His critique extends beyond a mere disagreement with Laudan’s methodology; it amounts to a broader advocacy for a more flexible approach to understanding the distinctions and overlaps between various forms of science and pseudoscience. In his study, Pigliucci distinguishes four categories, emphasizing that this is only a preliminary approach that does not even come close to covering the full diversity of science and pseudoscience.
The first category is what Pigliucci calls established science. This category consists of the normal sciences in the Kuhnian sense, that is, fields with a more or less coherent theoretical and methodological framework on the one hand and a well-established institutional background on the other. These include, for example, experimental particle physics or evolutionary biology, even though the two have little in common in terms of their specific characteristics (object of research, epistemic situation, methods, etc.). By contrast, if one looks at the second category, that is, psychology or economics, these soft sciences do not have the same robust epistemic and methodological characteristics as their established counterparts; still, they are part of the institutional framework of science. The third group consists of research fields on the speculative borderlands of science: These are also part of institutional science but lack either the clear research methodologies or the empirical results of the previous two. Examples include string theory, evolutionary psychology, or the search for extraterrestrial intelligence (SETI). These proto- or quasi-sciences are not yet established sciences, but they have the potential to be, though they may never get there. The last category is pseudoscience, including astrology, scientific creationism, flat-Earth theory, or AIDS denial. According to Pigliucci (Reference Pigliucci, Pigliucci and Boudry2013, 18–19), any attempt to map the sciences and pseudosciences and their interfaces would yield a cluster diagram. The boundaries are blurred, and the transition from one category to another is gradual – but this does not imply that there are no distinguishable categories at all.
Against Laudan’s pessimistic conclusions, Pigliucci maintains that if both science and pseudoscience are heterogeneous, it is natural that there can be no global demarcation criteria, that is, criteria that are universally applicable to all fields of knowledge, as Laudan would require. Given the complexity of the notion of science, why should things designated as scientific necessarily have a list of common properties that would clearly separate them from nonscientific fields? Is it not instead the case that any two things or activities called science will share at least one common property, but that there is no single property common to all sciences? Ludwig Wittgenstein used the term “family resemblance” to describe the nature of concepts with blurred boundaries, an example of which is “game”:
Consider, for example, the activities that we call “games.” I mean board-games, card-games, ball-games, athletic games, and so on. What is common to them all? – Don’t say: “There must be something in common, or they would not be called ‘games’” – but look and see whether there is anything common to all. – For if you look at them, you will not see something that is common to all, but similarities, relationships […] I can think of no better expression to characterize these similarities than “family resemblances”; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. – And I shall say: “games” form a family.
According to Pigliucci, science is a similar cluster concept: It cannot be captured by a finite number of necessary and sufficient conditions because these will necessarily omit specific characteristics that some activities called science have.Footnote 30 It does not follow, therefore, that the problem of demarcation is meaningless, as Laudan claims, but it needs to be reframed. Instead of global criteria – such as those sought by Popper and his followers – applicable to all things science, local criteria for demarcation must be established.
What does this mean? Let’s take seriously the fact that science is an epistemically and methodologically heterogeneous enterprise, the nature of which can be captured by the notion of family resemblance. Obviously, the boundaries between science and pseudoscience will be drawn along different lines in different fields. To take this into account, let us imagine a coordinate system where the vertical axis represents “empirical knowledge” and the horizontal axis stands for “theoretical understanding” (Pigliucci Reference Pigliucci, Pigliucci and Boudry2013, 22–23). In this coordinate system, the more extensive the empirical content of a field, the higher its position on the vertical axis, and the more it is able to place the phenomena under study in a systematic theoretical framework, the more it moves to the right along the horizontal axis.
Looking at the examples of the four categories introduced earlier, the following can be observed: (1) Evolutionary biology is placed at the upper right of the coordinate system, since it has a wealth of empirical material and evidence and allows for a comprehensive, consensual, and systematic theoretical understanding of the subject. (2) Soft sciences such as psychology would be placed in the upper-left corner, since they are empirically rich and have a wealth of evidence but lack a conceptual and theoretical framework that covers all subfields and approaches and is accepted by all practitioners. In other words, if someone were to reject evolutionary theory, they would be ostracized from the scientific community of biologists, whereas psychology does not have a specific, unquestioned, and all-encompassing paradigm, thus leaving greater room for fundamental rejection and skeptical scrutiny. Meanwhile, string theory as a (3) protoscience would be placed at the bottom right of the coordinate system, since it has a high level of theoretical understanding of the phenomena it studies and treats them in an eminently mathematical way, although the hypothetical strings or superstrings themselves can hardly be experienced or tested by empirical means. Lastly, paradigmatic (4) pseudoscientific ideas, such as astrology or creationism, are in the bottom-left corner, since they excel neither in terms of their empirical content nor in their theoretical understanding; they are unable to confirm their ideas with new, independent evidence, to make more accurate predictions, or to do any of the other things typically expected from the established sciences positioned diagonally opposite them.
Of course, each field occupies a larger area of the coordinate system rather than a single point, and the boundaries between them are anything but sharp. Instead of global demarcation, it is therefore more appropriate to speak of local demarcation, and to classify each pseudoscientific field on a case-by-case basis, rather than using a general notion of pseudoscience. Accordingly, we should talk about fringe concepts in the plural: Ufology, cryptozoology, parapsychology, and astrology need not have anything in common by definition. By focusing on the possibility of local demarcation criteria, Pigliucci paints a much more optimistic picture than Laudan. In the end, it seems that there is no graspable essence of science that philosophy can detect by means of conceptual analysis, and it is worth exploring whether other lines of inquiry might yield more useful results.
4.2 Varieties of Pseudoscience
Given the diversity of science, certain approaches employing multiple demarcation criteria stand out, such as those proposed by Mario Bunge (Reference Bunge1983) and Paul Thagard (Reference Thagard1978, Reference Thagard1988). They in turn influenced Martin Mahner’s (Reference Mahner and Kuipers2007, Reference Mahner, Pigliucci and Boudry2013) own multi-criteria approach, which will be examined in more detail in this section.
In a paper on astrology, Thagard (Reference Thagard1978, 227) has written that “a demarcation criterion requires a matrix of three elements: [theory, community, historical context].” Theory refers to the content and structure of the ideas themselves: A scientific theory should be coherent, systematically organized, and capable of making testable predictions. Theories must also be dynamic, adapting and evolving in response to new data and discoveries, rather than remaining static. The community criterion examines whether a theory is supported, scrutinized, and debated by the appropriate community. This involves considering the peer review process and the acceptance of the theory among qualified experts. Community consensus is critical, as it reflects collective validation based on shared methodologies and empirical standards. Finally, historical context involves understanding how the theory has evolved over time in response to the changing technological and methodological landscape. The historical trajectory of a theory can often illuminate its resilience and capacity to incorporate or rebut emerging evidence and critiques. By focusing on these three elements, Thagard emphasizes that the evaluation of science versus pseudoscience is not merely an abstract theoretical exercise but also a practical assessment grounded in societal and historical realities.
Thagard (Reference Thagard1988, 170) later developed this concept further, identifying five criteria for science and, symmetrically, five for pseudoscience (see Table 1).
Table 1 Thagard’s list of characteristics
Science | Pseudoscience |
---|---|
Uses correlation thinking | Based on resemblance thinking |
Seeks empirical evidence | Ignores empirical evidence |
Evaluates alternative theories | Unaware of alternative theories |
Prefers simple, consilient theories | Prefers complex, ad hoc theories |
Progresses by developing new theories | Stagnant in progress and application |
As can be seen, the typical forms of science and pseudoscience in this list are symmetrically opposed to one another. However, these characteristics mainly refer to their theoretical features. According to Thagard, resemblance-based reasoning assumes causality from similarity, whereas correlation-based reasoning infers causality from the co-occurrence of events. Using the example of astrology, he argues that pseudoscience often draws conclusions based on various resemblances, for example, “the reddish cast of the planet Mars leads to its association with blood, war, and aggression” (Thagard Reference Thagard1988, 163), with astrologers viewing such resemblances as indicators of causal influence (in this case, between celestial bodies and personality types or earthly events).
This fundamental difference between science and pseudoscience greatly influences their respective theory structures, relation to empirical evidence, and alternative explanations. The last criterion is progressiveness, a historical feature of science, which is important, as Thagard stresses, because it highlights pseudoscience’s temporal and contextual nature. For historical figures such as Kepler, the study of astrology may have seemed reasonable if they attempted to base it on patterns of correlation (Boner Reference Boner2013), but nowadays, in contrast to various psychological and biological theories about personality traits, it is pseudoscientific. Thagard emphasizes that these are not strictly necessary and sufficient conditions, but the two lists can be used to determine whether a concept, considering its “conceptual profile,” is scientific or pseudoscientific. Contrary to the classical single-criterion demarcationism of Popper, this represents a much weaker multi-criteria approach, as it does not presume a static, timeless concept of science and pseudoscience or any fixed boundaries between them.
From this viewpoint, Laudan and proponents of the multi-criteria demarcation approach find common ground, with their differences being rather metaphilosophical. Laudan claims that due to the historical-sociological nature of science, any attempt at demarcation is doomed to failure, since local contingencies dilute the concepts of science and pseudoscience to such an extent that one is better off discarding these terms from the lexicon entirely. On the other hand, advocates of multi-criteria demarcational approaches generally contend that while demarcation necessarily applies looser, local criteria, this does not justify abandoning the practical problem of demarcation altogether.Footnote 31
In addressing the mode of demarcation, Mahner builds on Thagard’s concepts and advocates for considering entire domains or fields of knowledge, or epistemic fields, as the proper units for demarcation: “an epistemic field is a group of people and their practices, aiming at gaining knowledge of some sort” (Mahner Reference Mahner and Kuipers2007, 523). In this sense, an epistemic field is purely a descriptive analytical unit, meaning that it is not the existence of its object of study that makes something a field of knowledge; for example, even if God does not exist, theology is an epistemic field (Mahner Reference Mahner and Kuipers2007, 523), and even if there are no psi phenomena, so is parapsychology. Furthermore, epistemic fields are “structured hierarchically.” This means that knowledge domains are organized in a layered manner, where some fields are more general and encompass others that are more specific. Biology, ecology, and behavioral ecology are thus all epistemic fields; biology is the most general and encompasses ecology, which can be further divided into smaller units, such as behavioral ecology, population ecology, or community ecology.
According to Mahner, choosing epistemic fields as the basic units of demarcation enables us to account for science’s heterogeneity and temporal dimension. However, in light of this, expecting that any specific characteristic could capture the richness of epistemic fields with their diverse features and hierarchical levels would be unrealistic:
Referring to entire fields of knowledge and to a more comprehensive list of science indicators has the advantage of being able to cover all these aspects and their respective weights. Its disadvantage, however, is that it no longer allows us to formulate short and handy definitions of either science or pseudoscience.
What exactly are these science indicators? Mahner (Reference Mahner and Kuipers2007) adopts Bunge’s ten-element list ⟨C, S, D, G, F, B, P, K, A, M⟩, which defines the family of research fields (Bunge Reference Bunge1983; see Table 2).
Table 2 Bunge’s ten components of science
Component | Description |
---|---|
C – Community | A research community with specialized training, strong information links and a tradition of inquiry |
S – Society | The society hosting the community supports or tolerates research activities, allowing research free from authority |
D – Domain | Deals with concrete entities (past, present, future), including elementary particles, living beings, human societies, or the universe |
G – General outlook or philosophical background | Includes ontological realism (independent outer world), naturalism (natural over supernatural) and principles of lawfulness and antecedence |
F – Formal background | Collection of up-to-date logical and mathematical theories used in the field |
B – Specific background | Up-to-date, well-confirmed data, hypotheses, theories, or methods borrowed from adjacent fields |
P – Problematics | Collection of epistemic questions about the nature and laws of objects in domain D |
K – Fund of knowledge | Growing collection of up-to-date, testable, and well-confirmed knowledge items compatible with background B |
A – Aims | Cognitive aims rooted in basic science, including discovery and use of laws, systematization of knowledge, and refinement of methods |
M – Methodics | Collection of empirical methods or techniques used for data collection or theory testing |
The list encompasses the criteria for being scientific: If any research field approximately satisfies these conditions, it is a scientific research field. Bunge claims that this list helps to identify where a specific field belongs:
Any research field that fails to satisfy even approximately all of the above [ten] conditions will be said to be nonscientific. A research field that satisfies them approximately may be called a semiscience or protoscience. And if, in addition, it is evolving towards the full compliance of them all, it may be called an emerging or developing science. On the other hand any field of knowledge that is nonscientific but is advertised and sold as scientific will be said to be pseudoscientific (or a fake or bogus science). The difference between science and protoscience is a matter of degree, that between science and pseudoscience is one of kind.
Some of these demarcation criteria, or “science indicators,” are normative, while others are descriptive, that is, they signify the scientific nature of a field without any presuppositions about its content of scientific knowledge. Mahner aims to develop an extended typology of epistemic fields, which, following these criteria, offers a more fine-grained division between science and pseudoscience. This typology does not solely cover what Mahner labels “factual sciences” (which belong to the core meaning of the term “science”) but also encompasses all areas that produce reliable knowledge, including the humanities, mathematics, and technology. Therefore, these are epistemic fields that, while not being sciences, still generate reliable knowledge:
The factual and formal sciences, the technologies, and the humanities are all research fields producing genuine knowledge, which on the whole is either (approximately) true or else useful, and contributes to the understanding of the world and its inhabitants. For this reason, one might argue that they should all be included in a broad conception of science. This is for example done in the German intellectual tradition, where the name of almost any field of knowledge is dignified by the ending “-wissenschaft” (-science), including the humanities, which are called Geisteswissenschaften (sciences of the mind). So there is bioscience alongside “music science,” just as there is computer science alongside “literature science.”
Also included among the areas that generate reliable knowledge are ordinary forms of knowledge and crafts, the so-called “technics,” though these fall outside the primary focus of inquiry within the philosophy of science. Therefore, the main dividing line in Mahner’s typology is not strictly between science and pseudoscience, but between fields that produce reliable knowledge and those that create illusory knowledge, merely giving the appearance of truth.Footnote 33 In the following, areas relevant to the scope of this volume will be highlighted.
The particular domain of nonsciences includes those unscientific fields that “explicitly or implicitly, pretend to do science” (Mahner Reference Mahner and Kuipers2007, 544), known as pseudosciences. According to Mahner’s distinction between reliable and illusory knowledge, these areas are nonscientific and produce illusory knowledge. However, a problem arises with this definition, namely, that the typical pseudoscientific fields, which generate unreliable knowledge, are not uniform in their relationship to science: While pseudosciences like parapsychology have fought – or rather used to fight – to be included within the boundaries of socially accepted and supported reliable knowledge-producing science, many pseudoscientific fields do not aspire to this. Fields such as magic, New Age, or tarot reading do not attempt to present themselves as scientific in their methods or goals. Many adherents of these fields either outright reject scientific knowledge and understanding or, at best, consider it incomplete, advocating for some form of holistic or mystical knowledge acquisition.
As Mahner (Reference Mahner and Kuipers2007, 548) points out, “the standard definition of a pseudoscience as a nonscientific field with scientific pretensions does not apply to such areas.”Footnote 34 He introduces the concept of parascience, which encompasses both the standard pseudosciences that claim the epistemic authority of science and the fields that operate in opposition to science. Beyond the examples mentioned, traditional Chinese medicine (TCM) is also considered a parascience. However, its status is not as clear-cut as Mahner suggests, considering that TCM frequently competes for the same resources as Western biomedicine and lays claim to a similar epistemic status, sometimes supplementing its concepts with those of Western medicine.
4.3 The Multidimensionality of Science (and Pseudoscience)
Based on Mahner’s framework, one option would be to set a condition for scientific validity, for example, that at least seven of the ten characteristics must be met. Theoretically, this would allow for a total of 176 possible ways to qualify as a science. At this point, two important considerations should be taken into account. As Mahner (Reference Mahner, Pigliucci and Boudry2013) notes, his list does not mean that there are only these ten criteria. Furthermore, not every combination may actually exist, either due to chance or because specific criteria often occur together. Establishing priority among the criteria and weighing them accordingly might also be worthwhile.
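The figure of 176 follows from elementary combinatorics, on the reading that at least seven of the ten indicators must be satisfied; a quick check:

```python
from math import comb

# Ways to satisfy at least 7 of Bunge's 10 indicators:
# C(10,7) + C(10,8) + C(10,9) + C(10,10) = 120 + 45 + 10 + 1
ways = sum(comb(10, k) for k in range(7, 11))
print(ways)  # → 176
```

If instead exactly seven criteria had to be met, the count would be only C(10,7) = 120, which is why the threshold reading matters for the total.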
One of the earliest published attempts to apply a multi-criteria approach to demarcation is Fred J. Gruenberger’s 1964 Science article, “A Measure for Crackpots.” This approach shares similarities with the more contemporary criteria proposed by Martin Mahner, among others. In his pioneering work, Gruenberger introduced a point system to evaluate claims scientifically, quantitatively measuring various attributes that distinguish scientific from pseudoscientific work. Key attributes in his checklist include public verifiability, predictability, controlled experimentation, use of Occam’s razor, fruitfulness, and open-mindedness. Each attribute is assigned a specific weight, reflecting its importance in the scientific validation process. This structured and quantitative approach set a precedent for later frameworks like Mahner’s, which also aim to comprehensively and systematically evaluate the scientific validity of various disciplines and claims.
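Gruenberger’s point system can be sketched as a weighted checklist. The attribute names below follow his list as summarized above, but the weights, the rating scale, and the scoring function are illustrative placeholders, not his actual point values:

```python
# Illustrative weights only -- Gruenberger assigns his own point values,
# which are not reproduced here.
WEIGHTS = {
    "public_verifiability": 12,
    "predictability": 12,
    "controlled_experimentation": 13,
    "occams_razor": 5,
    "fruitfulness": 10,
    "open_mindedness": 8,
}

def score(ratings):
    """Weighted sum of attribute ratings, each rating in [0, 1]."""
    return sum(WEIGHTS[a] * ratings.get(a, 0.0) for a in WEIGHTS)

# A claim fully satisfying every attribute receives the maximum score.
full_marks = score({a: 1.0 for a in WEIGHTS})
print(full_marks)  # → 60.0
```

The design point is simply that a weighted sum, unlike a single pass/fail criterion, lets different attributes contribute to scientific standing in different degrees.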
Fernandez-Beanato (Reference Fernandez-Beanato2020) argues for the need to identify clear cases, in both the realms of science and nonscience, to account effectively for the multidimensionality of science. The specific cutoff values that determine whether an epistemic field can be deemed scientific can be established based on the clear cases with the lowest values. This implies that a domain can be scientific or nonscientific in various ways, aligning with the actual heterogeneity and temporal dimension of science and pseudoscience. This approach acknowledges established sciences that may not fully or only partially meet certain epistemic and methodological criteria due to characteristics like their subject of study. Take, for instance, the historical sciences. A paleontologist cannot conduct experiments in the same way as a zoologist studying contemporary living creatures, thus only partially fulfilling the condition of reproducibility, which is important in many sciences. On the other hand, the school of parapsychology associated with Joseph Banks Rhine, flourishing in the 1930s and 1940s, endeavored to investigate various parapsychological phenomena (such as telepathy, clairvoyance, or psychokinesis) using strict experimental protocols, analyzing data with complex statistical methods, and publishing results using standardized scientific quality control mechanisms. By some criteria, academic parapsychology was considered scientific, but by others, it wasn’t because – in the Kuhnian sense of protoscience – it lacked the clearly defined problems needed for the puzzle-solving of normal science.
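Fernandez-Beanato’s proposal of deriving cutoffs from the lowest-scoring clear cases can be rendered schematically. The fields, dimensions, and numbers below are invented for illustration and are not taken from his paper:

```python
# Hypothetical scores for uncontroversially scientific fields on two
# of the dimensions discussed above (all values are invented).
clear_sciences = {
    "evolutionary biology": {"empirical": 0.9, "theoretical": 0.9},
    "paleontology":         {"empirical": 0.8, "theoretical": 0.7},
}

# One cutoff per dimension: the lowest value found among the clear cases.
dimensions = ["empirical", "theoretical"]
cutoffs = {d: min(f[d] for f in clear_sciences.values()) for d in dimensions}

def meets_cutoffs(field_scores):
    """A field counts as scientific if it reaches every dimensional cutoff."""
    return all(field_scores[d] >= cutoffs[d] for d in dimensions)

print(cutoffs)  # → {'empirical': 0.8, 'theoretical': 0.7}
```

Because each dimension gets its own cutoff, a field like paleontology can anchor a low threshold on one dimension (e.g., reproducibility-related criteria) without lowering the bar everywhere, which is how the approach accommodates the heterogeneity of the established sciences.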
4.4 Types of Demarcation and the “Naturalization” of Pseudoscience
The multi-criteria approaches summarized here could offer an extension to Popper’s original solution to the demarcation problem, incorporating a broader perspective. However, it is important to note that some elements of this approach were already present in earlier works, such as those of Popper (see Section 5). Thus, multi-criteria approaches should be understood as a refinement of the demarcation problem rather than as a linear progression.
Firstly, as has been shown, introducing multiple demarcation criteria can address the theoretical and methodological heterogeneity of the various fields called science. It is not necessary for all sciences to resemble each other in every aspect, or to share one or a small number of characteristics. While the methods of two given fields might be entirely dissimilar because they focus on very different entities and phenomena, they can be considered sciences if they share several other criteria.
Secondly, this approach can handle the temporal dimension of science: There need not be any timeless criteria that represent the nature of science in an essentialist manner. These criteria can be time-indexed, which does not preclude the possibility of robust features of science being valid over a long period and across many fields. As such, philosophical analysis acknowledges a fundamental insight from the history of science, namely, that our current conceptions of science and scientific concepts cannot be projected onto the science of the past; a nuanced understanding of the historical context and the epistemological norms prevalent at the time is required, rather than imposing modern criteria directly.
Thirdly, borderline cases are less problematic for multi-criteria approaches. Within a multi-criteria framework, it would be entirely feasible to require that all criteria be satisfied for a discipline to be classified as scientific, thereby establishing a demarcation as definitive as one based on a single criterion. However, as we have seen, this perspective also allows for the possibility that not all criteria must be strictly met, offering a framework that can handle ambiguities more flexibly than approaches relying on a single criterion. In certain instances, fields that are typically considered pseudoscientific, like parapsychology or cryptozoology, may satisfy many of the criteria of being scientific; conversely, the borderline areas of accepted science, such as the search for extraterrestrial intelligence, are sometimes labeled as fringe science. This can be connected to the second point, namely the temporal dimension of science: A borderline field, as a protoscience, may transition into normal science through certain discoveries or theoretical developments.
It appears that multi-criteria approaches can address issues that classical single-criterion demarcation cannot, even if they do not establish the clear and sharp boundaries that some might desire. However, Maarten Boudry (Reference Boudry, Pigliucci and Boudry2013, Reference Boudry2022) claims that while multi-criteria approaches are valuable in many respects, they blur the distinction between pseudoscience’s central and secondary characteristics. He argues for distinguishing between two forms of demarcation: normative and territorial. What are these two demarcation problems? According to Boudry, in his early work, Popper understood demarcation in a fundamentally descriptive, classificatory way, aimed at differentiating between the empirical sciences and mathematical, logical, and metaphysical systems – this is the territorial aspect of the question. However, in Popper’s later writings, when he sought to draw a line between science, such as Einstein’s theory of relativity, and pseudoscience, like Freudian psychoanalysis or Marxism, the demarcation problem appeared in a normative sense, offering rational justification, and granting degrees of credence (Boudry Reference Boudry, Pigliucci and Boudry2013, 81) – this is a normative reading of the demarcation problem. Even if territorial boundaries cannot be established, this has no implications for the normative question of whether one can effectively draw boundaries in different practical contexts between epistemically reliable fields and fields producing unreliable, illusory knowledge. Boudry suggests that by separating the two demarcation problems and adopting a “naturalized” approach to pseudoscience, the second problem, that is, the question of normative demarcation, can be addressed. 
In this context, “naturalized” means shifting from abstract philosophical definitions to an empirical analysis of pseudoscience by examining the observable behaviors, practices, and defense strategies employed by pseudoscientific fields. This approach emphasizes practical, real-world characteristics rather than theoretical boundaries.
What does the naturalization of the concept of pseudoscience entail? As discussed in the introductory section, pseudoscience can be placed on a continuum. Michael Gordin (Reference Gordin2023, 102–104) emphasizes that pseudoscience exists in the shadow of science: Where there is a pseudoscience, there must be a science to which it relates and in relation to which it can actually be discussed. “Pseudoscience,” as a relational term, only has a meaning in relation to science (Hecht Reference Hecht, Kaufman and Kaufman2018). Building on Sven Ove Hansson’s definition, one can speak of a kind of cultural mimicry in the case of pseudoscience: (1) It is broadly related to a question arising in the field of science, (2) it is an epistemically unwarranted field, (3) and it attempts to create the impression that it is epistemically warranted (Hansson Reference Hansson2009, Reference Hansson, Pigliucci and Boudry2013). According to Boudry, regardless of whether one has a general concept of epistemic warrant, one can identify the characteristics that pseudoscience uses to create the impression of epistemic legitimacy. These include immunizing strategies and epistemic defense mechanisms that protect the idea against contradictory evidence and critical scrutiny.Footnote 35
Boudry’s concept, therefore, does not directly focus on the epistemic characteristics of pseudoscience, but indirectly on how advocates of a pseudoscientific idea handle these epistemic defects. “While there are myriad ways in which a theory can fail to be epistemically warranted, there are comparatively fewer ways to create a false impression of epistemic warrant, and these ways are largely similar across different fields of inquiry” (Boudry Reference Boudry2022, 96). These methods, such as misusing scientific terminology, appealing to authority, and claiming persecution, are designed to mimic the appearance of scientific credibility by appealing to the general expectations of what constitutes legitimate scientific inquiry, rather than engaging with the substantive methodological and empirical frameworks specific to each discipline. Boudry’s analysis suggests that while the content and context of pseudoscientific theories can vary widely, the core strategies employed to establish false legitimacy are significantly similar, exploiting the broader characteristics of science, rather than the nuances of particular scientific fields. Even though philosophers of science do not possess an essentialist definition of science, a naturalized approach allows for a description of pseudoscience that outlines its central characteristics.
Therefore, Boudry does not reject multi-criteria approaches; rather, he approaches the problem of pseudoscience from a different direction. The task for the philosopher of science is not to delineate pseudoscience from science – no matter how numerous or weighted the criteria may be – and thereby solve the problem of demarcation. Instead, the focus should be on uncovering the common characteristics of particular cases considered pseudoscientific. Boudry implicitly relies on a form of normative basis by focusing on how pseudoscientific proponents act to evade criticism and create the facade of legitimacy. His approach does not reject all normative criteria but refines them by prioritizing empirical observations of strategies rather than starting from rigid, abstract definitions of epistemic warrant. The normative problem of demarcation ultimately dissolves in this naturalistic approach, which can empirically distinguish between primary and secondary properties. Boudry’s naturalization program for the demarcation problem in the philosophy of science seeks to shift the discourse from a purely philosophical and normative framework to a more empirical and practical approach. This involves analyzing the actual behavior, practices, and strategies of both science and pseudoscience within real-world contexts, rather than relying solely on abstract philosophical criteria to distinguish between them. According to Boudry, such an empirical focus allows a naturalistic analysis to identify the “primary,” or essential, characteristics of pseudoscience – the universal tactics used across pseudoscientific fields to present themselves falsely as legitimate – and to distinguish these from incidental, “secondary” properties: the specific, context-dependent traits that may vary among pseudosciences but are not necessary for something to be identified as such.
The distinction between primary and secondary characteristics is that primary ones (like evasive tactics) are widely applicable to all pseudosciences, while secondary characteristics are more contextual or incidental, varying from one pseudoscience to another without affecting the field’s core pseudoscientific nature.
4.5 Gradations, Scales, and Degrees
The systematic attempt to come up with and evaluate multiple criteria is relatively new; it emerged in the 1970s, though it had its origins in the 1950s (see Langmuir (Reference Langmuir and Hall1989), originally delivered in 1953; Fasce Reference Fasce2017). If we allow that there are multiple criteria, and that a given theory or belief purporting to be scientific may vary as to which criteria it meets, it also becomes more explicit that we are dealing with a highly variable, constantly changing problem, with possible transitory stages before an idea descends to the level of pseudoscience. This also illustrates that not every theory or belief that is dismissed from science immediately and automatically becomes pseudoscientific. Recently, some people have therefore started to talk about scales or a continuum.
In the practical context, David B. Resnik and Kevin C. Elliott (Reference Resnik and Elliott2023) have argued that a strict demarcation (between illegitimate and legitimate value influences; see Section 5) misses very important aspects of how science is done and used in policymaking, courtrooms, and laboratories. For them, there is no strict line dividing good from bad science, but a scale on which each theory and practice must be situated individually. “In practical contexts,” they write,
the important question is often “is this experimental finding, analysis, model, or study good enough science?” In a court of law, for example […], a judge must decide whether a research study performed by a professional scientist is good enough (e.g., unbiased, supported by data, reproducible) to admit into evidence. In drug regulation […], a government advisory committee must decide whether a study published in a scientific journal is good enough to use in deciding whether to approve a drug for marketing.
A very narrow pursuit of the ultimate criterion would either run into serious difficulties or require ad hoc adjustments to account rationally for the usability of particular research in a practical context (like a court of law) rather than in a more theoretical setting. According to Resnik and Elliott (2023, 264), “Gradation poses problems for using necessary and sufficient conditions to define science, because questions about gradation call for nuanced answers based on degrees of conformity to certain standards.”
The idea itself is not new. It most recently originated with Resnik (2000) himself, who developed a so-called pragmatist approach to demarcation; it is very much in line with the multi-criteria theories, since it argues that the science–pseudoscience (or nonscience) distinction “depends, in part, on specific practical concerns” and is utilized in the “context of making practical decisions and choices” (Resnik 2000, 258). But already in a lecture delivered in 1973 – one that remained unpublished for decades – Lakatos tried to re-pose and reformulate the aims and dimensions of the demarcation problem:
The demarcation problem may be formulated in the following terms: what distinguishes science from pseudoscience? This is an extreme way of putting it, since the more general problem, called the Generalized Demarcation Problem, is really the problem of the appraisal of scientific theories, and attempts to answer the question: when is one theory better than another? We are, naturally, assuming a continuous scale whereby the value zero corresponds to a pseudo-scientific theory and positive values to theories considered scientific in a higher or lesser degree.
Lakatos’ scale was itself preceded by Popper’s continuum: Popper spoke about different levels of testedness and potential testability. In Conjectures and Refutations, he wrote that there are “degrees of testability”; that is, certain theories have more precise formulations, and thus more precise numerical predictions can be deduced from them. “This indicates,” says Popper, “that the criterion of demarcation cannot be an absolutely sharp one but will itself have degrees. There will be well-testable theories, hardly testable theories, and non-testable theories,” where the latter “may be described as metaphysical” (Popper 1963/1969, 257).Footnote 36
Multi-criteria approaches can also highlight the mixed categories that may exist between clear cases of science and pseudoscience. Thus, these two are not binary, but constitute a diverse terrain that involves various gradations and overlaps. These diverse categories underscore the complexity of evaluating scientific claims and theories. At this point, we only want to present these categories very briefly; a detailed analysis and a comprehensive categorization are beyond the scope of this Element.
Junk science is characterized by corruption and bias, as it is often manipulated to support specific political or social agendas (Agin 2006).Footnote 37 This form of science selectively presents data and scientific results to support a predetermined conclusion, commonly seen in public policy and legal contexts. Typical examples of junk science include the elaborate and premeditated activities of the tobacco and fossil fuel lobbies, explicitly designed to deceive and baffle all involved (Oreskes and Conway 2011). Bad science refers to misapplications of scientific methods and techniques, where errors in methodology or interpretation lead to unreliable and sometimes misleading conclusions. This category can include well-intentioned, but poorly executed, studies that fail to adhere to rigorous scientific standards.
Fraud science involves deliberate distortions and is often associated with nonreplicable research that is intentionally misleading. This category includes studies where data may be fabricated or manipulated to achieve desired outcomes, frequently in the service of specific financial or personal gains. Ben Goldacre (2009, 2012) has documented, in a detailed and comprehensive account, all the fraud and distortion taking place in the medical sciences and on their fringes, including complementary and alternative medicine (CAM).
Fringe science, a term often used to describe theories at the margins of accepted scientific paradigms, has been a subject of interest among scholars and sociologists since the 1970s. They have explored how these theories are constructed and gain acceptance, if not consensus, within specific communities, despite their deviation from mainstream scientific norms (Collins, Bartlett, and Reyes-Galindo 2017). While sometimes leading to significant breakthroughs, fringe science can also veer into less empirically grounded territory, making it a fertile ground for both innovation and controversy. Such work often challenges orthodoxy, stimulating new research or even the establishment of entirely new fields.
This dynamic is illustrated by the concept of pathological science, introduced by Irving Langmuir in 1953, where subjective biases and wishful thinking mislead scientists into believing in nonexistent phenomena. In pathological science, says Langmuir (1989, 43), “there is no dishonesty involved.” However, people are somewhat “tricked into false results” due to a lack of a deep and comprehensive understanding of the ways one can be “led astray by subjective effects, wishful thinking or threshold interactions.”
Heretical science, a term coined by space scientist Peter Sturrock (1988), represents theories that defy established scientific norms, potentially leading to significant breakthroughs despite facing substantial skepticism and resistance. He argues that some revolutionary ideas are supported outside the scientific community, posing significant intellectual and sociopolitical challenges; widespread skepticism could destroy entire research fields, diminishing funding, prestige, and the ability to influence theoretical and policy debates (Sturrock 1988, 108). Robert Park’s (2000) concept of voodoo science blends junk and fringe science characteristics, where scientific rhetoric is employed without adhering to the methodological and ethical standards expected in conventional science. It often involves claims that are not testable by scientific methods or repeatedly fail such tests.
It is important to emphasize that these terms not only have different meanings but also differ on a meta-level. Often, as in the case of heretical and voodoo science, practicing scientists introduce these terms, usually based on their own field’s experiences, typically in physics and medicine. In a stricter interpretation, these can be read as “typical scientist/skeptic rationalist accounts of mistaken and irrational science and of pseudoscience” (Collins, Bartlett, and Reyes-Galindo 2017, 413), presented in a fundamentally naive way that lacks serious sociological or philosophical analysis.
That said, it was not our aim to organize these partially overlapping and partially incompatible concepts; we merely wanted to highlight the terminological diversity and present a possible rendering of the scale or continuum that resurfaces from time to time in the literature – including the literature that deals with practical contexts. But speaking of practical contexts, the old demarcation problem might be usefully transformed and aligned with the recent so-called “new demarcation problem” of values in science.
5 Demarcation through Values
Popper is known not just for his falsificationist proposal about demarcation, but also for his disagreement with and criticism of logical positivism. In our context, one of the most important points of departure is Popper’s attitude toward metaphysics. Most members of the Vienna Circle identified their principle of demarcation (i.e., verification) with a criterion of meaningfulness, and when they rejected metaphysics as non-verifiable, they thus also declared it to be strictly and literally meaningless.Footnote 38 Popper (1959/2002, 16) disagreed, because after demarcating metaphysics from science, he did not want to “assert that metaphysics has no value for empirical science,” but rather held that metaphysical theories contribute in various ways to the development of science. They can provide problems for scientists to work on, aid research, or keep old problems on the surface until they reach the level of empirical integration. “I am inclined to think,” he wrote, “that scientific discovery is impossible without faith in ideas which are of a purely speculative kind, and sometimes even quite hazy; a faith which is completely unwarranted from the point of view of science, and which, to that extent, is ‘metaphysical’” (Popper 1959/2002, 16). He later illustrated this point with the following examples:
[M]ost of our scientific theories originate in myths. The Copernican system, for example, was inspired by a Neo-Platonic worship of the light of the Sun who had to occupy the “centre” because of his nobility. This indicates how myths may develop testable components. They may, in the course of discussion, become fruitful and important for science.
Further examples included atomism and the corpuscular theory of light (Popper 1959/2002, 277–278). Because of this, such metaphysical ideas were not “non-sensical gibberish” (Popper 1963/1969, 257) but played important roles.
What happened here is basically that Popper identified some epistemic and non-epistemic functions of metaphysical theories. From the 1950s on, he talked about “pseudoscience” instead of “metaphysics,” which could suggest that pseudoscientific theories also have some form of epistemic function (they could aid, direct, and even organize actual empirical research), and some kind of non-epistemic role as well (they might motivate research and solidify moral, religious, and ideological motives that could be transformed in time). Although Popper is interpreted as dividing things into science, metaphysics, and pseudoscience – science being accepted, pseudoscience rejected, and metaphysics quarantined for the time being – he also admitted that even such a metaphysical theory as psychoanalysis could have important uses.Footnote 39 “I personally do not doubt,” wrote Popper (1957/1969, 37) about the theories of Freud and the psychologist Alfred Adler, “that much of what they say is of considerable importance, and may well play its part one day in a psychological science which is testable.”Footnote 40 Furthermore, “being scientific” and “being true” are not coextensive terms. Popper (1957/1969, 33) added that “science often errs, and that pseudo-science may happen to stumble on the truth,” and that psychoanalysis is “an interesting psychological metaphysics (and no doubt there is some truth in it, as there is so often in metaphysical ideas)” (Popper 1974b, 985). Even if something cannot be falsified, one cannot exclude the possibility that it is still true for some reason, and thus we cannot declare it to be meaningless. Further, such theories could have practical applicability and relevance.
This is a remarkable point, and Popper (1963/1969, 258) was “forced to stress” it repeatedly, as his position was often taken to be a sharp theory of meaning that rejects all metaphysical and pseudoscientific theories as meaningless and thus entirely useless. “I thus felt that if a theory is found to be non-scientific, or ‘metaphysical’ (as we might say), it is not thereby found to be unimportant, or insignificant, or ‘meaningless’, or ‘nonsensical’” (Popper 1957/1969, 38). In fact, Popper’s suggestion about the different roles of metaphysics/pseudoscience converges nicely with some recent developments within the “science and values” discourse that relate to the demarcation problem as well. As discussed in the previous sections, the complexity and the historical and sociocultural embeddedness of science necessitate the weighing of multiple criteria. Many people also argue that temporarily persistent boundaries can only be drawn locally and on a case-by-case basis, while keeping an eye on the cultural and social context from which ideas emerge in the first place. Since values are an organic part of local cultures, one might be tempted to take values into account as well when it comes to demarcation.
5.1 Motivating the Introduction of Values
In fact, values have been around in the literature for a long time. The reader has surely come across the idea (defended by scientists and some philosophers as well) that science ought to be value-free, that it is independent of so-called non-epistemic values, that is, moral, political, social, and religious values – things that come from the public and not from the internal logic of science. Furthermore, it is precisely this independence from values that is supposed to ensure that science is able to focus on truth, and only the truth, in an objective and neutral way, without letting politics and other such contingent worldly matters invade its territory. Pseudoscientific theories, on the other hand, are anything but value-free – they are biased, non-neutral, and their “data” are shaped by nonscientific concerns that include deception, fraud, unjust enrichment, a heavily emotional take on the world, and strong financial interests. Pseudoscientific ideas diverge from the sciences exactly at those points where they leave the value-free path, due to political, financial, economic, or other subjectively biased reasons.
But could science really escape the influence of values? Numerous philosophers and scholars argue that this is not the case, and that in principle good science cannot be separated from bad science, or from pseudoscience, purely by referencing the value-free ideal. This section will thus look at the “new demarcation problem” in the current literature on science and values. Though this question was originally meant to demarcate legitimate and illegitimate value influences, it can easily be related to the problem of science and pseudoscience (see below).
5.2 Value-laden Science
The “science and values” literature is one of the most complex and debated areas in contemporary philosophy of science, and includes considerations related to ethics, philosophy of law, race, feminism, epistemology, history, sociology, cultural studies, science policy, and philosophy in general. But to continue to focus on the problem of demarcation, only some basic issues and approaches will be discussed at this point (for further reading, see Douglas 2009; Elliott and Steel 2019; Elliott 2022).
The question nowadays is not really whether science is value-free; there is almost a consensus among participants of the debate that it isn’t value-free, and some even argue that it shouldn’t be in the first place (Douglas 2009). It is customary to differentiate between two large groups of values, scientific and nonscientific – or, in other words, epistemic values that lead to knowledge and truth, and non-epistemic values. There is no agreement among philosophers and scientists as to exactly how values are divided between these categories, or how sharp the boundaries are. One would not be going too far to claim that the scientific, epistemic values include precision, systematicity, and predictive and explanatory power, and perhaps even simplicity (values that relate to the method of science; see Kuhn (1977) and McMullin (1982); for a recent, more detailed elaboration, see Schindler 2018). Accordingly, epistemic values like systematicity, explanatory power, precision, and testability are said to be indicators of truth, or to contribute to the attainment of truth (see Steel 2010).
Although there is much argument among scientists about the significance of individual epistemic values and how to measure and weigh them against each other in conflicting cases, these values can usually be separated from non-epistemic ones, such as economic development, the public good, public health, environmental protection, social justice, the “good life,” the existence of God, or any other political value. According to the defenders of the value-free ideal of science, scientists should simply ignore non-epistemic values. They should choose the procedure or theory that brings them closer to the truth – for example, the one with greater predictive power – and not the one that better protects the environment. Even if one admits that scientists are occasionally influenced by non-epistemic values, on this view these are clearly erroneous cases in which the scientificity of science is damaged. By sticking to some basic principles and procedures (defined by the scientific method), scientists could avoid the harmful and misleading influence of non-epistemic values and follow the path marked by pure facts, data, and rational methods.
Some have argued, however, that non-epistemic values should play a role (Douglas 2009; Elliott 2022) to make science responsible and relevant to public tasks and concerns. Consider the interpretation of collected data (e.g., the principles of data selection and the standards of admitting/rejecting sources of data), the choice of methods (e.g., opting for genetically modified organisms or for more ecological and traditional forms of agriculture), the design of experiments (e.g., incorporating ethical considerations about animals and humans or deciding whether to use 3D-printed tools), or the decision-making process before any action is taken (e.g., when to stop a trial and release a vaccine). All these elements of science present substantial cases where non-epistemic values (of a moral, economic, comfort-related, social, financial, or political nature) could, do, or should affect a given decision. Just think about the safety of vaccines or the theoretical question of whether human life is possible on Neptune. As Douglas (2000) and many others argue, we need significantly more evidence to accept that a given vaccine is safe than to claim that parts of Neptune’s environment are conducive to human life, simply because the risk of a false positive (asserting that the vaccine is safe while in fact it is not) is more damaging than falsely saying that Neptune is habitable (while it is not). The range and level of evidence required to draw a conclusion is not determinable purely by cognitive arguments; it calls for ethical and other values, since the worse the possible consequences (of falsely accepting a theory), the higher the standard and level of acceptance should be (for multiple examples and case studies, see also Elliott and Richards 2017).
Although there is no consensus among philosophers of science regarding each of these examples, most agree that the value-free image of science in its original and extreme form is untenable. As Douglas (2009) has shown, it was an idea created at a specific point in the history of science, which has ever since been defended by many to delegitimize social concerns and values that are essentially parts of scientific practice.Footnote 41
5.3 The New Demarcation Problem
The question remains, however, how the character and range of these value-influences can be assessed. In 2022, Bennett Holman and Torsten Wilholt published an influential paper titled “The New Demarcation Problem.” The authors note that many people are confused, not knowing how to proceed and what to do with the science and values discourse, which has seemingly been settled in favor of those advocating for the value-ladenness of science. However, according to Holman and Wilholt, some of the most challenging questions arise precisely at this point:
Most centrally, that some values must, at times, play some role, does not entail that anything goes. There remain clear cases of biased science that cross a line between the inevitable management of epistemic risk in the light of value judgements and an inadmissible distortion of inquiry.
Thus, the literature still owes us an answer as to how we can separate the wheat from the chaff, that is, how to tell whether a given case of non-epistemic value-influence is legitimate or not. Is there a principled way to claim that this or that effect of values actually distorts the scientific machinery in such a way that it might damage the integrity of science? Or, to put it in more normative and simplistic terms, the issue comes down to the general worry of how to differentiate between legitimate (good) and illegitimate (bad) value-influences.
This issue of drawing the line, of marking the boundary, is the “new demarcation problem” to which Holman and Wilholt allude in the title of their paper. Although they do not explicitly refer to Popper or the demarcation of pseudoscience from science, the phrase inevitably suggests the old demarcation problem, as is made evident by Resnik and Elliott (2023) and Koskinen and Rolin (2022), who themselves explicitly compare and relate the two demarcation problems. In fact, as we have seen in Section 4, both multi-criteria and pragmatist approaches of scales and grades suggest that instead of searching for a universal necessary and sufficient condition to distinguish science from pseudoscience, one should settle for pointing to better and worse renditions of scientific practices, heavily influenced by practical concerns. Thus, if both science and pseudoscience are value-laden, we have to find our way around different versions of value-influences and demarcate better and worse effects. Or to put it differently: How can scientists pursue research based on the socially determined values within science without ending up on the fringe doing junk, fraud, or voodoo science?
5.4 Assessing Values: Strategies in the Literature
Holman and Wilholt distinguish five strategies in the existing literature on the subject. The (1) axiological approach is based on values themselves; by identifying and agreeing in advance on an appropriate set of values, any influence shaped by those values will be legitimate. For example, in the context of creationism, it is evident that religious and political values explicitly manifest in the arguments of its proponents (see Forrest and Gross 2007), although it is less evident when, how, and how justifiably one introduces such values, with such a huge impact, into the scientific edifice. Creationists frequently present their findings as scientifically valid, purely rational, and value-free results of critical thinking and remain silent about non-epistemic value-influences.
In a certain way, this strategy does not in itself seem to be truly helpful, though, given the difficulty of determining what counts as an “appropriate set of values” – which is already one particular reading of the original question. References to “the needs of society” (Kourany 2008, 95) or ideal discussion situations (Kitcher 2001) provide a clue, but in many cases, especially those involving pseudoscience and areas where science tends to break down more often, “the needs of society” are difficult to determine. Frequently, for example in the case of vaccine hesitancy, people have legitimate needs and values (safety, uncertainty, mistrust), and the situation is actually reversed: Science must find a way to handle those values in an appropriate, but still pro-science, way (Goldenberg 2021).
(2) Functionalist demarcation strategies look at the ways values influence science, arguing basically that certain modes are permissible, while others are not. The most well-known differentiation comes from Heather Douglas, who distinguishes between direct and indirect uses of values. Values play an indirect (and legitimate) role when they determine the strength of the evidence needed to accept a certain conclusion, whereas they play a direct (and illegitimate) role when they “act as reasons in themselves to accept a claim” (Douglas 2009, 96).
In creationism, religious values play an illegitimate role, as they are directly used as reasons for accepting propositions about the natural world, contradicting functionalist criteria. For example, young Earth creationism asserts that the Earth and all life forms were created approximately 6,000–10,000 years ago, based on literal interpretations of the Bible’s Genesis account. This age is derived by tracing biblical genealogies, not through geological or cosmological evidence, which points to a much older Earth. Here, the religious value of scriptural inerrancy – taking religious texts as definitive and literal accounts of natural history – serves as the primary justification for beliefs about the Earth’s age.
(3) There are also so-called “consequentialist demarcation strategies,” which aim to capture not the way values influence science, but the consequences of that influence. As Holman and Wilholt (2022, 213) admit, it is often hard to see the difference between consequentialist and axiological approaches due to the entanglement of values and their consequences; but this strategy could still focus on whether a specific value and its influence do or do not yield the required result. Some argue, for example, that these consequences should coincide with the democratically endorsed aims of scientific research. Thus, if certain values upset the democratic foundations of science, then those values exert an illegitimate influence on scientific practices. Yet again, this leads to a more basic discussion of how the search for truth and the democratic ideal constrain each other. Creationism is not merely a set of propositions about the natural world but also a political movement with substantial societal objectives. The leaked “Wedge Strategy” from the Discovery Institute’s Center for Science and Culture – an anti-modern, anti-secular, and anti-rationalist framework – rejects the intellectual and political paradigms that emerged during the Enlightenment. This political manifesto proposes a well-thought-out, three-step strategy to occupy and divert public education and science, challenging the structures responsible for knowledge creation in modern societies, as well as the proper functioning of democratic institutions.Footnote 42
The fourth strategy is called (4) “coordinative.” According to this view, we should look “at whether value-laden methodological choices are appropriately aligned with the expectations that are placed on them by others” (Holman and Wilholt 2022, 213), because in many cases, the audiences’ expectations do not match the way values have influenced the research. Such breakdowns of expectations include cases in which scientists violate or dissent from established conventions of risk management. However, to maintain trust in the collective of science (that no one will individually violate the rules in an ad hoc manner), scientists have to coordinate their values, expectations, and unconventional moves. While in this case coordination is called for among scientists, one might also include the public. After all, members of the public have expectations and some knowledge of how science evolves, makes decisions, and produces knowledge; but as science is flexible in so many ways, with different contextually defined and evolving strategies, coordination is required between scientists and the public.
Put differently, science always has to balance the essential tension between individualist revolutionaries and collectivist traditionalists, and if scientists change camps, they have to provide good reasons to both the scientific community and the public. Creationism’s aim is often to foster an illusion among the public that there is a serious debate about the core idea of evolution, and that the theory of evolution is inadequately supported. Creationists attempt to amplify and reframe debates within evolutionary science and use out-of-context quotations from prominent scientists to support their views, thus creating an illusion of substantive debate among non-experts (see Section 2.2). There is a true “mismatch” (Holman and Wilholt 2022, 213) between the actual research (biased, overinterpreted, or far-fetched) and what the audience takes it to be (a legitimate and warranted scientific dissent and debate).
The final approach is called (5) the “systemic strategy” because it focuses on the community, rather than on individual scientists and their individual projects. Holman and Wilholt (2022, 214) summarize the view of social epistemologists as follows: “biases and distortions on the individual level are unavoidable, and that serious epistemic damage only arises at the social level, when the community as a whole lacks the structure to critically address them and to sustain the mechanisms of self-correction.” Creationism does not align with the scientific community’s standards and practices, and therefore fails to meet systemic criteria, as it lacks effective quality control mechanisms like peer review. Creationism often exemplifies this systemic failure when it mimics scientific formats – such as using scientific titles and participating in conferences – without truly engaging in the rigorous processes of science.
Even at first glance, it seems rather obvious that all these strategies are connected at various levels and via vague concepts (aims, consequences, expectations, and even values). Thus, until further investigations have been conducted, we should not expect that one strategy will be able to account for all the possible cases of value influence in the same measure. As Holman and Wilholt (2022, 214) themselves admit, these are not necessarily competing views (meaning that we don’t have to choose to work with just one of them) – in fact, “it may be the case that a satisfying solution combines more than one of these ideas.”
6 Debunking, Debating, and Attitudes
In the previous sections, we presented numerous arguments and considerations that all point to a somewhat disturbing picture of science; disturbing because of its consequences, namely that science and pseudoscience cannot be demarcated by appealing to a rigid, universal, single criterion. Science has always had many faces, goals, diverse local cultures, institutional weaknesses, and continuously changing values of practice. Therefore, even multi-criteria approaches struggle to grasp anything more permanent. All these different approaches face difficulties in finding principled reasons that could support rational choices on a normative level, beyond a sociological-descriptive perspective.
Consequently, in this last section, we will briefly take a look at another Popperian suggestion that moves the discussion from method to attitude; but first, we will review how “debunking” and “understanding” pseudoscience used to be differentiated.
6.1 The Cop, the Skeptic, and the Scientist
Once the philosophical community had recognized some of these difficulties in the 1980s, many of its members turned to issues other than the seemingly unsolvable question of demarcation. The field was thus often taken over by skeptical scientists, popular writers, and all those who stuck to an imagined picture of science where it was supposedly possible to identify the realm of pseudoscience objectively, rationally, and neutrally in the name of the empirical method. Finding logical reasons, well-established data-driven arguments, and supposed holes and gaps in a specific theory were all common tools of those who aimed to “debunk” dangerous theories. The physicist and admitted skeptic Michael Friedlander, author of one such general debunking-oriented work, was once described as someone who “plays science cop, patrolling and protecting its frontiers not just from pretenders to its authority over nature but from those outside science who mistakenly believe that they too have warrant to decide for themselves which science is good or bad, real or pseudo” (Gieryn Reference Gieryn1996, 768).
Playing cop in such situations is not atypical; in the 1970s, Paul Kurtz formed the Committee for the Scientific Investigation of Claims of the Paranormal, with the assistance and participation of such science luminaries as Carl Sagan, Martin Gardner, James “The Amazing” Randi, and Isaac Asimov (Gordin Reference Gordin2023, 73–74; Pigliucci Reference Pigliucci2023). They conducted or institutionally supported empirical investigations into the realm of parapsychology and penned countless articles and popular books about debunking fringe phenomena. Read as an acronym, the committee’s very name, “CSICOP,” seems to be shorthand for “science cop.” However, as Michael Gordin (Reference Gordin2023, 74) has noted, this attempt at controlling and patrolling boundary issues “has not been a huge success.” Despite its popularity, fame, and countless partial victories over individual charlatans, new ones kept popping up. Seemingly, hunting down fraudsters, cheaters, bad scientists, and junk entrepreneurs is a never-ending task, which is why “the willingness of high-profile scientists to engage in efforts like CSICOP has also diminished over the years” (Gordin Reference Gordin2023, 74).
Around the same time (also in the 1970s), the new trend of debunking argumentatively bad science to convince believers in the falseness and irrationality of its arguments was counterbalanced by the work of sociologists and historians. This new field, dubbed science studies, represented a different attitude: Not taking scientific rationality for granted, these scholars wanted to understand how theories were accepted and rejected; how and why false theories look rational and attractive initially; and how fringe and pseudoscientific theories can so easily attach themselves to plausible canons in various fields and win minds for their cause.
Though popular, scientific, and yet easily approachable, many of the cops’ volumes lacked an interpretative, understanding attitude and conveyed an oversimplistic picture of the nature of science and its relation to society and values. In reviewing Friedlander’s (Reference Friedlander1995) work, sociologist Thomas Gieryn (Reference Gieryn1996, 768) contrasted the debunking project with the growing sociology literature “on boundary work, which explores how contingent demarcations of science from nonscience are achieved and sustained amidst struggles for credibility, authority, and resources.” Friedlander seems to think that we are illogical, biased, and work under pressure, so mistakes prevail – but that when we view things skeptically and act purely on rational grounds, reason must prevail, and the cops are there to enforce it, counterbalancing bias, irrationality, and gaps in reasoning. A sociologist like Gieryn also thinks that we are illogical, biased, and act under various pressures – but that is all we have; there is no sanctioned, ideal paradise where all human shackles could be thrown off and reason reigns supreme. “One suspects that Friedlander wrote his book to teach the public something about how science works, so that they might better be able to judge possibly scientific claims for themselves,” as Gieryn notes with good reason. Nonetheless, using poor assumptions – about what science is and how it actually works – to promote understanding and education can be dangerous. “This worthy goal is vitiated by his condescending explanation that people are attracted to astrology, creation science, UFOs, or ESP because they are ignorant – that’s bad sociology” (Gieryn Reference Gieryn1996, 768).
If you think that science is a methodologically united, systematic, cognitive endeavor, then you obviously won’t be able to see all the nuances, daily struggles, boundary-work, and the many far-out innovations that – historically speaking – define science. It is not just that fringe or gray zones will be invisible to you, but you also won’t be able to understand why people believe in strange things and why they can’t be convinced of the “truth” after the flaws in their thinking have been pointed out to them.
6.2 Recognizing Identity as a Means to Understanding
Recently, Helen De Cruz (Reference De Cruz, Baghramian and Martini2023) has argued, using various psychological studies and experiments, that science deniers are not motivated by purely cognitive or purely social and cultural concerns, but act on a mixture of both. Hence, various strategies are needed for various people, including a better message.
In many cases, the scientific content is still too complicated and abstract, but if there were a “debiasing way” to frame the message, it could get through (her example is providing more visualizable and mechanistic (simple causal) explanations of climate change because they are more intuitive and understandable for many people). That is the cognitive, epistemic side. On the other hand, a better messenger is also needed: As Paul A. Offit has noted, scientists are usually not trained to be communicators or entertainers. De Cruz’s point is not merely that scientists should communicate in a more indirect, engaging, funny, or amusing way, but that the messengers should be more “benevolent” and come from the audience’s in-group; for instance, “a political conservative who accepts anthropogenic climate change or an Evangelical Christian who defends evolutionary theory” (De Cruz Reference De Cruz, Baghramian and Martini2023, 12). If someone’s identity is at stake, or their sense of belonging to a group that protects them and provides values and meaning, then possibly the only way to change their mind is to show them a path that is compatible with both the new information and their old identity.
Goldenberg (Reference Goldenberg2021, 59–65) discusses the posters of a 2014 Australian vaccination campaign, which depicts fathers and mothers making the following statements: “I use cloth nappies, I eat wholefoods, and I immunise,” “I use cloth nappies, I grow veggies, and I immunise,” “I breastfeed, I use homeopathy, and I immunise.” The campaign “notably focused on identity rather than vaccine facts to improve vaccine perception and behavior, invoking many of the community’s shared values and behaviors” (Goldenberg Reference Goldenberg2021, 59). This is radically different from just saying “I immunise and you must, too” because identities are preserved, belonging is maintained, and a sense of shared care is provided.
These improved messengers have a much higher chance of reaching different segments of the public. Speaking in more general terms, the famous skeptic Michael Shermer has noted that
from my experience, (1) keep emotions out of the exchange, (2) discuss, don’t attack (no ad hominem or ad Hitlerum), (3) listen carefully and try to articulate the other position accurately, (4) show respect, (5) acknowledge that you understand why someone might hold that opinion, and (6) try to show how changing facts does not necessarily mean changing worldviews.
This is not the attitude of a fact-checker, but that of a cultural anthropologist, interpretive sociologist, or therapeutic psychologist. The idea is not to eliminate any discussion on facts or deny their importance – facts receive a lot of attention from many sides (see Leng and Leng (Reference Leng and Ivor Leng2020) for a recent discussion of this topic). But since “most beliefs are formed within a social context (and not based solely on facts) … shouldn’t social context matter in changing them?” asks Lee McIntyre (Reference McIntyre2021, xiii) in his book How to Talk to a Science Denier. To understand what motivates science deniers and pseudoscientists in forming and holding on to their beliefs, McIntyre went out into the field. He attended conferences of flat-Earthers, visited coal-mining villages prone to climate-change denial, talked with anti-vaxxers, and discussed the matter of GMOs with many people involved firsthand. He found that science communication should move from the global (television, YouTube, blogs, etc.) to the local level. Accordingly, he argues that “Facts and evidence can matter, but they have to be presented by the right person in the right context” (McIntyre Reference McIntyre2021, 73). Sitting down for hours with an anti-vaxxer and addressing all the counterarguments and questions one by one might be one way to earn trust and change minds (for references, see McIntyre Reference McIntyre2021, 208, note 47).
Nonetheless, Helen De Cruz (Reference De Cruz, Baghramian and Martini2023, 13) has emphasized recently that “while it seems plausible that benevolent testifiers can change minds, they will tend to be the most swayable ones.” She goes on to point out that
It is more difficult to reach people who are not actively looking for information that might potentially challenge their ideas. Benevolent testifiers depend on the goodwill of their potential audience to give them a hearing, or would require the active participation of leaders giving them a platform within relevant religious or political groups.
The psychological literature recognizes numerous ways in which individuals (and groups) can cope with risks and threats; one of the latest additions is “motivated ignorance.” Motivated ignorance emerges as a strategy when someone belongs to a group (having developed a social identity through it) and then becomes aware of possibly challenging information, but still chooses to ignore it. Motivated ignorance is quite familiar from everyday life, for instance, when we actively avoid possibly critical reviews of our favorite book or band, or ignore physical symptoms for fear that something serious is going on. A recent study of flat-Earthers from this perspective has shown that many of them (in online groups) engage in motivated ignorance by developing various defense mechanisms to keep possibly contradictory information at arm’s length. In this case, a shared social identity and group belonging outweigh the processing of critical information and arguments (Jones, Adams, and Mayoh Reference Jones, Adams and Mayoh2023). McIntyre (Reference McIntyre2021, 16) has noted the same, namely, that being a flat-Earther provides values, perspective, norms – in short, “an identity. It could give purpose to your life.”
As McIntyre (Reference McIntyre2021, 16) continues, “in order to change someone’s beliefs, you have to change their identity.” No matter how objective and truth-seeking science may be, as long as its products, implications, and consequences affect people, their concerns and beliefs cannot be ignored – and this is an extremely complex and intricate undertaking. To this end, scientists should also not turn a blind eye to the significance of human values. While these values, interests, fears, identities, wishes, goals, and everyday struggles are mostly studied by social scientists and psychologists, the philosophy of science should keep up and broaden its own perspective. If we cannot convince the most hardened pseudoscientists about their epistemic sins, we have to create a humane and caring environment in which they are not compelled to commit or otherwise attracted to those sins in the first place.
6.3 Vaccination, Identities, and the War on Science
Science communication and vaccination provide another example where non-epistemic values play an essential role in guiding the public around the science–pseudoscience nexus. In her recent book, Maya Goldenberg does not talk about hardcore vaccine deniers or anti-vaxxers per se: Their numbers, although growing, represent only a small percentage of society. Those who hesitate to have a vaccine injected pose the real challenge for the health system, with the number of unsure people reaching 25 percent in certain countries (Goldenberg Reference Goldenberg2021, 4). “Vaccine hesitancy” can be defined as having doubts about the effectiveness and timeliness of vaccinations, as a result of which children (often) are not vaccinated at all. To put it bluntly, many parents have doubts, worries, and fears – and therefore postpone and then forget about vaccinations. According to established medical and sociopolitical narratives, these doubters (and especially the more extreme anti-vaxxers) have waged a “war against science” and weakened its popularity and power, ultimately undermining its authority.
It is thought that one of the reasons for the war on science is ignorance. People don’t want to be vaccinated because they have misleading and false beliefs about vaccines – they don’t really know how science and vaccines work; if they knew, they would surely get vaccinated. The task, then, is simply to “enlighten,” to “teach” them, by disseminating knowledge, in line with the so-called deficit model (Miller Reference Miller1983). The central assumption of the model is that increasing public knowledge about the results of science will lead to greater appreciation of and trust in science. The hope was that by providing more information and enhancing public knowledge of (scientific) facts, people will be more inclined to support science and adopt evidence-based views. Rooted in a top-down approach, the model views the communication process as a one-way transfer of information from experts to a lay audience.
Based on the results of several empirical studies, Goldenberg concludes that for many, vaccination is not so much a scientific as a personal issue. We can easily imagine the following monologue: “Okay, I understand how vaccines work, I saw a video compiled by a well-known medical research team, and in general, vaccines work without serious complications, but how will they affect my child?” Everyone considers their own child to be special and unique in many ways, and parents know them better than anyone else: “My baby used to be sick often and she is still very weak – what if she is among those few percent who could have more serious side effects?” Parents are, quite understandably, concerned; they rely very much on their own “lived experiences” (Halpern and Elliott Reference Elliott2022) and are not as interested in the scientific consensus about a vaccine, or in which peer-reviewed studies have been published in which journals (which are politically and economically rarely entirely independent). They value the safety of their children more than any alleged scientific consensus and are much more concerned with the unfortunate possibility that their child becomes the rare exception than with acknowledging the low probability of adverse outcomes based on unapproachable scientific calculations. Until doctors, communication professionals, nurses, healthcare providers, and science communicators can respond adequately to these concerns (beyond the usual “don’t worry, everything will be fine”), it is less likely that vaccine hesitancy will decrease.
An adequate response to these fears is hard to find. Paul A. Offit, for example, who co-invented the rotavirus vaccine, has written about numerous instances in which his words were distorted and his arguments drowned out by emotions, as personal fears and values overrode scientific data. In fact, he shows that it is essentially impossible to win a case by just following the science. He once got into hot water for saying that babies could take 10,000 vaccines at once because the MMR vaccine, for example, contains only 240 different epitopes, while newborns face trillions of bacteria right after birth. These numbers are almost incomparable, and “it probably would have been more accurate to say that we can respond to at least ten thousand” (Offit Reference Offit2018, 67, emphasis in the original). Following the pure scientific data and saying that children could take (at least) 10,000 vaccines at once obviously made him appear a monster in the eyes of the concerned public. His lesson: “You are going to say things that, although scientifically accurate, you will regret. It’s unavoidable” (Offit Reference Offit2018, 67). Most scientists are not trained to communicate with the public or to handle, interpret, and understand their values empathetically. When asked whether a given vaccine is 100 percent safe, scientists cannot say “yes” because nothing in science is 100 percent certain. But if they opt for the scientifically accurate answer “no,” then they also lose: “You are not sure the vaccine is safe? Why would you recommend it to us? Who is paying you?” would run through the minds of many.
But even if scientists could avoid such communication pitfalls, people are often stubborn, biased, shortsighted, and not willing to face the truth; they are only human – all too human. When people who hesitated were told that vaccinations are safe in a harsh and authoritative tone, the result was not that they became pro-vaccine, but rather that their doubts were confirmed. The attempt at educating them backfired (Goldenberg Reference Goldenberg2021, 45); “there is no smoke without fire,” as the saying goes. They treated the evidence presented to them selectively, but this is by no means unique to vaccine doubters – we are all affected to some extent by the so-called confirmation bias, preferring data and opinions that confirm our own previously held (though often unconscious) position. This is not because “we do not want to know the truth – many of us do – but because the truth is sometimes too threatening to our self-identities and the values we cherish” (Goldenberg Reference Goldenberg2021, 46). Even if we think that everyone should accept the objective and independent facts presented to us – which are established by the supposedly one and only, crystal-clear, centuries-old method of science – people simply do not work that way.
6.4 From Scientific Method to Attitude
Perhaps, then, we should heed the insights of older philosophies and sociologies of science as well: there is no such thing as “the” method of “the one” science. Instead of such a method and a criterion defined through it, one could therefore “behaviorize” the field a little by talking about a scientific attitude. One recent example of this is Lee McIntyre’s The Scientific Attitude: Defending Science from Denial, Fraud, and Pseudoscience (Reference McIntyre2019).
While McIntyre is in many ways a follower of Popper, he clearly rejects his flirtation with the demarcation criterion, which has been widely proposed (by Popper’s followers) as a necessary and sufficient condition for “being scientific” (see Sections 2 and 3). When the criterion is too tight or too loose, things that should be counted as scientific are either excluded, or the gates are opened to objectionable and suspicious activities. If the demarcation problem (as a way of providing a necessary and sufficient criterion for what counts as science) fails, then we should look for the solution elsewhere (McIntyre does not really engage with multi-criteria systems; for more on these, see Section 4). McIntyre suggests resorting to a “scientific attitude,” and although he does not offer a fully developed narrative and psychological theory about what attitudes are, how they evolve, what is within their range and what isn’t, he does provide a promising starting point for further research. He quotes, for example, the following passage from Popper (Reference Popper, Lakatos and Musgrave1968):
What characterizes the scientific approach is a highly critical attitude towards our theories rather than a formal criterion of refutability: only in the light of such a critical attitude and the corresponding critical methodological approach do “refutable” theories retain their refutability.
The idea is summed up by McIntyre in two Popperian commitments:
(1) We seriously care about and look after empirical evidence.
(2) We are willing to change our theories in light of new evidence.
Accordingly, scientists differ from their pseudo-scientific counterparts not in their methodological and abstract logical practices (pseudoscientists also use logic, statistics, results of other fields, etc.), but in what they actually care about. Empirical evidence is one such thing, but taking that evidence seriously is another. As long as they consider the evidence seriously (or perhaps, at all), they have a scientific attitude and are thus doing science. Take Popper’s example: The classic statement “All swans are white” could seemingly be refuted by pointing to a black swan. Although it is possible to employ stratagems here, such as transforming the seemingly empirical statement (“All swans are white”) into a definition of what it means to be a swan (one has to be white), so that black swans define a new kind of bird, Popper thought that the statement must be taken as refuted by anyone who “accepts that there is at least one non-white swan” (Reference Popper and Schilpp1974b, 982–983).
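Schematically, and in our notation rather than Popper’s, the logic of the refutation can be sketched as follows: the universal generalization is falsified by a single counterexample, unless the definitional stratagem is used to immunize it.

```latex
% The universal generalization under test ("All swans are white"):
\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)

% A single counterexample suffices to refute it:
\exists x\,\bigl(\mathrm{Swan}(x) \wedge \neg\mathrm{White}(x)\bigr)
```

The definitional stratagem blocks this inference by making whiteness part of the meaning of “swan,” so that no observation can contradict the statement; this immunizing move is precisely what Popper’s critical attitude forbids.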
One could object, though, that many practices deemed to be pseudoscientific par excellence collect what they consider to be empirical evidence and change their theories accordingly. In fact, many pseudosciences develop and adapt their theories over time, collect data to prove their points (a good example would be the story of how creationism formed and evolved, moving from interpretations of the Bible to the search for and processing of alleged fossil data (see Numbers Reference Numbers2006); or one could even mention homeopathy, whose founder, Samuel Hahnemann, collected lots of data and ran some of the first trials at the turn of the eighteenth and nineteenth centuries; see Coulter Reference Coulter and Salmon1984). While pseudoscientists reject much evidence brought to the table by scientists, and thus arguably fail criterion (1), scientists also often reject results that are offered and defended by a different laboratory or data analyzed by the defenders of a competing theory.
But McIntyre (Reference McIntyre2019, 65) does not think that this is a problem, and he argues that “one need not prove that anything with the scientific attitude is science; one need only show that anything without the scientific attitude is not.” Even though pseudoscientists often exhibit, mimic, or pretend to practice certain features of the sciences, it is enough to claim that if someone does not have the scientific attitude, she is not a scientist. That is, even if we cannot always demarcate science from pseudoscience, we might be in a much better position to reclaim scientific attitude from its abusers.
Secondly, it is not at all evident what empirical evidence is and how it should shape decisions and actions.Footnote 43 If empirical evidence were to speak for itself, no one would have any problem demarcating science, or even pursuing it – and the same goes for “following the data.”
There is a lot of philosophical work to do on the nature and scope of data and evidence (this is a task of the sciences, too, of course). McIntyre quotes the following passage approvingly:
The big difference Popper identifies between science and pseudo-science is a difference in attitude. While a pseudo-science is set up to look for evidence that supports its claims […] a science is set up to challenge its claims and look for evidence that might prove it false. In other words, pseudo-science seeks confirmations and science seeks falsifications.Footnote 44
This is indeed a difference in attitude: even though evidence may be contestable and needs interpretation, scientists and pseudo-scientists use it for different reasons and purposes. Since criticism, falsification, and seeking out problems to be solved work best if someone is not left alone with their pet theory, McIntyre (Reference McIntyre2019, 49) suggests that we should not be concerned with individual scientists, but with “the larger scientific community” who share the scientific attitude as “a guiding ethos.”
This move resonates well with two general currents of contemporary philosophy. One concerns the role and importance of suitable science communication: as long as we are able to convey and communicate a humanized and engaging picture of science (Mason-Wilkes Reference Mason-Wilkes2023), with its essential scientific attitude, to the public (thus helping people develop an attitude that respects and looks for evidence and raises critical questions about norms and notions handed down across generations), demarcating science once and for all loses its urgency. The other is a general social epistemological stance that considers communities instead of isolated perspectives. If there is a large enough community, then its general criticism and discussions will decide about the evidence and its implications. As a scientist, and as a member of a caring public, you are never left alone with your evidence, concerns, and values.
A method is something that you follow or not, perhaps debating it, or even abusing it. But attitudes are more communal, at least regarding their formation, preservation, and controlled revision. There are perhaps hundreds of cases where someone has tried to turn the tools and commitments of science against it: people using statistics misleadingly or criticizing a well-established consensus just to make room for an alternative, pseudoscientific position (for instance, in the case of lobbyists and anti-vaccination campaigners) – or emphasizing repeatedly that scientists are also fraudulent, have political and social biases, and strong financial (conflicts of) interests to try and legitimize their own abuse of scientific values (recall the fear of the new demarcationists).
Although science has lost much credibility and trust for not caring enough about such cases – either by denying them or by declaring them to be individual exceptions to the rule – in the long run, and sometimes even at the height of problematic issues, science corrects itself. Posers get debunked, impostors are thrown out from the community, mistakes are repaired, and results are used in a new light. A somewhat positive reading is thus the possibility that abusers, charlatans, swindlers, and egotistical opportunists do end up making science better after all. As McIntyre says, the community will take care of such cases eventually.
Acknowledgments
We are much indebted to our reviewers for their numerous comments and critical notes. We have rewritten substantial parts of the volume, and their input has allowed us to improve the rigor and clarity of the text. The remaining errors are, of course, ours. We are also grateful to Michael Gordin, Alexandra Karakas, and Sebastian Lutz, who read an earlier version and provided helpful comments. Our sincere thanks go to Jacob Stegenga, who trusted and supported us during the writing and rewriting phase. Both of us were supported by the MTA Lendület Values and Science Research Group. Adam Tamas Tuboly would like to express his indebtedness to the Institute of Behavioural Sciences at the Medical School of the University of Pécs and Dániel Bárdos to the Department of Philosophy and History of Science at the Budapest University of Technology and Economics.
Jacob Stegenga
University of Cambridge
Jacob Stegenga is a Reader in the Department of History and Philosophy of Science at the University of Cambridge. He has published widely on fundamental topics in reasoning and rationality and philosophical problems in medicine and biology. Prior to joining Cambridge he taught in the United States and Canada, and he received his PhD from the University of California San Diego.
About the Series
This series of Elements in Philosophy of Science provides an extensive overview of the themes, topics and debates which constitute the philosophy of science. Distinguished specialists provide an up-to-date summary of the results of current research on their topics, as well as offering their own take on those topics and drawing original conclusions.