Introduction
The question of how rational humans are has been a central focus of philosophical and scientific debates for centuries, with disciplines such as behavioral economics, public policy and psychology now at the forefront. Rationality has been at the center of the major shifts brought about by the behavioral sciences, particularly behavioral economics and behavioral public policy (BPP), as these fields challenge traditional understandings of decision-making and reshape approaches to explaining and guiding human behavior.
In the late 19th and early 20th centuries, economists like Alfred Marshall, Leon Walras and Vilfredo Pareto formalized the assumption that agents possess complete rationality, operating with perfect information and making optimal decisions guided by utility maximization and cost–benefit analysis. During the second half of the 20th century, Herbert Simon (Reference Simon1947, Reference Simon1956, Reference Simon1957) challenged this model by proposing that our rationality is bounded by cognitive limitations, memory constraints and the complexity of the environment. Simon’s work redefined rationality within a context-dependent, empirical framework, laying the foundation for later developments in behavioral economics, including the study of heuristics and decision-making under uncertainty.
Following Simon, Amos Tversky and Daniel Kahneman argued that human rationality is not merely bounded, but limited and systematically prone to predictable errors (Tversky and Kahneman, Reference Tversky and Kahneman1974). Kahneman’s Nobel Prize lecture (Kahneman, Reference Kahneman2003) explicitly acknowledges Simon’s foundational contributions to the concept of bounded rationality. Esther-Mirjam Sent also recognizes this influence (Sent, Reference Sent2004). However, the Heuristics-and-Biases (H&B) research program that Kahneman and Tversky developed diverged from Simon’s emphasis on satisficing and adaptation, concentrating instead on systematic cognitive biases and the limits of human rationality. Their perspective significantly influenced public policy through the work of Richard Thaler and Cass Sunstein, whose book Nudge (Thaler and Sunstein, Reference Thaler and Sunstein2008) introduced the notion of choice architecture as a tool for leveraging these cognitive biases to improve societal well-being.
In contrast, Gerd Gigerenzer’s theory of Ecological Rationality (ER), also grounded in Simon’s work and more closely aligned with his original perspective, emphasizes the interaction between cognitive mechanisms and the structure of the environment in which decisions are made. Gigerenzer (Reference Gigerenzer2008) argues that heuristics are not flawed shortcuts, but adaptive tools finely tuned to particular contexts, enabling effective decision-making under uncertainty (Gigerenzer, Reference Gigerenzer2019). This framework challenges the H&B program by reframing many so-called biases as intelligent and efficient strategies, provided they are evaluated in light of the environments in which human reasoning actually operates. Crucially, Gigerenzer’s account takes the environment in a naturalistic sense, drawing on evolutionary psychology and biological adaptation, in contrast to Vernon Smith’s version of ER, which locates adaptation in market institutions and their capacity to aggregate dispersed knowledge (Smith, Reference Smith2003; Dekker and Remic, Reference Dekker and Remic2019). The tension between these perspectives – Kahneman’s logical rationality, which underpins the H&B approach, and Gigerenzer’s evolutionary ecological framework – reflects broader debates about how rationality should be defined and how these definitions shape public policy interventions. These disputes, often referred to as the ‘rationality wars’ (Samuels et al., Reference Samuels, Stich, Bishop and Elio2002; Gigerenzer, Reference Gigerenzer2024), highlight fundamental differences in how researchers conceptualize rationality. While Kahneman and Tversky uphold the logical coherence of Rational Choice Theory as a normative benchmark (Dekker and Remic, Reference Dekker and Remic2019, p. 301), Gigerenzer critiques this model and instead advocates for a broader, context-sensitive ecological understanding.
Despite their disagreements, both the H&B and ER traditions can be seen as contributing to a broader account of rationality. The H&B program acknowledges that humans often rely on intuitive heuristics but evaluates them against normative standards such as logic or probability theory. The ER approach, while recognizing the relevance of those standards in structured ‘small world’ contexts, emphasizes adaptiveness in uncertain, real-world environments. In this sense, neither side denies the complexity of rationality; rather, each highlights different dimensions depending on their theoretical goals and experimental designs. The conflict arises not from rejecting pluralism, but from differing views about which standards should take epistemic and methodological precedence. The ‘war’, then, centers on which model should guide research, policy and judgment in real-world contexts.
Rather than asking whether humans are rational, to what degree, or whether heuristics should serve as normative benchmarks, this paper develops an epistemological framework for understanding the diverse ways researchers – whether in the ‘rationality wars’ or in alternative approaches to human behavior – employ the concept of rationality. To situate the debate, it first sketches a historical and philosophical backdrop, drawing on Plato, Aristotle, Kant, Hume, Weber and others, to highlight enduring distinctions between theoretical and practical rationality and to examine the tension between logical and ecological accounts. This section shows that these conflicts are not new and that philosophy has long accommodated them within integrative frameworks without resorting to reductionism. Building on this foundation, it introduces Gustavo Bueno’s distinction between concepts and ideas (Bueno, Reference Bueno1993): concepts function as discipline-specific tools for precise analysis, while ideas are integrative constructs that transcend disciplines and enable broader philosophical reflection. This distinction helps clarify how rationality operates both as a technical term within scientific fields and as a transdisciplinary idea shaping normative and epistemological concerns.
The paper then critiques the reductionism found in both sides of the rationality wars, arguing that these disputes often rest on the mistaken assumption that one discipline can define rationality for all contexts. Such a stance overlooks the plural and context-sensitive nature of rationality and reduces its epistemological complexity to the confines of a single framework. Rationality, the paper argues, is not a unified entity but a multifaceted phenomenon shaped by diverse disciplinary goals and standards. Recognizing this plurality is essential for moving beyond disciplinary turf wars and toward a more nuanced, interdisciplinary understanding.
Philosophical foundations of a diverse rationality
The apparent conflicts in contemporary behavioral sciences have deep roots. Philosophy has long distinguished among irreducible forms of rationality, such as theoretical and practical, instrumental and axiological, or logical and ecological. What are now described as ‘rationality wars’ are better understood as the latest expressions of these older tensions. The following section aims to show that such tensions are not new, and that many authors have developed comprehensive frameworks which recognize this plurality without reducing one form of rationality to another.
Plato offers one of the earliest systematic accounts of rational pluralism. In The Republic (Book IV, 436a–441c), he describes the tripartite soul (reason, spirit and appetite) whose harmony depends on reason’s governance. Disorder occurs when spirit allies with appetite, producing excess at the expense of rational pursuit of the good. In the Phaedrus, this image is refined through the charioteer allegory: reason must guide two horses, one noble (spirit) and the other unruly (appetite). Crucially, appetite is not dismissed; when properly directed, it fuels the soul’s ascent. Rationality here is not mere calculation but integration of cognitive, affective and ethical dimensions, an early recognition that human agency cannot be reduced to logic alone.
Aristotle advanced this pluralism with his distinction between theoretical wisdom (sophia), which seeks universal truths, and practical wisdom (phronesis), which governs ethical action and situational judgment. Sophia aligns with the context-independent rationality of logic or probability models, while phronesis resembles context-sensitive approaches like ER, emphasizing habituation, emotional regulation and experience. Aristotle thus anticipated the modern tension between universal benchmarks of reasoning and adaptive, situated judgment. Unlike later thinkers who treat these domains as in conflict, he presents them as complementary aspects of a flourishing life.
This dual structure reappears in early modern philosophy. Hume argued that reason is the ‘slave of the passions’ and that moral motivation stems from affect rather than abstract deliberation. Kant, by contrast, insisted on reason’s autonomy, distinguishing theoretical and practical reason as separate yet equally necessary domains: one governing knowledge, the other moral obligation. Together they reveal an enduring problem: rationality is simultaneously descriptive of human cognition and normative of ethical life, and no single definition captures both.
As the concept of rationality moved from individual cognition to social analysis, Max Weber expanded its pluralism further. In Economy and Society (1978 [1922]), Weber offered a typology of human action: instrumental rationality (Zweckrationalität), value-rationality (Wertrationalität), traditional action and affectual action. These categories span calculated efficiency, ethical commitment, cultural habit and emotional impulse. Weber’s account revealed that rationality comes in different forms, some goal-directed and others value-driven, some reflective and others habitual. His insight was that rationality cannot be reduced to instrumental logic, since it is often shaped by social norms, culture and historical context. Rationality, therefore, manifests normatively and descriptively in a plurality of ways.
Raymond Boudon (Reference Boudon2003) introduced the idea of axiological rationality (actions guided by ethical values rather than material outcomes) arguing that instrumental rationality is too narrow to explain norm-driven behavior. Building on this, Echeverría and Álvarez (Reference Echeverría, Álvarez and Agazzi2008) developed a theory of Bounded Axiological Rationality, combining value-based decision-making with the constraints emphasized in bounded rationality, and showing that instrumental rationality alone cannot capture the ethical dimensions of human action.
These refinements echo a much older philosophical lineage, from Plato and Aristotle to Weber, which continues to inform behavioral science. The enduring divide between theoretical and practical rationality reappears today in the contrast between idealized models of perfect rationality and bounded or ecological approaches. This divide, however, cannot be understood apart from its historical context. As Erickson et al. (Reference Erickson, Klein, Daston, Lemov, Sturm and Gordin2013) have shown, Cold War behavioral science prioritized predictive, formalized models of behavior for policy and defense. Systems thinking, game theory, and cybernetics shaped the work of figures like Simon and Kahneman, embedding assumptions of control, optimization, and rational choice into psychology and economics. These were not merely abstract scientific models but responses to geopolitical pressures that elevated calculable, controllable forms of rationality. Against this backdrop, bounded and ecological accounts emerged as partial correctives, reclaiming a vision of rationality grounded in situated cognition, adaptiveness and context.
Some contemporary accounts revive a formal view of rationality. Steven Pinker’s Rationality (Pinker, Reference Pinker2022) defends reason in terms of Bayesian inference, logic and cost–benefit analysis, treating coherence and optimization as universal standards. This position aligns with neoclassical economics and decision theory but sidelines the practical, ethical and context-sensitive dimensions emphasized by Aristotle, Weber or Gigerenzer. By contrast, other theorists, such as Rizzo and Whitman (Reference Rizzo and Whitman2019) and Dold et al. (Reference Dold, Harper, Rajagopalan and Whitman2024), propose a process view of rationality. They stress tacit knowledge, situational judgment and adaptability, aligning more closely with Aristotle’s phronesis. Rather than measuring reasoning against static norms, these accounts treat rationality as context-responsive and morally embedded. Taken together, these contrasting perspectives show that the divide between context-independent and context-sensitive rationality is not something to be overcome, but a recurring expression of irreducible categories with deep philosophical roots.
These historical parallels show that today’s rationality debates are not mere technical disputes but part of a longer philosophical conversation. The enduring divide between context-independent and context-sensitive reasoning has persisted from antiquity to the present, resisting attempts at reduction. Recognizing this continuity clarifies that the so-called rationality wars are neither new nor genuine conflicts to be resolved, but the recurrence of parallel traditions. For public policy, this means acknowledging rationality as both logical and embodied, abstract and practical, always plural and inseparable from its historical roots.
Rationality wars today: logical rationality, ecological rationality and the heuristics-and-biases program
In the second half of the 20th century, logical rationality became the dominant framework in economics and other social sciences. As Gigerenzer notes in his personal reflection on the wars (Gigerenzer, Reference Gigerenzer2024), it rests on principles such as consistency axioms, subjective expected utility maximization and Bayesian updating. These formal tools assume that individuals act rationally by always choosing the highest-ranked available option and updating beliefs in a logically consistent way.
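To fix ideas, a minimal formal sketch of these two commitments may help; the notation is generic and is not drawn from any of the texts cited here. Writing A for the set of available options, S for the possible states of the world, p for the agent’s subjective probabilities and u for utility, subjective expected utility maximization and Bayesian updating amount to:

\[
a^{*} \;=\; \arg\max_{a \in A} \sum_{s \in S} p(s)\, u\big(o(a,s)\big), \qquad p(h \mid e) \;=\; \frac{p(e \mid h)\, p(h)}{p(e)},
\]

where o(a, s) is the outcome of choosing option a when state s obtains, h is a hypothesis and e is new evidence. On the logical view, selecting the option with the highest expected utility and revising beliefs by conditioning on evidence in this way define rational behavior independently of the decision environment.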
Karl Polanyi (Reference Polanyi2001 [1944], p. 45) warned that detaching economics from its social and historical context produced abstract, universal models of human behavior. While mathematically rigorous, this formalized view assumed that behavior remains constant across contexts, an assumption later challenged by bounded and ER perspectives.
Herbert Simon explored how decisions are made within administrative organizations (context-dependent), critically challenging the traditional concept of the fully rational ‘economic agent’. In Chapter 2 of Administrative Behavior (Simon, Reference Simon1947), Simon introduced the notion of ‘limits to rationality’, underscoring the discrepancies between classical administrative science and economics in their treatment of rationality. A decade later, in Models of Man (Simon, Reference Simon1957), he reframed this idea as ‘bounded rationality’, a term that highlighted the cognitive and contextual constraints on human decision-making while avoiding the negative connotations associated with the term ‘limits’.
Simon used ‘bounded’ to challenge the standard economic paradigms, emphasizing how decision-making is shaped by and adapts to specific environmental conditions (Simon, Reference Simon1956). Organisms do not maximize; they satisfice. Simon argued that rationality must be understood in context, as choices that seem irrational in isolation often make sense within specific environments. He illustrated this with the example of an ant navigating a beach: its erratic path may appear irrational until we account for the uneven terrain. The ant’s behavior reflects the structure of its environment, showing that rationality emerges from the interaction between agent and context (Simon, Reference Simon1996, p. 51).
Tversky and Kahneman (Reference Tversky and Kahneman1974) reframed the concept of ‘bounded rationality’ as inherent ‘limitations’ that make us susceptible to irrationality. In their H&B research program, they acknowledged the advantages of using heuristics in decision-making but highlighted how these shortcuts often lead to systematic errors and irrationalities, which became the focus of their research. Through their descriptive perspective on bounded rationality, Kahneman and Tversky showed experimentally that human reasoning is bounded and systematically biased, yet they continued to uphold logical coherence as the normative standard of rationality. The H&B program treated Bayes’ rule and related principles of probability theory as context-independent normative standards. Human reasoning was judged rational only to the extent that it conformed to these formal rules of coherence, and systematic departures were classified as biases or ‘violations’ of Bayes’ rule: ‘In a sharp violation of Bayes’ rule, the subjects in the two conditions produced essentially the same probability judgments’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, pp. 1124–1125). Calling it a ‘violation’ implies that Bayes’ theorem is the standard of correct reasoning. Similarly, in their development of prospect theory, they emphasized that ‘the normative model of decision making under risk is expected utility theory… [but] it is not an adequate descriptive model of choice’ (Kahneman and Tversky, Reference Kahneman and Tversky1979, p. 263). In their framework, rationality ideally (normatively) means consistency with the rules of logic and probability, regardless of context:
The study of decisions addresses both normative and descriptive questions. The normative analysis is concerned with the nature of rationality and the logic of decision making. The descriptive analysis, in contrast, is concerned with people’s beliefs and preferences as they are, not as they should be. The tension between normative and descriptive considerations characterizes much of the study of judgment and choice (Kahneman and Tversky, Reference Kahneman and Tversky1984, p. 341).
That is, assuming there are two moments, the descriptive (how our rationality actually works) and normative (how it should work), Tversky and Kahneman showed that in the descriptive realm, human reasoning is limited and repeatedly characterized in terms of error. Across their writings they used a striking range of labels: ‘erroneous intuitions’ (Tversky and Kahneman, Reference Tversky and Kahneman1971, p. 105), ‘sins against the logic of statistical inference’ (Tversky and Kahneman, Reference Tversky and Kahneman1971, p. 110), ‘systematic biases’ (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 209), ‘illusory correlations’ (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 223), flaws of reasoning (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 229), ‘errors of human judgement’ (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 231), ‘systematic and predictable errors’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1131), ‘the illusion of validity’ or predictions ‘quite likely to be off the mark’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1126), ‘erroneous and harmful conclusion’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1127), ‘cognitive biases’ and ‘errors of judgement’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1130), ‘preferences systematically violate the axioms of expected utility theory’ (Kahneman and Tversky, Reference Kahneman and Tversky1979, p. 263), ‘violations’ (Kahneman and Tversky, Reference Kahneman and Tversky1979, p. 285), ‘anomalies’ (Kahneman and Tversky, Reference Kahneman and Tversky1984, p. 341), ‘failures’ (Kahneman and Tversky, Reference Kahneman and Tversky1984, p. 343) and even consistent violations of ‘rational choice’ (Kahneman, Reference Kahneman2011, p. 10). They consequently underlined the normative weight of these findings: departures from expected utility ‘must lead to normatively unacceptable consequences, such as inconsistencies, intransitivities, and violations of dominance’ (Kahneman and Tversky, Reference Kahneman and Tversky1979, p. 277). While Kahneman later clarified that ‘humans are not irrational’ (Kahneman, Reference Kahneman2011, p. 411), the persistent use of the above-mentioned terms applied broadly to ‘the human mind’ (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 229), the ‘human condition’ (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1127), and to both naïve and sophisticated respondents (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1130; Kahneman and Tversky, Reference Kahneman and Tversky1984, p. 343), left little doubt that the H&B program cast ordinary reasoning as systematically deficient relative to context-independent standards of logic and probability.
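To make concrete what a ‘violation’ of Bayes’ rule amounts to in this framework, consider a stylized reconstruction of the kind of two-condition task quoted above, using hypothetical round numbers rather than the original stimuli. Given a personality description d of a person drawn from a pool of engineers and lawyers, the coherence benchmark requires that

\[
p(\text{engineer} \mid d) \;=\; \frac{p(d \mid \text{engineer})\, p(\text{engineer})}{p(d \mid \text{engineer})\, p(\text{engineer}) + p(d \mid \text{lawyer})\, p(\text{lawyer})}.
\]

If the description is equally likely to fit an engineer or a lawyer, the likelihoods cancel and the posterior must simply equal the base rate: 0.30 when the pool contains 30% engineers, 0.70 when it contains 70%. Producing essentially the same judgment in both conditions therefore departs from the coherence standard; whether that departure is an error of rationality or an adaptive response to an uncertain environment is precisely what separates the H&B and ER readings.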
Even if Kahneman later reflected that, by this definition, reasonable people cannot be fully ‘rational’, and this benchmark is ‘too demanding, because coherence is psychologically unrealizable, and too permissive, because it ignores phenomena such as regret’ (Herfeld, Reference Herfeld and Herfeld2025), it is clear that rationality meant context-independent adherence to probability theory and expected utility; departures from these standards were systematically classified as errors.
Even in their early work, however, traces of this tension appeared. Discussing availability, Tversky and Kahneman admitted that the heuristic was ‘an ecologically valid clue for the judgment of frequency’ because frequent events are typically easier to recall than infrequent ones. Yet they immediately stressed that availability is also influenced by irrelevant factors, and therefore ‘the use of the availability heuristic leads to systematic biases’ (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 209). This ambivalence – acknowledging adaptiveness but foregrounding error – anticipated Kahneman’s later struggle with the coherence benchmark.
As Banerjee and Dold (Reference Banerjee and Dold2025) emphasize, Kahneman’s late turn toward ‘reasonableness’ suggests he was searching for a more human-centered standard of rationality, though they argue this effort remained unfinished. He thus moved from an early stance in which deviations from Bayesian or expected utility norms were treated as errors, to a later recognition that many such deviations may in fact be reasonable and should not automatically be branded irrational. This resonates with Raymond Boudon’s notion of expressive rationality (Boudon, Reference Boudon1994, Reference Boudon1998), which likewise highlights that rationality involves not only formal coherence but also the ability to justify decisions in terms of values and reasons intelligible to oneself and others.
At the same time, the central contribution of the H&B program was to show that human errors are not random but systematic – observable, predictable and exploitable. This insight provided the foundation for Richard Thaler and Cass Sunstein’s approach to choice architecture, culminating in Nudge (Thaler and Sunstein, Reference Thaler and Sunstein2008). By treating biases as predictable deviations, policymakers could redesign environments to correct them, promoting choices that align with people’s own goals and enhancing well-being.
An alternative response to Simon’s legacy moved in the opposite direction. Gigerenzer developed the concept of ER, emphasizing the adaptive fit between cognitive strategies and environmental contexts. He critiques the H&B program’s ‘bias bias’ – the tendency to overemphasize errors relative to normative models like Bayesian reasoning (Gigerenzer, Reference Gigerenzer2018). For Gigerenzer, heuristics are not flaws but adaptive tools that perform remarkably well when matched to the structure of the environment. Errors arise only when there is a poor fit between heuristic and context, not because human rationality itself is deficient. Importantly, Kahneman and Tversky themselves rarely, if ever, used the label irrational; they spoke instead of biases, systematic errors and violations of rational-agent models. The widespread perception that their work depicted humans as irrational was largely the product of interpretation by others (including Gigerenzer), who read the reliance on strict coherence benchmarks as implicitly branding ordinary decision-makers irrational.
This is where a crucial distinction emerges: for Kahneman, every irrationality is an error of rationality, but not every error of rationality is irrationality. Heuristics may produce errors when judged against context-independent standards, yet in practice they often yield quick, efficient, and reasonable decisions. For the H&B program, they were errors of rationality; for ER, they are not errors at all but adaptive strategies. What counts as an ‘error’, then, depends less on psychology than on the epistemological frame one adopts.
Summing up, the ‘rationality wars’ (Samuels et al., Reference Samuels, Stich, Bishop and Elio2002) center on three main positions: the logical rationality of standard economics, which assumes a context-independent model based on consistency and maximization; the heuristics-and-biases program, which empirically documents actual decision-making while still retaining logical rationality as the normative benchmark; and ER, introduced by Simon and expanded by Gigerenzer, which critiques both traditions through an evolutionary framework. Samuels and colleagues argue, however, that the wars are overstated: once we distinguish between types of rationality (e.g., procedural vs. substantive), the H&B program and Gigerenzer’s heuristics approach appear less opposed than complementary, with each highlighting different aspects of cognitive performance under varying conditions.
Call it war, conflict, battle, dispute or a mere misunderstanding, one thing is clear: the relationship between logical and ecological rationality has profound implications for BPP. The neoclassical economics approach, grounded in logical rationality, underpins many policy tools, including nudges, which assume a universal model of human behavior aligned with normative principles of maximization and consistency. As Thaler and Sunstein make clear in their book Nudge (Thaler and Sunstein, Reference Thaler and Sunstein2008), their benchmark is the mythical Econs, who make choices based on the rational assessment of the expected costs and benefits of each option (Thaler and Sunstein, Reference Thaler and Sunstein2008, pp. 6–7). As they argue, ‘unlike Econs, humans predictably err’ (Thaler and Sunstein, Reference Thaler and Sunstein2008, p. 7). One of the reasons behind nudges is precisely that humans ‘are not homo economicus; they are homo sapiens’ (Thaler and Sunstein, Reference Thaler and Sunstein2008, p. 7), and thus need to be steered toward the choices they would make if they were Econs: the paternalistic aspect of nudges ‘lies in the claim that it is legitimate for choice architects to try to influence people’s behavior in order to make their lives longer, healthier, and better [considering that] individuals make pretty bad decisions – decisions they would not have made if they had paid full attention and possessed complete information, unlimited cognitive abilities, and complete self-control’ (Thaler and Sunstein, Reference Thaler and Sunstein2008, p. 5), that is, the decisions they would have made if they were Econs, the normative benchmark for their claims. However, Gigerenzer critiques this approach, arguing that nudges, particularly those rooted in the H&B framework, manipulate individuals by exploiting their cognitive limitations without equipping them to make better decisions independently. The ER tradition advocates an alternative centered on ‘boosts’ (Grüne-Yanoff and Hertwig, Reference Grüne-Yanoff and Hertwig2016) and educational tools, such as fostering ‘risk savvy’ individuals who can navigate complex environments using adaptive heuristics (Gigerenzer, Reference Gigerenzer2014). Gigerenzer emphasizes empowering individuals with the knowledge and skills to make decisions that align with their own values and goals, rather than relying on paternalistic interventions. This divergence challenges BPP to reconcile these perspectives: should it prioritize correcting the above-mentioned perceived deviations through nudges, or should it embrace ER by fostering tools that enhance individuals’ natural adaptive capabilities? In this sense, the very conception of rationality one holds deeply shapes how policy problems are defined, and which tools are considered legitimate or effective. A framework rooted solely in logical rationality may overlook the contextual, emotional and cultural dimensions of real-world decision-making. Balancing these approaches could lead to more nuanced and ethical policy solutions that respect autonomy while addressing diverse decision-making contexts.
Rationality: a concept and an idea
In the social sciences, rationality is most often understood in instrumental terms (Nozick, Reference Nozick1993, p. 133): it tells you how to get to your goals, not what those goals should be. Yet in practice, especially in economics, this instrumental view is often coupled with the assumption that the relevant goal is to maximize utility. This has led some critics to argue that the social sciences reduce rationality to a narrow, utilitarian framework, whereas others maintain that rationality is plural and manifests in ways that go beyond the instrumental model. This is the case of Max Weber (Reference Weber2019 [1921]) and Raymond Boudon (Reference Boudon1998) who, critiquing the reductionism of rational choice theory, argued that rationality also involves having reasons for our beliefs or actions, reasons that may stem from our conviction that something is the right thing to do, regardless of its outcomes. If we define rationality only in instrumental terms, they warn, we fail to account for decisions that are guided not by expected consequences but by principles or values. People often act not because of what will result from their actions, but because those actions align with what they believe is right, and this, too, constitutes a form of rational action.
These differences in how researchers approach the term ‘rationality’ are not merely semantic; they reveal deeper tensions among epistemic disciplines in their underlying assumptions about human behavior, knowledge, and decision-making. To make sense of these divergent uses, we need a framework that does not prematurely unify them but instead clarifies how they function differently across domains. Spanish philosopher Gustavo Bueno’s distinction between concepts and ideas (Bueno, Reference Bueno1993) provides one such framework. Concepts, in this view, are operational tools developed within specific scientific disciplines, bounded by the methods, goals, tools and internal logics of each specific field, such as economics, cognitive psychology or sociology. Ideas, by contrast, are open-ended philosophical constructs that allow us to reflect on, compare, and sometimes integrate these varied uses.
Understanding rationality as both a set of discipline-specific concepts and a broader philosophical idea helps explain why no single definition suffices and why debates like those between Kahneman and Gigerenzer persist. This view highlights pluralism within and across disciplines and supports a non-reductionist stance, not by assumption, but due to the limits of any single framework to fully capture rationality’s normative and descriptive dimensions. The approach taken here examines, therefore, how disciplines define rationality to justify knowledge and what assumptions they make about human reasoning and agency.
To see the broader implications of this distinction, note that rationality is not unique in its plurality. Other interdisciplinary notions shift meaning across contexts: in political science, freedom refers to civil liberties and institutional guarantees; in psychology, to the experience of agency; in physics, to the unconstrained motion of particles. Similarly, entropy in thermodynamics refers to the number of possible microstates of a system (often described as disorder); in biology, it is invoked more metaphorically to denote instability or structural breakdown; in information theory, it measures uncertainty; and in ecology, it is used to capture diversity and sometimes systemic stability.
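The entropy example can even be stated formally, which makes the divergence easy to see; the two best-known definitions (given here only for illustration) are mathematically related but answer different disciplinary questions:

\[
S \;=\; k_{B} \ln W, \qquad H(X) \;=\; -\sum_{i} p_{i} \log_{2} p_{i}.
\]

Boltzmann’s formula counts the number of microstates W compatible with a given macrostate, scaled by the Boltzmann constant, whereas Shannon’s formula measures the average uncertainty, in bits, of a random variable X with outcome probabilities p_i. Each definition is exact and fruitful within its own field, yet neither can stand in for the other, nor for the looser biological and ecological uses, without loss.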
Building on this, ideas in Bueno’s framework transcend field-specific uses. They function as second-order constructs that relate concepts across domains, emerging from the friction among scientific, artistic or even mythological practices. Ideas act as bridges that connect disciplines, enabling synthesis and interdisciplinary reflection. While concepts are precise and context-bound, ideas operate at a higher level, examining overarching structures and tensions that cross disciplinary boundaries.
From this philosophical standpoint, working with ideas means revisiting familiar concepts and examining the tensions within and between them. Take the question ‘Are we rational?’ – a deceptively simple query with divergent disciplinary answers. Behavioral economists cite experiments like the Ultimatum Game or the Asian Disease problem, showing deviations from expected utility theory. Cognitive neuroscientists look at the neural bases of decision-making; evolutionary biologists treat heuristics as adaptive traits; anthropologists emphasize the role of norms in shaping reasoning. Each reflects the assumptions of its field. Yet beyond these perspectives lies a broader philosophical inquiry: how the very idea of rationality is constructed, justified and contested across disciplines and cultures.
This distinction between concepts and ideas is especially useful for terms like rationality: clear in isolated contexts but unstable when stretched across fields. Instead of forcing a single definition, it invites us to see how rationality functions differently depending on epistemic aims and methods. Apparent disagreements are not mere terminological issues but reflect deeper normative and methodological tensions that cannot be solved by definition alone.
Rationality wars and disciplinary divergences
Rationality’s meaning, therefore, shifts across different scientific and disciplinary contexts. In an attempt to unify the sides of the war, Samuels et al. (Reference Samuels, Stich, Bishop and Elio2002) argued that the apparent conflict between the H&B tradition and the ER program is overstated, suggesting the two approaches are not genuinely at odds but simply address different facets of the same phenomenon. However, this interpretation underestimates the depth of their disagreement: what appears as an internal scientific debate is in fact grounded in distinct disciplinary logics and epistemic goals. The H&B framework, rooted in cognitive psychology, emphasizes deviations from formal norms of logic and probability; the ecological perspective, influenced by evolutionary psychology and adaptive systems theory, defines rationality in terms of environmental fit and real-world success. These are not interchangeable frames, nor do they offer merely complementary insights into a unified notion of rationality (one pessimistic, the other optimistic). Rather, they instantiate different scientific constructs, each valid within its own methodological paradigm, that reflect related but often incompatible commitments to what counts as rational thought and behavior.
The plural expressions of rationality clarify why debates like those between H&B and ER are not merely empirical disagreements but reflect broader theoretical commitments. Each disciplinary concept of rationality operates within distinct methodological assumptions and normative frameworks. Recognizing rationality as both a concept (shaped by scientific practice) and as a philosophical idea (that frames these practices in relation to one another) is not a matter of abstraction; it is essential for achieving conceptual clarity in applied fields like BPP, where these divergent models of human behavior are routinely brought into contact.
These conceptual differences are reinforced by divergent experimental practices that shape how rationality is defined and measured in each tradition. As Lejarraga and Hertwig (Reference Lejarraga and Hertwig2021) note, the rationality wars are shaped not only by theory but also by experimental method. The H&B tradition typically relies on controlled tasks that expose deviations from normative logic, while ER emphasizes success in real-world environments.
Recent philosophical accounts further support the view that rationality cannot be reduced to a single disciplinary framework. Searle (Reference Searle2001) critiques the belief-desire model foundational to neoclassical decision theory, arguing that human rationality includes normative commitments like obligation and intention that are not reducible to instrumental calculation. Stich (Reference Stich1990), in turn, proposes a radical cognitive pluralism, rejecting the idea of a universal reasoning standard and challenging the coherence of folk psychological notions of rationality. Meanwhile, Mercier and Sperber (Reference Mercier and Sperber2017) offer a functional reinterpretation: they argue that reason evolved not to optimize individual decisions but to facilitate argumentation and justification in social contexts. Taken together, these perspectives question both logical and ecological models of rationality as complete accounts, reinforcing my claim here: that rationality must be understood as both a set of diverse scientific constructs (concepts) and a philosophical idea that transcends them.
Some scholars have also proposed a less adversarial interpretation of the rationality wars. For example, Sturm (Reference Sturm2012) suggests that what appear to be conflicting views might instead represent distinct but compatible layers in a more comprehensive theory of rationality. Beyond definitions, these debates reflect deeper methodological tensions. As Sturm (Reference Sturm2012) notes, disagreement arises not only over normative benchmarks but also over what constitutes acceptable evidence or proper experimental design. These methodological choices help define what each tradition counts as ‘rational’, deepening the conceptual divide. Rather than choosing between logical coherence and adaptive efficiency, this approach acknowledges that different models may serve different explanatory or normative purposes across varied contexts.
To illustrate what I am proposing here, consider the following example regarding the debate between determinism and free will in human agency. For a neuroscientist like Robert Sapolsky, the concept of free will may be dismissed entirely, consistent with a deterministic view of brain function (Sapolsky, Reference Sapolsky2023). But in ethics, free will underpins moral responsibility; in theology, it is intertwined with the notion of sin; and in political theory, it is central to autonomy, accountability and citizenship. Rejecting free will outright across different epistemic fields and sub-fields on neuroscientific grounds implicitly elevates one conceptual framework over others, potentially marginalizing (in a reductionist manner) the contributions of disciplines that examine different dimensions of the same term under various methodological approaches and epistemic frames.
Recognizing this plurality is crucial. It reminds us that while specialized knowledge provided by the clear demarcation of scientific disciplines is powerful, it often loses coherence when transplanted across contexts without philosophical reflection. Ideas serve as bridges among such disciplinary concepts, enabling a more comprehensive understanding of complex human phenomena. For instance, when authors like Dan Ariely title their books Predictably Irrational or Irrationality: A History of the Dark Side of Reason, they appeal to the philosophical idea of rationality to frame human behavior in broad, cross-contextual terms. Yet the explanations they offer tend to rely on the conceptual tools of behavioral economics or cognitive psychology. Philosophically speaking, the idea of rationality cannot be confined to one explanatory register. Attempts to do so flatten its complexity, silencing the plural voices – scientific and normative – that have long shaped its meaning. Accordingly, the demarcation of each epistemological discipline demands that rationality as a concept should stay within its borders. When we consider an interdisciplinary approach, rationality stops being a concept (with all the epistemological benefits that this conveys) and becomes a philosophical idea that cannot be used to refute disciplinary concepts.
Interdisciplinary tensions and category mistakes
To better understand the risks of disciplinary overreach, let us consider how category mistakes arise when concepts are misapplied across fields, flattening the richness of interdisciplinary inquiry. A helpful way to illustrate this issue is through a classroom analogy: imagine a student is asked to analyze a sentence written on the board. The teacher expects a comment on subject–verb agreement or syntactic structure. Instead, the student raises their hand and proudly announces, ‘The sentence is composed of chalk, that is, calcium carbonate, CaCO₃.’ Technically correct, but entirely beside the point. The student has mistaken the material substance for the object of analysis, confusing chemical composition with linguistic function. In the same way, when scholars reduce the idea of rationality to the confines of their own discipline (say, neural pathways, cognitive errors or mathematical models) they may offer accurate insights within their domain but fail to address the broader conceptual terrain.
For BPP, the challenge is not choosing a single definition of rationality but recognizing that different disciplinary accounts highlight different aspects of behavior and justify different kinds of intervention. As a multidisciplinary subfield drawing on economics, psychology, evolutionary theory, political science and philosophy, BPP must treat these perspectives as complementary rather than exclusive. The H&B program highlights systematic biases relative to formal norms, justifying nudges and incentives that counter predictable deviations. Logical rationality supplies coherence benchmarks such as probability theory and expected utility, which are most applicable in contexts of risk. By contrast, ER emphasizes the adaptive match between heuristics and environments, grounding boosts that strengthen decision-making capacities, especially in situations of uncertainty. Axiological rationality adds another layer, reminding us that policies are not only technical fixes but also draw on values and social norms. Depending on the problem, interventions may need to operate in parallel, follow a sequence, or, more rarely, rely on a single dominant tool – but no framework or instrument suffices on its own.
This pluralistic view becomes clearer when we look at practice. In retirement savings, for example, nudges like automatic enrollment could address the procrastination identified by H&B, while boosts such as simple heuristics (‘save 15% or at least the employer match’) reflect ER, and matching schemes can provide institutional reinforcement consistent with logical rationality. In vaccination campaigns, defaults and reminders can reduce cognitive frictions highlighted by H&B, while community-centered trust-building reflects ER and axiological rationality, showing how interventions work best when adapted to heuristics, social norms and values. In energy conservation, social-norm nudges (logical/axiological rationality) are complemented by boosts that teach actionable rules of thumb (ER) and institutional mechanisms like time-of-use pricing (logical rationality) to create durable change.
In each case, what makes the intervention effective is not one single definition of rationality but the ability of BPP to treat rationality as plural and to apply interventions that respect this plurality. This pluralism avoids reductionism and can equip BPP with a more flexible, context-sensitive portfolio of tools, capable of matching the epistemic logic of each discipline to the behavioral challenge at hand.
From pluralism to integration: toward an inclusive framework
Recognizing the conceptual plurality of rationality is only the first step. What follows is a closer look at how different disciplines can coexist without collapse, contributing complementary insights toward a more integrative view. Such disciplinary layering reinforces the central claim of this paper: rationality is not a unitary construct but a multifaceted concept that takes on different attributes depending on disciplinary context and epistemological goals. David McFarland’s tripartite distinction in The Biological Bases of Economic Behavior (McFarland, Reference McFarland2016), building on Kacelnik’s (Reference Kacelnik, Hurley and Nudds2006) evolutionary framework, offers a compelling illustration. He distinguishes between P-rationality (involving conscious deliberation), E-rationality (emphasizing efficiency) and B-rationality (relating to biological fitness). Using the example of a dog catching a mouse, McFarland shows how a single behavior can be interpreted as rational in different ways depending on whether the explanation focuses on reasoning, efficiency or evolutionary success. This model reinforces the view that rationality must be analyzed through multiple conceptual lenses, each valid within its own domain.
A similar effort to broaden the conceptual scope of rationality appears in Rizzo and Whitman’s theory of inclusive rationality. Their approach attempts to integrate psychological and contextual factors into the rational decision-making models of neoclassical economics. However, unlike the framework proposed here, which emphasizes that rationality must be treated differently according to the epistemological assumptions and methods of each field, Rizzo and Whitman pursue synthesis under a unified conceptual umbrella (Rizzo and Whitman, Reference Rizzo and Whitman2019, p. 17). While this ambition has clear merits, it still presumes that a general model can accommodate all perspectives without loss.
At this point, critics might contend that these conceptual models refer to fundamentally different phenomena. However, despite disciplinary differences, they all ultimately address a common subject: human behavior and rationality. Human beings, as complex agents, are examined through multiple lenses – cognitive, social, moral and biological, each highlighting distinct aspects of our nature. This complexity not only justifies the existence of multiple, discipline-specific definitions of rationality but also reflects the methodological diversity required to explore its many dimensions.
This is precisely where Gustavo Bueno’s distinction between concepts and ideas becomes most useful. It helps clarify the pitfalls of trying to resolve philosophical questions using only disciplinary tools. Each field conceptualizes ‘rationality’ differently, just as it does with other abstract notions like ‘love’, which is understood variously in theology, psychology, biology or art. These disciplinary perspectives are not wrong, but they are partial. Each conceptualization is shaped by the methods and aims of its field and cannot be reduced to another without oversimplifying what is at stake. Just as the idea of ‘love’ transcends individual domains, so too does the idea of ‘rationality’. It offers a reflective, overarching framework that brings these perspectives into conversation, without collapsing them into one.
Seen in this light, the rationality wars may be less inevitable than they appear. Both protagonists, Kahneman and Gigerenzer, occasionally acknowledge each other’s perspective: Gigerenzer concedes that rational choice theory within the H&B framework can work in ‘small worlds’ or situations of risk (Gigerenzer, Reference Gigerenzer2018, p. 329), while Tversky and Kahneman recognized that some heuristics display ecological validity (Tversky and Kahneman, Reference Tversky and Kahneman1973, p. 209) and can at times be useful (Tversky and Kahneman, Reference Tversky and Kahneman1974, p. 1124). Such concessions reveal moments when each thinker implicitly moves beyond the strict boundaries of their disciplinary framework: when Kahneman speaks of ‘reasonableness’ or accepts that heuristics can sometimes be functional, and when Gigerenzer allows that logical rationality can serve as a standard in risky environments. In these moments, they adopt a broader, more philosophical standpoint, hinting at the central thesis of this paper: that both approaches are valid, but only within the bounds of their respective methodological frameworks and epistemic demarcations. Yet when framed as a war, with each side competing for an absolute definition of rationality, both perspectives risk slipping into disciplinary reductionism, elevating their own framework to the status of epistemological monism. Gigerenzer critiques Kahneman and current behavioral economics for clinging to logical norms as universal benchmarks and for reducing problems to ‘systematic biases’ while neglecting factors like market failures or criminal activities (Gigerenzer, Reference Gigerenzer2018, p. 305). Tversky and Kahneman, in turn, argue that Gigerenzer misrepresents their theoretical position and ignores key evidence (Tversky and Kahneman, Reference Tversky and Kahneman1996, p. 582). Gigerenzer situates heuristics instead within a broader ecological–evolutionary framework, arguing that they belong to an ‘adaptive toolbox shaped by evolution to deal with uncertainty’ and should be judged by their environmental fit rather than against universal logical norms (Gigerenzer, Reference Gigerenzer2010, p. 20). Although both Kahneman and Gigerenzer work within judgment and decision-making, their conflicting views on rationality arise from distinct scientific lineages. Kahneman, rooted in cognitive psychology, highlights how mental processes deviate from normative logic, relying on models of optimization. Gigerenzer, drawing on evolutionary psychology and Simon, stresses the adaptive fit between cognition and environment, rejecting universal norms in favor of context-sensitive heuristics. Economists, meanwhile, follow neoclassical models grounded in formalism and normative consistency. What appears as debate within a single field thus reflects deeper divergences in method, aims and philosophical assumptions.
Concluding remarks: interconnection, not reductionism
This paper has argued that the so-called rationality wars are neither new nor genuine wars, and that rationality is not a singular construct but a pluralistic and interdisciplinary one. Each field defines rationality according to its own methods, goals, and commitments; no single framework can exhaust it. Yet irreducibility does not mean fragmentation. On the contrary, disciplines are enriched by their interconnections, provided we resist the temptation to collapse one into another.
In BPP, where ethical, political, economic, and psychological perspectives converge, the need to distinguish between disciplinary concepts and the broader philosophical idea of rationality is particularly acute. Studies such as Sahlin and Brännmark (Reference Sahlin and Brännmark2013) and experiments on moral nudges (Capraro et al., Reference Capraro, Jagfeld, Klein, Mul and de Pol2019) illustrate the promise – but also the risks – of crossing boundaries without respecting them.
The current clash between logical and ecological rationality is rooted in long-standing philosophical debates, from Plato’s tripartite soul to Aristotle’s distinction between theoretical and practical reason. These enduring tensions, recast in behavioral science, show that the struggle to define rationality persists despite centuries of progress. Today, it appears as a disciplinary conflict: evolutionary psychology promotes ER, while behavioral economics defends logical rationality. Yet both risk reductionism by collapsing the broader idea of rationality into their own framework – whether as economic consistency or adaptive fitness.
Disciplines like BPP must therefore engage with rationality as both a multifaceted concept and a unifying idea. By balancing ecological and logical frameworks, while also acknowledging instrumental, axiological and social dimensions, policy design can become both context-sensitive and ethically grounded – enhancing not only practical success but also legitimacy.
Acknowledgments
I am grateful to Malte Dold, Mario Rizzo and Adam Oliver for their thoughtful suggestions and constructive comments, which significantly strengthened this work. I also thank the anonymous reviewers for their careful reading and valuable feedback.
Competing interests
I declare no conflict of interest.