1. Introduction
Recent advances in consciousness research, animal sentience research, neural organoids, and artificial intelligence (AI) have made the epistemological problem of other conscious minds (what, if anything, justifies the attribution of phenomenal properties to other entities?) more relevant than ever. Because finding consciousness in such systems is likely to have significant ethical and societal implications (Birch 2024; Levy 2024; Shepherd 2018; Siewert 1998), this problem has become especially pressing. Accordingly, tests for consciousness are repeatedly being proposed and discussed (Andrews 2024; Bayne et al. 2024; Dung 2022; Kazazian, Edlow, and Owen 2024; Negro and Mudrik 2025; Schneider 2019) in an attempt to better understand the distribution of consciousness in nontrivial cases, both within the human population and outside of it. These populations include, among others, AI systems, nonhuman animals, neural organoids, infants, and fetuses (see table 1 for references to the relevant literature on each population). Here we refer to these populations as nonstandard systems and frame the epistemological problem of other conscious minds in terms of other consciousnesses, asking how the current science of consciousness justifies generalizations about consciousness in these nonstandard systems.
Table 1. Key papers on the possibility of consciousness in nonstandard systems
Research on consciousness tests focuses mostly on the types of data (e.g., markers; Andrews 2024) that can serve as evidence for attributing consciousness to various target systems. Here we address instead the complementary issue of defining the reasoning that underlies justified attributions of consciousness to different target systems, independently of the type of evidence one decides to exploit.
Thus the driving question of this article targets the logical structure of the reasoning we employ to address the epistemological problem of other consciousnesses: What is the strongest inferential machinery we could use to justify the attribution of conscious properties to nonstandard systems? Agreement on the logical basis of our attribution practices is needed to clarify the argumentative structure that consciousness researchers ought to employ when concluding that a system has, or does not have, phenomenal properties. This is important both for assessing existing arguments for consciousness in nonstandard systems and for formulating future arguments of this sort.
Traditionally discussed within the more general problem of other minds, the epistemological problem of other consciousnesses has been approached through two different forms of reasoning: analogical reasoning and reasoning from the inference to the best explanation (IBE-based reasoning, for short). These are further rooted in two different inferential schemata: inductive inference for analogical reasoning and abductive inference for IBE-based reasoning. In the philosophical literature, these types of reasoning have often been presented as prima facie competing and incompatible. For example, Hyslop (1995) champions analogical reasoning while exhibiting the flaws of IBE-based reasoning, whereas Pargetter (1984) does the opposite.
This attitude is partly mirrored in the current consciousness science literature (for a related discussion focused on the science of animal consciousness, see Heyes 2008): On the one hand, scholars discussing and developing different consciousness tests (e.g., the command-following test; for discussion, see Bayne et al. 2024; Owen et al. 2006) extrapolate consciousness via analogical reasoning; on the other hand, Tye (2017) suggests an inferential strategy that can be seen as similar to IBE-based reasoning, while Chalmers (1996) and Passos-Ferreira (2023) explicitly adopt it.
However, as Melnyk (1994) has suggested, these two strategies are not logically incompatible, and indeed, in the current neuroscience of consciousness, many attributions of consciousness seem to incorporate aspects of both analogical reasoning and IBE-based reasoning (e.g., Barron and Klein 2016). Birch (2022, 134) puts it explicitly: “What we should do … is build up a list of the behavioural, functional and anatomical similarities between humans and non-human animals, and use arguments from analogy and inferences to the best explanation to settle disputes about consciousness” (emphasis added).
Here we further develop this approach and provide a philosophical backbone to justify it, suggesting that the conjunction of analogical reasoning and IBE-based reasoning is the most promising approach when trying to determine which systems and organisms are conscious. We propose that the argument from analogy and the IBE-based argument are compatible and complementary and that they can be fruitfully combined to deal with the epistemological problem of other consciousnesses. We do so by introducing analogical abductive arguments and by showing that they can be used to overcome the problems that afflict analogical reasoning and IBE-based reasoning. We accordingly aim to provide a general structure for the “inferential machinery” (i.e., argument) that can be used to address the epistemological problem of other consciousnesses, independently of the specific fuel (i.e., test or evidence) one can put in that machine.
Hence our project has both descriptive and normative components. The descriptive goal of the project is to substantiate analogical abduction as a way of capturing and systematizing a type of inferential practice that relies on both analogical and IBE-based strategies for extrapolating consciousness. We do so by introducing a novel argument schema that incorporates elements of both strategies. The normative aspect of our analysis adds a further layer: We argue that analogical abduction is the most compelling inferential strategy for dealing with the epistemological problem of other consciousnesses. Accordingly, we suggest that consciousness science would benefit from adopting this form of reasoning to systematically build and assess arguments for the attribution of consciousness to nonstandard systems.
Moreover, we take the epistemological problem of other consciousnesses to be concerned primarily with the distribution of consciousness problem (i.e., is this system conscious or not?), rather than with the quality of consciousness problem (i.e., how is the system conscious/what is the system conscious of?) (Andrews 2024), so we will frame our discussion to address the distribution question. However, analogical abductive arguments can be leveraged to address the quality question as well.
In section 2, we present the two traditional forms of reasoning identified in the philosophical literature to approach the epistemological problem of other consciousnesses, namely, the argument from analogy and the IBE-based argument. In section 3, we introduce two challenges that any form of extrapolative reasoning must meet to be successful. In section 4, we show that analogical abduction is a promising account to deal with the epistemological problem of other consciousnesses. In section 5, we consider some limitations of this account and conclude that, although analogical abductive arguments cannot currently provide a definitive solution to the epistemological problem of other consciousnesses, they have the potential to do so and therefore constitute our best available option.
2. The epistemological problem of other consciousnesses and the two traditional forms of reasoning to approach it
The epistemological problem of other consciousnesses can be seen as an instance of a more general philosophical problem, the problem of extrapolation (Baetu 2024): How can one justifiably generalize from an epistemically privileged domain to a less epistemically privileged domain (Guala 2010; Steel 2007; Thagard 1988)? Following the standard use in philosophy of science, we refer to the epistemically privileged domain as the source domain (source) and to the domain of interest as the target domain (target).
Depending on the scope of the extrapolative argument for other consciousnesses, source and target can be identified in different ways. For the purposes of this article, source will normally refer to the domain of neurotypical adult humans, from which the science of consciousness gathers most of its knowledge and on which theories of consciousness are generally built and tested (Mudrik et al. 2023; Mudrik et al. 2025; Seth and Bayne 2022; Yaron et al. 2022), whereas target will normally refer to nonstandard systems in general.
Extrapolations in consciousness science would be fairly easy to justify if models of source-consciousness (i.e., human-consciousness) were built in a context-independent way, that is, if the claims made about consciousness were evidently true irrespective of the characteristics of the source population. For example, theories of consciousness could be formulated in terms of causal powers or capacities, which are by definition context-independent (Cartwright 1994; Hiddleston 2005; Steel 2007, chap. 5). If this were the case, theories of consciousness could explain consciousness by pointing at universal laws and therefore by employing explanatory constructs that are not dependent on the particular domain of applicability (in the same way as gravitational laws are supposed to apply to apples as well as to distant planets). This would make theories of consciousness conform to the requirement of universality (Kanai and Fujisawa 2024), rendering explanations in consciousness science closer to explanations based on universal laws, as in physics. For example, the integrated information theory (Albantakis et al. 2023; Tononi et al. 2016) aspires to provide such a context-independent explanatory structure, given that it seeks to explain consciousness by relying on the notion of cause–effect powers of the physical world (but see Lau and Michel 2019; Mediano et al. 2022; Merker, Williford, and Rudrauf 2021 for criticisms of its ability to do so).
Nevertheless, the theory’s axioms are based on phenomenological explorations of human experience, so the foundation of the theory might still be context-dependent, despite the proclaimed aspiration (see Bayne 2018).
Independently of how specific context-independent explanations of consciousness are constructed, the more general point is that it is questionable whether explanations in the biological sciences should indeed follow the same explanatory practices used in physics (Craver 2007, 2002), especially because many biological phenomena seem to be domain-dependent (e.g., an explanation of digestion in humans does not apply to cows, and theories of protein synthesis might not generalize to extraterrestrial life). It seems to be an open question, then, whether consciousness is the type of phenomenon that should be accounted for by universal generalizations or whether, instead, its explanation should be domain-dependent.
This article surveys some possibilities for attributing consciousness to other systems via extrapolative inferences, even if the explanation of consciousness indeed turns out to be context-sensitive and not universal. We focus on the two prima facie different and alternative strategies suggested in the philosophical literature for formulating extrapolative inferences: IBE-based reasoning (Pargetter 1984) and analogical reasoning (Hyslop 1995; Hyslop and Jackson 1972); for a general introduction, see Avramides (2000).
Both strategies seem well suited for tackling the epistemological problem of other consciousnesses, because they build on ampliative inferences, in which the conclusion conveys more information than the premises. We briefly present them in the following two sections.
2.1. IBE-based reasoning
IBE-based reasoning exploits abductive inferences, namely, inferences drawn in virtue of the explanatory power of the inferred hypothesis (Lipton 2004; Psillos 2002). The standard example is to infer, from the observation of wet streets, that it might have rained last night, because this conjecture is the best explanation for the evidence.
This argumentative strategy can be applied to the epistemological problem of other consciousnesses by noticing that some publicly observable properties of a system of interest are best explained by the hypothesis that consciousness is required for their instantiation. The argument (adapted from Psillos 2002, 614) can be formalized as follows:
P1. D is a collection of data about publicly observable properties of system S in target.

P2. The hypothesis H that S is conscious explains D (would, if true, explain D).

P3. No other hypothesis explains D as well as H does.

Therefore,

C. H is probably true (i.e., S probably is conscious).
2.2. Analogical reasoning
Although there is much debate on how to properly characterize analogical arguments (for a comprehensive discussion, see Bartha 2019, 2010), a general enough form of analogical reasoning can be captured in the following way: We are justified in inferring that two systems are similar along certain unobserved dimensions if they are also similar with respect to some observed dimensions, given prior knowledge that, in a given domain, the observed and unobserved dimensions of interest co-occur (Bartha 2019; Hesse 1965).
This reasoning can be applied to the case of other consciousnesses: Given that I know that certain brain structures and activity, and/or certain functions and behaviors, reliably and systematically correlate with certain conscious properties in us (i.e., neurotypical adult humans), I can infer that similar conscious properties will be present in a system with brain structure and dynamics, and/or functions and behaviors, analogous to ours.
The analogical argument for other consciousnesses can be formalized as follows:
P1. D is a collection of data showing that there is a systematic and reliable correlation between publicly observable properties and consciousness in the source domain (i.e., neurotypical adult humans).

P2. D* is a collection of data about publicly observable properties of system S in target.

P3. D* suggests that publicly observable properties of S are similar to those of source (i.e., neurotypical adult humans).

Therefore,

C. S probably is conscious.
3. How to extrapolate successfully
What does it take for an extrapolation to be successfully implemented? Following Steel (2007), we posit that any successful extrapolation must solve two problems: first, the extrapolator’s circle (how to say something informative about the phenomenon in target given only partial knowledge of the target system and without assuming the presence of the phenomenon in target) and, second, the problem of difference (how to justify inferences about the phenomenon in target given relevant dissimilarities between source and target); this pair of problems was originally introduced by LaFollette and Shanks (1996).
We first examine how IBE-based reasoning might deal with these challenges. This strategy is not directly threatened by the problem of difference because it does not explicitly rely on similarities between source and target. Moreover, it can solve the problem of difference by denying that differences between source and target are explanatorily relevant for consciousness. This requires our best explanation of source-consciousness to successfully discriminate between properties (and their dimensions) that are relevant for consciousness and properties (and their dimensions) that are irrelevant for consciousness. Arguably, this requirement is problematic given the current theoretical landscape, as it is questionable whether it holds for any of the presently available explanations or theories of consciousness. However, this is a flaw of current theories, not of the argumentative strategy itself, so we set it aside for now; let us assume that this problem can be solved by the IBE-based approach.
Even if so, we argue that this strategy fails to solve the problem of the extrapolator’s circle. To explain why this is the case, we should first clarify exactly which cog in the IBE-based argument for other consciousnesses links what we know about source to what we say about target.
This link is found in P2 in the aforementioned schema for IBE-based arguments: “The hypothesis H that S is conscious explains D (would, if true, explain D).” Here D refers to data about a system in target, more precisely about publicly observable properties of the system, but why are we justified in connecting such data to consciousness? In other words, why is H better than an alternative hypothesis (H*) that posits that those data can be explained by unconscious processes? If we want to select H over H* without appealing to similarities between source and target (because that would push the argument toward an argument from analogy), then we need to assume that a well-established explanation of consciousness based on knowledge gathered in source is applicable also to the purported connection between publicly observable properties and consciousness in target. But whether such an explanatory connection is justified in target is precisely what we need to establish and therefore cannot be assumed.
To clarify this point, let us take source to denote neurotypical adult humans and assume, for the sake of argument, that we have a well-established theory built and tested on members of source. This theory can provide the means to determine if consciousness is indeed the best explanation for D. But because the theory was developed and tested on members of source, it is prima facie a theory of human-consciousness (or of source-consciousness). The problem arises when we want to apply that theory to a nonstandard system that exhibits some interesting publicly observable properties and argue that the best explanation for those properties is consciousness, based on the theory we have. This is problematic because those properties are explanatorily linked to consciousness in the human case: Are we justified in considering the human-based theory as explanatorily powerful in the case of the nonstandard (possibly also nonhuman) system, or not? (For a similar point, see Block 2002; Dung 2022; Usher et al. 2023.) This is precisely the epistemological problem of other consciousnesses, and assuming that we are in fact justified in drawing an explanatory connection between publicly observable data and consciousness in target, as P2 in the IBE-based argument implies, amounts to circular reasoning.
Thus the IBE-based argument for other consciousnesses does not seem to have the resources, in itself, to avoid the extrapolator’s circle. Again, this is the problem of explaining why we can gain knowledge about certain properties of target given limited knowledge about target, without assuming that those properties occur in target to begin with.
Can analogical reasoning succeed where IBE-based reasoning fails? Analogical reasoning does not seem to be necessarily affected by the extrapolator’s circle, because consciousness in target is not assumed but rather projected; that is, rather than being inferred in virtue of an explanatory link that is assumed to be valid at the beginning of the investigation, it is instead inferred in virtue of some similarities between source and target.
However, analogical reasoning fails to solve the problem of difference (i.e., the problem of explaining why certain unobserved similarities between source and target should be present, given the relevant dissimilarities between the two domains). This is due to the inevitable presence of relevant differences between source and target: How much difference can we accept without considering source and target too distant for the analogy to hold? And what should our criteria be for determining some threshold for answering this question? According to Steel (2007, 78–79), “any adequate account of extrapolation in heterogeneous populations must explain how extrapolation can be possible even when such differences are present.”
This is a well-known problem in the social and life sciences. For example, findings from animal studies in cancer research often fail to translate to humans (Mak, Evaniew, and Ghert 2014). Similarly, social policies and programs can fail when implemented in contexts that are partially different from the one in which the policy was previously (and successfully) implemented, as the case of the Tamil Nadu Integrated Nutrition Program shows (Cartwright and Hardie 2012; Marchionni and Reijula 2019).
When it comes to consciousness, the problem, then, is this: How can we be sure that the inevitable differences between neurotypical adult humans and nonstandard systems are not differences that make a difference?
Typically, defenders of the analogical approach to the epistemological problem of other minds (e.g., Hyslop and Jackson 1972; for a discussion, see Godfrey-Smith 2011) reply to this challenge by pointing out that the projectability of the property of interest is based on the fact that the property picks out a structural feature of reality, or, in other terms, a natural kind (i.e., a group of particulars bound together by how reality is, rather than by how observers think it is; Bird and Tobin 2008). If we drop a chicken’s egg and observe that it breaks, we do not necessarily need to drop a seagull’s egg, an ostrich’s egg, and so on, to infer that those eggs will most probably break if dropped. The egg’s fragility seems to be a property that depends on the egg’s material constitution, and the egg’s material constitution is a property reliably conserved across most, if not all, eggs. That is, natural kinds are supposed to be resistant enough to differences between domains and contexts so that properties of a member of the kind can be justifiably projected to other members of the kind.
This reply, based on the natural kind strategy, is also supposed to address another possible worry, namely, that analogical reasoning for other consciousnesses is ultimately based on a sample of one population and therefore cannot be informative. However, as Godfrey-Smith (2011, 39) puts it, in the case of inductive inferences referring to natural kinds, “one instance of an F would be enough, in principle, if you picked the right case and analyzed it well. Ronald Reagan is supposed to have said ‘once you’ve seen one redwood, you’ve seen them all.’”
The success of analogical reasoning in solving the problem of difference, and consequently the epistemological problem of other consciousnesses, thus seems to rest on whether the relationship between publicly observable properties and conscious properties is in fact a structural feature of reality, that is, on whether consciousness is indeed a natural kind.
Accordingly, to challenge the analogical inference, one could demonstrate that consciousness is not a natural kind. For example, it could be demonstrated that the concept “consciousness” does not pick out any single phenomenon in reality but rather a group of dissociable capacities and properties (Irvine 2012, 2017). However, most consciousness researchers implicitly operate under the assumption that consciousness is indeed a natural kind, as suggested by their attempts to uncover the neural basis of consciousness as a unitary phenomenon (e.g., Crick and Koch 1990; Melloni et al. 2021). Others embrace this view explicitly, and an active and ongoing research program has been leveraging this perspective (Bayne and Shea 2020; Bayne et al. 2024; Mckilliam 2024; Shea 2012; Shea and Bayne 2010).
Of course, this proclaimed consensus does not guarantee that consciousness is indeed a natural kind. To explain the core of the problem, let’s go back to P1 of the argument schema introduced earlier: “D is a collection of data showing that there is a systematic and reliable correlation between publicly observable properties and consciousness in the source domain (i.e., neurotypical adult humans).”
The specific criteria needed to ensure that the correlation of interest tracks a natural kind will vary depending on which theory of natural kinds one endorses (for discussion, see Boyd 2019; Khalidi 2018). Yet the minimal criterion for guaranteeing that the correlation can be validly projected is showing that it is not merely a spurious one; for example, this could be done by grounding the correlation in the presence of some mechanism that underlies the natural kind and showing that it generates similarity between members of the kind (but see Craver 2009 for a discussion). Of course, this would require identifying this mechanism, which might not be straightforward in the case of consciousness (Bayne and Shea 2020; Shea 2012). In any case, the link between publicly observable properties and consciousness should consistently and accurately reflect a structural feature of reality, not an observer-dependent artifact.
That is, the natural kind strategy should explain why the hypothesis of a direct, reliable connection between consciousness and publicly observable properties is better than other explanations. In other words, the hypothesis that those publicly observable properties track a natural kind (i.e., consciousness) must be preferred to the hypothesis that the correlation between consciousness and those publicly observable properties is spurious. To do this, one could argue that the “natural kind hypothesis” is more parsimonious, or coheres better with background knowledge, than the “spurious correlation hypothesis.” For example, following Sober (2000; see also Millikan 1999), we could claim that consciousness is a biological kind and therefore is projectable to systems similar to us in terms of shared evolutionary history. In this case, the hypothesis that the relationship between consciousness and publicly observable properties is conserved in target is more parsimonious than the hypothesis that analogous publicly observable properties are underwritten by conscious properties in one domain and unconscious properties in another domain. This is because the “consciousness hypothesis” requires only one character change (i.e., from creatures who do not have publicly observable properties correlated with consciousness to creatures who have such properties), while the “unconsciousness hypothesis” requires two character changes (i.e., from creatures who do not have those publicly observable properties to creatures who have those properties correlated with consciousness, on the one hand, and creatures who have those properties correlated with unconsciousness, on the other hand). This strategy thus builds on parsimony considerations to explain why the hypothesis that a target system is conscious is better than alternative hypotheses.
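The character-change counting behind this Sober-style parsimony comparison can be made concrete with a toy computation. The following sketch is purely illustrative and not part of the original argument: the tree, the state labels, and the function name are our own hypothetical constructions, chosen only to show how one hypothesis demands one evolutionary transition while the rival demands two.

```python
def count_changes(tree_edges, states):
    """Count edges of the tree whose endpoints carry different character states
    (the parsimony score of a hypothesis about how states are distributed)."""
    return sum(1 for parent, child in tree_edges if states[parent] != states[child])

# A minimal phylogeny: a remote ancestor, the common ancestor of the human
# and target lineages, and the two tips.
edges = [("root", "common"), ("common", "human"), ("common", "target")]

# "Consciousness hypothesis": markers correlated with consciousness arise
# once, on the branch leading to the common ancestor (one change).
h_conscious = {
    "root": "no markers",
    "common": "markers + consciousness",
    "human": "markers + consciousness",
    "target": "markers + consciousness",
}

# "Unconsciousness hypothesis": the same markers arise independently,
# underwritten by consciousness in humans and by unconscious processing
# in the target lineage (two changes).
h_unconscious = {
    "root": "no markers",
    "common": "no markers",
    "human": "markers + consciousness",
    "target": "markers + unconsciousness",
}

print(count_changes(edges, h_conscious))    # -> 1
print(count_changes(edges, h_unconscious))  # -> 2
```

On this toy tree the consciousness hypothesis scores one character change against two for its rival, which is exactly the parsimony asymmetry the argument in the text appeals to.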
The problem with analogical reasoning is that appealing to explanatory considerations of this sort, based on parsimony or coherence with background knowledge, pushes the limits of analogical reasoning by including elements that typically figure in abductive arguments. That is, analogical reasoning on its own cannot solve the problem; it must be combined with another type of reasoning. Specifically, it must be combined with IBE-based arguments, where the best explanation is justified precisely because of theoretical virtues like parsimony and coherence with background knowledge (Douglas 2013; Longino 1979; Lipton 2001; McMullin 2008; Psillos 2007).
To summarize, IBE-based arguments cannot solve the extrapolator’s circle; analogical arguments can, but they need to ensure that the projected property is a natural kind property to address the problem of difference. And to ensure that we are projecting a genuine natural kind property when we project consciousness from source to target, analogical arguments must resort to explanatory considerations generally used by IBE-based arguments, which makes them partially abductive: Analogy does not seem to solve the problem of difference on its own.
Thus both the IBE-based argument and the argument from analogy, individually taken, struggle with the challenges for successful extrapolations. For IBE-based reasoning to be successful, it needs to consider similarities between source and target to solve the extrapolator’s circle, whereas analogical reasoning needs to include explanatory considerations to solve the problem of difference. Both are incomplete in this context.
To cope with this problem, we now further systematize the approach already taken by some scholars in the field of consciousness science (e.g., Barron and Klein 2016; Birch 2022), suggesting that analogical abduction might solve this conundrum. We provide a more systematic philosophical argument to justify this praxis, and we claim that analogical reasoning and IBE-based reasoning are in fact complementary and could be merged to deliver a stronger form of reasoning to deal successfully with the epistemological problem of other consciousnesses.
4. Analogical abduction
This is how Schurz (2008, 217) defines analogical abduction:
Here one abduces a partially new concept and at the same time new laws which connect this concept with given (empirical) concepts, in order to explain the given law-like phenomenon. The concept is only partly new because it is analogical to familiar concepts, and this is the way in which this concept was discovered. So analogical abduction is driven by analogy.
The crucial point here is that abduction can confer justification on concepts posited and conjectured in the context of discovery, in virtue of their merits in explaining certain phenomena of interest. But the justification for positing such concepts is driven by analogical reasoning in the first place (Bartha 2010, 2019; Thagard 1988); namely, it is driven by the fact that the conjectured concepts are relevantly similar to some well-established concepts in our background knowledge.
We can formalize a general analogical abductive argument in the following way:
- P1. D is a collection of data about F-properties in source.Footnote 6
- P2. D* is a collection of data about Fʹ-properties in target.
- P3. There are relevant similarities between D and D* (from P1 and P2).

Also,

- P4. We have good models that explanatorily link F-properties to G-properties in source.
- P5. The hypothesis H that Gʹ-properties (which are similar to G-properties) occur in target, which is formulated in virtue of P3, would explanatorily link Fʹ-properties to Gʹ-properties in target.
- P6. H is better than any other hypothesis.

Therefore,

- C. H is probably true (it is probably true that Gʹ-properties occur in target).
If we apply this general schema to the epistemological problem of other consciousnesses, we have the following:
- P1. D is a collection of data about publicly observable properties in source.
- P2. D* is a collection of data about publicly observable properties in target.
- P3. There are relevant similarities between D and D* (from P1 and P2).

Also,

- P4. We have good models that explanatorily link similar publicly observable properties to phenomenal properties in source.
- P5. The hypothesis H that phenomenal properties are instantiated in target (which we justifiably formulate because of P3) would explanatorily link publicly observable properties and phenomenal properties in target, given that similar observable properties are explanatorily linked to phenomenal properties in source (from P3 and P4).Footnote 7
- P6. H is better than any other hypothesis.

Therefore,

- C. It is probably true that phenomenal properties are instantiated in target.
The concept of “consciousness-in-other-systems” (“target-consciousness”) is thus posited in virtue of the fact that we master, from the first-person perspective, the concept of “consciousness-in-us” (“source-consciousness”) and that we possess a scientifically informed model of an explanatory relationship between consciousness and some publicly observable properties in us. Given such a reasonably well-established model, once we detect similar publicly observable properties in target, it seems that the best hypothesis, in terms of parsimony and coherence with background knowledge,Footnote 8 is that those properties are also related to consciousness in target. This model of the relationship between consciousness and publicly observable properties in source does not need to be (or be derived from) a full-fledged theory of consciousness but can also be derived from a more general framework that captures only some features of consciousness and some of the publicly observable properties related to it. What is relevant for the argument to remain abductive is that those features be explanatorily connected with publicly observable properties and not purely correlated with them. In this sense, analogical abductive arguments can complement both “theory-heavy” (i.e., based on full-fledged and specific theories of consciousness; see Seth and Bayne Reference Seth and Bayne2022) and “theory-light” (Birch Reference Birch2022) or “test-based” approaches (Bayne et al. Reference Bayne, Seth, Massimini, Shepherd, Cleeremans, Fleming and Malach2024).
Thus we have implemented Schurz’s definition of analogical abduction on the epistemological problem of other consciousnesses: target-consciousness is the partially new concept, abduced to account for publicly observable properties in target, but it is only partially new because it is analogical to the familiar concept of consciousness-in-us, or source-consciousness, and in virtue of this analogy, the concept of target-consciousness has been discovered. In the preceding argument, P1, P2, and P3 speak directly to analogical considerations, whereas P4, P5, and P6 speak to explanatory ones.
A final clarification pertains to the nature of the explanatory link mentioned in P4. This discussion interacts with the issue of what counts as relevant observable properties for consciousness. A first interpretation of the nature of this link is that F-properties are causally explained by G-properties. In the case of consciousness, F-properties could correspond to functional and/or behavioral properties, while G-properties would be conscious properties. Under this interpretation, the primary evidence for consciousness is given by functional and/or behavioral properties, and, courtesy of analogical abductive arguments, consciousness in other systems would be posited to causally explain functional and behavioral properties observed in nonstandard systems. For example, a theory like the global neuronal workspace theory (GNWT) (Dehaene and Naccache Reference Dehaene and Naccache2001; Mashour et al. Reference Mashour, Roelfsema, Changeux and Dehaene2020) posits that certain functions, such as the ability to integrate and maintain information over time, can be performed only if information is broadcast into a global workspace that sustains and shares that information with many consumer systems. Access to such a global workspace just is consciousness. Those functional abilities can be evidenced by mental chaining of operations (Sackur and Dehaene Reference Sackur and Dehaene2009) or global violation detection (Bekinschtein et al. Reference Bekinschtein, Dehaene, Rohaut, Tadel, Cohen and Naccache2009), to name just a few, because these functions are causally connected to consciousness and cannot be performed unconsciously.
Thus, if a target system is able to perform tasks that require such abilities (e.g., it detects a global variation in a sequence of auditory stimuli), then GNWT proponents would be entitled to formulate analogical abductive arguments based on the fact that those behaviors are causally explained by accessibility of information to a global workspace, that is, by consciousness. Similarly, Birch’s theory-light approach assumes that the identification of some cognitive functions that are facilitated by consciousness suffices to extrapolate from the human case to nonhuman animals. Even without endorsing a specific theory, this approach can still take advantage of analogical abductive arguments by positing that the nature of the explanatory link between consciousness and a cluster of publicly observable properties (e.g., unlimited associative learning; Birch, Ginsburg, and Jablonka Reference Birch, Ginsburg and Jablonka2020) must be causal: Consciousness is assumed to be the cause of certain functions and behaviors in us; if similar functions and behaviors are observed in certain target systems, one could formulate an analogical abductive argument to justify the attribution of consciousness to those systems. Thus, independently of whether one operates under the tenets of a specific theory, if functional/behavioral properties are taken to be the primary evidence for consciousness, then the nature of the explanatory link in the analogical abductive argument will likely be causal.
But this causal explanatory link does not hold if implementation or mechanistic properties (e.g., neurobiological properties) are taken to be primary evidence for consciousness, because implementation properties are not causally explained by conscious properties. In this case, we can take advantage of a type of abduction that Harman (Reference Harman1986, 68) and Lipton (Reference Lipton2004) have called “inference from the best explanation.” In this instance, we do not observe what could be explained by the hypothesis; rather, we observe what could explain a fact, or a property, that we are licensed to posit in virtue of that observation. To use Lipton’s example, from the observation that it is cold outside, I am justified in inferring that the car will not start. The observation (i.e., “it is cold outside”) is not explained by the hypothesis that the car will not start; rather, as Lipton puts it, given relevant background knowledge, I am justified in inferring that the car will not start from the observation that it is cold outside because if it were true that the car will not start, the cold would be the best explanation for it (Lipton Reference Lipton2004, 64). In the inference from the best explanation, the observation is the explanation,Footnote 9 not the conjecture. Thus, in the case of other consciousnesses, inference from the best explanation could license the conclusion that publicly observed properties explain the consciousness of the target system because if it were true that the target system is conscious, those properties would be the best explanation of that fact.
Therefore, if we take the primary evidence for consciousness to be neurobiological/mechanistic properties, we can posit conscious properties (Gʹ-properties) in target because, given what we know about the neural underpinnings of consciousness in us (i.e., in source), if it were true that the system of interest in target is conscious, the observed neurobiological/mechanistic properties in target (Fʹ-properties) would be the best explanation of that fact.
For example, the recurrent processing theory (RPT) (Lamme Reference Lamme2006; Lamme and Roelfsema Reference Lamme and Roelfsema2000) maintains that consciousness corresponds to the implementation of information processing on local feedback loops (involving synaptic plasticity) in sensory areas of the brain. If a target system displayed information processing implemented through feedback loops, RPT proponents could formulate an analogical abductive argument to justify the attribution of consciousness to the target system. The explanatory link in their P4 would be constitutive or mechanistic, insofar as the explanatory model on which they rely (i.e., RPT) is based on a constitutive relationship between consciousness and the explanatorily relevant publicly observable property (i.e., feedback loops). Thus, if mechanistic/implementational properties are taken to be the primary evidence for consciousness, then the nature of the explanatory link between consciousness and publicly observable properties will likely be constitutive or mechanistic (Craver Reference Craver2007), rather than causal (for discussion, see Prasetya Reference Prasetya2021).
Thus different types of explanatory links between phenomenal properties and publicly observable properties can be given, depending on what researchers consider to be the relevant evidence for consciousness. Analogical abductive arguments will accordingly take different specific forms.Footnote 10
The analogical abductive approach is thus highly promising and can be easily applied to existing theories and accounts of consciousness. Yet it is not devoid of limitations, which we address in the next section, where we critically examine this strategy and its potential.
5. The prospects and limitations of analogical abduction
Despite its virtues, analogical abduction is not a silver bullet and faces several problems. As a starting point, we focus on the aforementioned challenges for successful extrapolations, which analogical abduction manages to meet, then continue to more contentious issues. First, we consider the extrapolator’s circle, namely, the problem of explaining why we are justified in believing that a target system is conscious without already assuming that the best explanation for source-consciousness works in target. As opposed to the IBE-based argument, analogical abduction avoids this problem because the link between observable properties and consciousness in target is not simply assumed; rather, it is justified in virtue of the similarity between properties in target and certain properties in source (thus, this strategy goes beyond the IBE-based one by relying on analogical reasoning). Again, analogical considerations drive the abduction process, and that is why the target system is not considered conscious simply in virtue of the assumption that our best explanation for source-consciousness works for target too. Rather, the hypothesis that the target system might be conscious is grounded in the similarity between target and source. The key point is that this similarity justifies the formulation of the hypothesis that the target system might be conscious—a hypothesis that could not be justifiably formulated without these analogical considerations.
Second is the problem of difference: How can we determine whether a system is similar enough to us to justify extrapolations about its consciousness? A possible solution can be based on Gentner’s (Reference Gentner1983) structure-mapping theory, which influenced Guala’s (Reference Guala2005, 180) thesis that extrapolation is possible only when the source and target systems “belong to structurally similar mechanisms,” as well as Steel’s (Reference Steel2007, Reference Steel2010) notion of comparative process tracing (see also Guala Reference Guala2010). According to this approach, similarity is structurally grounded: Extrapolations are justified insofar as the mechanistic processes or the properties in target have the same structure as the relevant processes or properties in source. Translated to consciousness science, this would mean that extrapolations about consciousness are justified only when there is a structure-preserving mapping between the consciousness-related properties in source and the properties observed in target. This means that if we had a compelling model of source-consciousness that explanatorily relates consciousness to a (causal or constitutive) structure of publicly observable properties (thereby going beyond the analogical strategy and relying on aspects of the IBE-based one), we could project consciousness to any system that exhibits the same structure of publicly observable properties, independently of all the other possible differences.
Analogical abductions thus inherit the inferential relevance of explanatory considerations from the IBE-based argument and the importance of structural similarity for the projectability of the property of interest from the argument from analogy.
Admittedly, however, the analogical abductive strategy is limited, because structural similarity is a property that comes in degrees, and it is unclear what level of similarity suffices to justify extrapolations. To make the extrapolative leap, we should define a “similarity threshold” above which the inference is justified. Some have suggested that this threshold might be based on a cluster of similar properties between source and target (Birch Reference Birch2022), but this still requires a definition of the minimal size of the cluster that justifies the extrapolative leap (Shevlin Reference Shevlin2021). Moreover, although we consider this strategy to be promising for extrapolating consciousness to biological creatures, it might be more difficult to apply to artificial systems, mainly for three reasons: In the case of artificial consciousness, (1) we cannot rely on evolutionary similarities, (2) metaphysical debates concerning the substrate neutrality of consciousness are more prominent (Seth Reference Seth2024; Shiller Reference Shiller2024), and (3) our antecedent knowledge that the cluster of markers associated with consciousness in humans has been deliberately designed to be displayed by an artificial entity might undermine the view that this cluster tracks consciousness in this domain.
A third problem for the analogical abduction strategy is that it presently has a limited “epistemic force.” In the philosophical literature (Calzavarini and Cevolani Reference Calzavarini and Cevolani2022; Schurz Reference Schurz2008), abductive arguments are considered to be strong if they justify the acceptance of a hypothesis as true, whereas they are considered to be weak if they only select hypotheses as interesting conjectures that require further empirical testing.
Given the current disagreement in consciousness science and the lack of a well-established, and specific, theory of consciousness (Francken et al. Reference Francken, Beerendonk, Molenaar, Fahrenfort, Kiverstein, Seth and van Gaal2022; Yaron et al. Reference Yaron, Melloni, Pitts and Mudrik2022), analogical abductive arguments for nonstandard systems seem to be weak (Baetu Reference Baetu2024). If so, they can be used only to justify formulating the hypothesis that a certain target system is conscious, rather than accepting the hypothesis that it actually is conscious. Because we already relied on the relevant evidence to build the analogical abductive argument and formulate the hypothesis that the system is conscious, we seem to lack the methodological basis for passing from hypothesis formulation to hypothesis acceptance. Thus, until a better understanding of consciousness is available, this approach can offer only a partial solution for the epistemological problem of other consciousnesses.
A possible way to establish such a methodological basis might lie in the iterative natural kind approach (Bayne et al. Reference Bayne, Seth, Massimini, Shepherd, Cleeremans, Fleming and Malach2024; Mckilliam Reference Mckilliam2024), because its iterative nature can allow a gradual increase in our confidence in whether the target system is conscious (see also Baetu Reference Baetu2024). However, it is not clear whether this strategy can deliver strong analogical abductive inferences, rather than just less weak inferences.
In the meantime, analogical abductive arguments could still be helpful in delivering weakly justified extrapolations. In several decision-making contexts, especially those related to substantial ethical and societal implications, it is reasonable to lower the evidential bar and enact evidence-based policies that can preempt harm, even if the evidence is only partial. As Steel (cited in Birch Reference Birch2017, 4) puts it, “Policy [should] not be susceptible to paralysis by scientific uncertainty” (see also Birch Reference Birch2023; Johnson Reference Johnson and Syd2016). Analogical abductive arguments can serve as the tools to navigate the uncertainty about consciousness in nonstandard systems and to provide attributions of consciousness that, albeit weak, can still be sufficient for informed decision-making.
To summarize, analogical abductive arguments currently face a speed/accuracy trade-off: They can deliver strongly justified and accurate conclusions either via an established theory of consciousness or via the iterative natural kind strategy; both options, however, require a long process that is unlikely to be completed in the near future. Yet consciousness science is already needed to inform decision-making and regulations about various nonstandard systems. In the short run, analogical abductive arguments can provide some degree of justification for attributions of consciousness to these systems, but because these attributions can be only weakly justified, the risk of inaccurate attributions is high.
6. Conclusion
In this article, we maintain that arguments based on analogical abduction capture and systematize the reasoning strategy employed by many consciousness researchers interested in attributing consciousness to nonstandard systems and that this strategy is indeed the most promising for extrapolating the presence of consciousness in such systems. Analogical abductive arguments do better than standard analogical arguments and IBE-based arguments with respect to the two challenges that any extrapolative inference must meet, namely, the extrapolator’s circle and the problem of difference. However, further research is needed to allow analogical abductive arguments to overcome some of the limitations they still face. For example, it is not clear what degree of similarity along consciousness-relevant dimensions is sufficient to justify strong abductions. Moreover, it seems possible that different degrees of similarity will be required, depending on the type of conclusion in which we are interested: Inferring that a system is conscious (rather than not) might require a lower degree of similarity than a conclusion about what the system is conscious of.
Other problems, although not directly related to the structure of analogical abductive arguments, might limit their applicability: The current status of consciousness science still does not provide a clear view of what type of publicly observable properties are relevant to consciousness specifically, and consensus on the best theory of source-consciousness is far from being established.
Thus, the applicability of analogical abductions is currently limited. However, we maintain that analogical abductions still provide the best available justificatory option for inferring consciousness in nonstandard systems and can be useful in deriving weakly justified conclusions that can inform practical decision-making. We have argued that analogical abductive arguments provide the blueprint for justified attributions of consciousness to nonstandard systems. This is important because a clarification of the argumentative structure underlying our ascriptions of consciousness can help consciousness scholars to assess the soundness of existing arguments for attributing consciousness to nonstandard systems and to formulate stronger arguments of this type.
Acknowledgements
N.N. is funded by an Azrieli International Postdoctoral Fellowship. L.M. is CIFAR Tanenbaum Fellow in the Brain, Mind, and Consciousness program. We thank all members of the Mudrik Lab at Tel Aviv University for their comments on previous drafts of this article, in particular Gennadiy Belonosov, Rony Hirschhorn, Maor Schreiber, and Amir Tal. N.N. thanks Andy McKilliam for reading and providing comments on an early version of this work. A previous version of this article was presented at ASSC27, where we received very helpful comments and feedback from the audience.