
Extrapolating Other Consciousnesses: The Prospects and Limits of Analogical Abduction

Published online by Cambridge University Press: 27 August 2025

Niccolò Negro*
School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel

Liad Mudrik
School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; Brain, Mind, and Consciousness Program, Canadian Institute for Advanced Research, Toronto, ON, Canada

*Corresponding author: Niccolò Negro; Email: niccolonegro@tauex.tau.ac.il

Abstract

Advances in animal sentience research, neural organoids, and artificial intelligence reinforce the relevance of justifying attributions of consciousness to nonstandard systems. Clarifying the argumentative structure behind these attributions is important for evaluating their validity. This article addresses this issue, concluding that analogical abduction—a form of reasoning combining analogical and abductive elements—is the strongest method for extrapolating consciousness from humans to nonstandard systems. We argue that the argument from analogy and inference to the best explanation, individually taken, do not meet the criteria for successful extrapolations, whereas analogical abduction offers a promising approach despite limitations in current consciousness science.

Information

Type: Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Recent advances in consciousness research, animal sentience research, neural organoids, and artificial intelligence (AI) have made the epistemological problem of other conscious minds—what, if anything, justifies the attribution of phenomenal properties to other entities—more relevant than ever. Because finding consciousness in such systems is likely to have significant ethical and societal implications (Birch 2024; Levy 2024; Shepherd 2018; Siewert 1998), this problem has become especially pressing. Accordingly, tests for consciousness are repeatedly being discussed and suggested (Andrews 2024; Bayne et al. 2024; Dung 2022; Kazazian, Edlow, and Owen 2024; Negro and Mudrik 2025; Schneider 2019) in an attempt to better understand the distribution of consciousness in nontrivial cases, both within the human population and outside of it. These populations include, among others, AI systems, nonhuman animals, neural organoids, infants, and fetuses (see table 1 for references to relevant literature on each population). Here we refer to these populations as nonstandard systems and frame the epistemological problem of other conscious minds in terms of other consciousnesses, asking how the current science of consciousness justifies generalizations about consciousness in these nonstandard systems.[1]

Table 1. Key papers on the possibility of consciousness in nonstandard systems

Population | Key papers
AI systems | Butlin et al. (2023), Chalmers (2023), Dehaene, Lau, and Kouider (2017), Dung (2023), Elamrani and Yampolskiy (2019), Hildt (2022), Schneider (2019), Sills et al. (2018)
Nonhuman animals | Andrews (2024), Barron and Klein (2016), Birch (2022), Birch, Schnell, and Clayton (2020), Carruthers (2019), Dung (2022), Halina, Harrison, and Klein (2022), Tye (2017), Veit (2022)
Neural organoids | Bayne, Seth, and Massimini (2020), Birch and Browning (2021), Croxford and Bayne (2024), Hameroff and Muotri (2020), Jeziorski et al. (2023), Lavazza and Massimini (2018), Owen et al. (2023)
Infants | Dehaene-Lambertz (2024), Passos-Ferreira (2023, 2024)
Fetuses | Bayne et al. (2023), Ciaunica, Safron, and Delafield-Butt (2021), Frohlich et al. (2023), Moser et al. (2021)

Research on consciousness tests focuses mostly on the types of data (e.g., markers; Andrews 2024) that can serve as evidence for attributing consciousness to various target systems. Here we address instead the complementary issue of defining the reasoning that underlies justified attributions of consciousness to different target systems, independently of the type of evidence one decides to exploit.

Thus the driving question of this article targets the logical structure of the reasoning we employ to address the epistemological problem of other consciousnesses: What is the strongest inferential machinery we could use to justify the attribution of conscious properties to nonstandard systems? Agreement on the logical basis of our attribution practices is needed to clarify the argumentative structure that consciousness researchers ought to employ when concluding that a system has, or does not have, phenomenal properties. This is important both for assessing existing arguments for consciousness in nonstandard systems and for formulating future arguments of this sort.

Traditionally discussed within the more general problem of other minds, the epistemological problem of other consciousnesses has been approached through two different forms of reasoning: analogical reasoning and reasoning from the inference to the best explanation (IBE-based reasoning, for short). These are further rooted in two different inferential schemata: inductive inference for analogical reasoning and abductive inference for IBE-based reasoning. In the philosophical literature, these types of reasoning have often been presented as prima facie competing and incompatible. For example, Hyslop (1995) champions analogical reasoning while exposing the flaws of IBE-based reasoning, whereas Pargetter (1984) does the opposite.

This attitude is partly mirrored in the current consciousness science literature (for a related discussion focused on the science of animal consciousness, see Heyes 2008): On the one hand, scholars discussing and developing different consciousness tests (e.g., the command-following test; for discussion, see Bayne et al. 2024; Owen et al. 2006) extrapolate consciousness via analogical reasoning; on the other hand, Tye (2017) suggests an inferential strategy that can be seen as similar to IBE-based reasoning, while Chalmers (1996) and Passos-Ferreira (2023) explicitly adopt it.

However, as Melnyk (1994) has suggested, these two strategies are not logically incompatible, and indeed, in the current neuroscience of consciousness, many attributions of consciousness seem to incorporate aspects of both analogical reasoning and IBE-based reasoning (e.g., Barron and Klein 2016). Birch (2022, 134) puts it explicitly: “What we should do … is build up a list of the behavioural, functional and anatomical similarities between humans and non-human animals, and use arguments from analogy and inferences to the best explanation to settle disputes about consciousness” (emphasis added).

Here we further develop this approach and provide a philosophical backbone to justify it, suggesting that the conjunction of analogical reasoning and IBE-based reasoning is the most promising approach when trying to determine which systems and organisms are conscious. We propose that the argument from analogy and the IBE-based argument are compatible and complementary and that they can be fruitfully combined to deal with the epistemological problem of other consciousnesses. We do so by introducing analogical abductive arguments and by showing that they can be used to overcome the problems that afflict analogical reasoning and IBE-based reasoning. We accordingly aim to provide a general structure for the “inferential machinery” (i.e., argument) that can be used to address the epistemological problem of other consciousnesses, independently of the specific fuel (i.e., test or evidence) one can put in that machine.

Hence our project has both descriptive and normative components. The descriptive goal of the project is to substantiate analogical abduction as a way of capturing and systematizing a type of inferential practice that relies on both analogical and IBE-based strategies for extrapolating consciousness. We do so by introducing a novel argument schema that incorporates elements of both strategies. The normative aspect of our analysis adds a further layer: We argue that analogical abduction is the most compelling inferential strategy for dealing with the epistemological problem of other consciousnesses. Accordingly, we suggest that consciousness science would benefit from adopting this form of reasoning to systematically build and assess arguments for the attribution of consciousness to nonstandard systems.

Moreover, we take the epistemological problem of other consciousnesses to be concerned primarily with the distribution of consciousness problem (i.e., is this system conscious or not?), rather than with the quality of consciousness problem (i.e., how is the system conscious/what is the system conscious of?) (Andrews 2024), so we will frame our discussion to address the distribution question. However, analogical abductive arguments can be leveraged to address the quality question as well.

In section 2, we present the two traditional forms of reasoning identified in the philosophical literature to approach the epistemological problem of other consciousnesses, namely, the argument from analogy and the IBE-based argument. In section 3, we introduce two challenges that any form of extrapolative reasoning must meet to be successful. In section 4, we show that analogical abduction is a promising account for dealing with the epistemological problem of other consciousnesses. In section 5, we consider some limitations of this account and conclude that, although analogical abductive arguments cannot currently provide a definitive solution to the epistemological problem of other consciousnesses, they have the potential to do so and therefore constitute our best available option.

2. The epistemological problem of other consciousnesses and the two traditional forms of reasoning to approach it

The epistemological problem of other consciousnesses can be seen as an instance of a more general philosophical problem, the problem of extrapolation (Baetu 2024): How can one justifiably generalize from an epistemically privileged domain to a less epistemically privileged domain (Guala 2010; Steel 2007; Thagard 1988)? Following the standard use in philosophy of science, we refer to the epistemically privileged domain as the source domain (source) and to the domain of interest as the target domain (target).

Depending on the scope of the extrapolative argument for other consciousnesses, source and target can be identified in different ways. For the purposes of this article, source will normally refer to the domain of neurotypical adult humans, from which the science of consciousness gathers most of its knowledge and on which theories of consciousness are generally built and tested (Mudrik et al. 2023, 2025; Seth and Bayne 2022; Yaron et al. 2022), whereas target will normally refer to nonstandard systems in general.

Extrapolations in consciousness science would be fairly easy to justify if models of source-consciousness (i.e., human-consciousness) were built in a context-independent way, that is, if the claims made about consciousness were evidently true irrespective of the characteristics of the source population. For example, theories of consciousness could be formulated in terms of causal powers or capacities, which are by definition context-independent (Cartwright 1994; Hiddleston 2005; Steel 2007, chap. 5).[2] If this were the case, theories of consciousness could explain consciousness by pointing at universal laws and therefore by employing explanatory constructs that are not dependent on the particular domain of applicability (in the same way as gravitational laws are supposed to apply to apples as well as to distant planets). This would make theories of consciousness conform to the requirement of universality (Kanai and Fujisawa 2024), rendering explanations in consciousness science closer to explanations based on universal laws, as in physics. For example, the integrated information theory (Albantakis et al. 2023; Tononi et al. 2016) aspires to provide such a context-independent explanatory structure, given that it seeks to explain consciousness by relying on the notion of cause–effect powers of the physical world (but see Lau and Michel 2019; Mediano et al. 2022; Merker, Williford, and Rudrauf 2021 for criticisms of its ability to do so). Nevertheless, the theory’s axioms are based on phenomenological explorations of human experience, so the foundation of the theory might still be context-dependent, despite the proclaimed aspiration (see Bayne 2018).

Independently of how specific context-independent explanations of consciousness are constructed, the more general point is that it is questionable whether explanations in the biological sciences should indeed follow the same explanatory practices used in physics (Craver 2002, 2007), especially because many biological phenomena seem to be domain-dependent (e.g., an explanation of digestion in humans does not apply to cows, and theories of protein synthesis might not generalize to extraterrestrial life). It seems to be an open question, then, whether consciousness is the type of phenomenon that should be accounted for by universal generalizations or whether, instead, its explanation should be domain-dependent.

This article surveys some possibilities for attributing consciousness to other systems via extrapolative inferences, even if the explanation of consciousness indeed turns out to be context-sensitive and not universal. We focus on the two prima facie different and alternative strategies suggested in the philosophical literature for formulating extrapolative inferences: IBE-based reasoning (Pargetter 1984) and analogical reasoning (Hyslop 1995; Hyslop and Jackson 1972); for a general introduction, see Avramides (2000).

Both strategies seem well suited for tackling the epistemological problem of other consciousnesses, because they build on ampliative inferences, in which the conclusion conveys more information than the premises. We briefly present them in the following two sections.

2.1. IBE-based reasoning

IBE-based reasoning exploits abductive inferences, namely, inferences drawn in virtue of the explanatory power of the inferred hypothesis (Lipton 2004; Psillos 2002). The standard example is to infer, from the observation of wet streets, that it might have rained last night, because this conjecture is the best explanation for the evidence.

This argumentative strategy can be applied to the epistemological problem of other consciousnesses by noticing that some publicly observable properties of a system of interest are best explained by the hypothesis that consciousness is required for their instantiation.[3] The argument (adapted from Psillos 2002, 614) can be formalized as follows:

P1. D is a collection of data about publicly observable properties of system S in target.

P2. The hypothesis H that S is conscious explains D (would, if true, explain D).

P3. No other hypothesis explains D as well as H does.

Therefore,

C. H is probably true (i.e., S probably is conscious).

2.2. Analogical reasoning

Although there is much debate on how to properly characterize analogical arguments (for a comprehensive discussion, see Bartha 2010, 2019), a general enough form of analogical reasoning can be captured in the following way: We are justified in inferring that two systems are similar along certain unobserved dimensions if they are also similar with respect to some observed dimensions, given prior knowledge that, in a given domain, the observed and unobserved dimensions of interest co-occur (Bartha 2019; Hesse 1965).

This reasoning can be applied to the case of other consciousnesses: Given that I know that certain brain structures and activity, and/or certain functions and behaviors, reliably and systematically correlate with certain conscious properties in us (i.e., neurotypical adult humans), I can infer that similar conscious properties will be present in a system with brain structure and dynamics, and/or functions and behaviors, analogous to ours.

The analogical argument for other consciousnesses can be formalized as follows:

P1. D is a collection of data showing that there is a systematic and reliable correlation between publicly observable properties and consciousness in the source domain (i.e., neurotypical adult humans).

P2. D* is a collection of data about publicly observable properties of system S in target.

P3. D* suggests that publicly observable properties of S are similar to those of source (i.e., neurotypical adult humans).

Therefore,

C. S probably is conscious.

3. How to extrapolate successfully

What does it take for an extrapolation to be successfully implemented? Following Steel (2007), we posit that any successful extrapolation must solve two problems: first, the extrapolator’s circle—how to say something informative about the phenomenon in target given only partial knowledge of the target system and without assuming the presence of the phenomenon in target—and second, the problem of difference, or how to justify inferences about the phenomenon in target given relevant dissimilarities between source and target (this pair of problems was originally introduced by LaFollette and Shanks 1996).

We first examine how IBE-based reasoning might deal with these challenges. This strategy is not directly threatened by the problem of difference because it does not explicitly rely on similarities between source and target. Moreover, it can solve the problem of difference by denying that differences between source and target are explanatorily relevant for consciousness. This requires our best explanation of source-consciousness to successfully discriminate between properties (and their dimensions) that are relevant for consciousness and properties (and their dimensions) that are irrelevant for consciousness. Arguably, this requirement is problematic given the current theoretical landscape, as it is questionable whether it holds for any of the presently available explanations or theories of consciousness. However, this is a flaw of current theories, not of the argumentative strategy itself, so we set it aside for now; let us assume that this problem can be solved by the IBE-based approach.

Even if so, we argue that this strategy fails to solve the problem of the extrapolator’s circle. To explain why this is the case, we should first clarify exactly which cog in the IBE-based argument for other consciousnesses links what we know about source to what we say about target.

This link is found in P2 in the aforementioned schema for IBE-based arguments: “The hypothesis H that S is conscious explains D (would, if true, explain D).” Here D refers to data about a system in target, more precisely about publicly observable properties of the system, but why are we justified in connecting such data to consciousness? In other words, why is H better than an alternative hypothesis (H*) that posits that those data can be explained by unconscious processes? If we want to select H over H* without appealing to similarities between source and target (because that would push the argument toward an argument from analogy), then we need to assume that a well-established explanation of consciousness based on knowledge gathered in source is applicable also to the purported connection between publicly observable properties and consciousness in target. But whether such an explanatory connection is justified in target is precisely what we need to establish and therefore cannot be assumed.

To clarify this point, let us take source to denote neurotypical adult humans and assume, for the sake of argument, that we have a well-established theory built and tested on members of source. This theory can provide the means to determine if consciousness is indeed the best explanation for D. But because the theory was developed and tested on members of source, it is prima facie a theory of human-consciousness (or of source-consciousness). The problem arises when we want to apply that theory to a nonstandard system that exhibits some interesting publicly observable properties and argue that the best explanation for those properties is consciousness, based on the theory we have. This is problematic because those properties are explanatorily linked to consciousness in the human case: Are we justified in considering the human-based theory as explanatorily powerful in the case of the nonstandard (possibly also nonhuman) system, or not? (For a similar point, see Block 2002; Dung 2022; Usher et al. 2023.) This is precisely the epistemological problem of other consciousnesses, and assuming that we are in fact justified in drawing an explanatory connection between publicly observable data and consciousness in target, as P2 in the IBE-based argument implies, amounts to circular reasoning.

Thus the IBE-based argument for other consciousnesses does not seem to have the resources, in itself, to avoid the extrapolator’s circle. Again, this is the problem of explaining why we can gain knowledge about certain properties of target given limited knowledge about target, without assuming that those properties occur in target to begin with.

Can analogical reasoning succeed where IBE-based reasoning fails? Analogical reasoning does not seem to be necessarily affected by the extrapolator’s circle, because consciousness in target is not assumed but rather projected; that is, rather than being inferred in virtue of an explanatory link that is assumed to be valid at the beginning of the investigation, it is instead inferred in virtue of some similarities between source and target.[4]

However, analogical reasoning fails to solve the problem of difference (i.e., the problem of explaining why certain unobserved similarities between source and target should be present, given the relevant dissimilarities between the two domains). This is due to the inevitable presence of relevant differences between source and target: How much difference can we accept without considering source and target too distant for the analogy to hold? And what should our criteria for determining some threshold for answering this question be? According to Steel (2007, 78–79), “any adequate account of extrapolation in heterogeneous populations must explain how extrapolation can be possible even when such differences are present.”

This is a well-known problem in the social and life sciences. For example, the translational power of animal studies in cancer research to humans is limited (Mak, Evaniew, and Ghert 2014). Similarly, social policies and programs can fail when implemented in contexts that are partially different from the one in which the policy was previously (and successfully) implemented, as the case of the Tamil Nadu Integrated Nutrition Program shows (Cartwright and Hardie 2012; Marchionni and Reijula 2019).[5]

When it comes to consciousness, the problem, then, is this: How can we be sure that the inevitable differences between neurotypical adult humans and nonstandard systems are not differences that make a difference?

Typically, defenders of the analogical approach to the epistemological problem of other minds (e.g., Hyslop and Jackson 1972; for a discussion, see Godfrey-Smith 2011) reply to this challenge by pointing out that the projectability of the property of interest is based on the fact that the property picks out a structural feature of reality or, in other terms, a natural kind (i.e., a group of particulars bound together by how reality is, rather than by how observers think it is; Bird and Tobin 2008). If we drop a chicken’s egg and observe that it breaks, we do not necessarily need to drop a seagull’s egg, an ostrich’s egg, and so on, to infer that those eggs will most probably break if dropped. The egg’s fragility seems to be a property that depends on the egg’s material constitution, and the egg’s material constitution is a property reliably conserved across most, if not all, eggs. That is, natural kinds are supposed to be resistant enough to differences between domains and contexts so that properties of a member of the kind can be justifiably projected to other members of the kind.

This reply, based on the natural kind strategy, is also supposed to address another possible worry, namely, that analogical reasoning for other consciousnesses is ultimately based on a sample of one population and therefore cannot be informative. However, as Godfrey-Smith (2011, 39) puts it, in the case of inductive inferences referring to natural kinds, “one instance of an F would be enough, in principle, if you picked the right case and analyzed it well. Ronald Reagan is supposed to have said ‘once you’ve seen one redwood, you’ve seen them all.’”

The success of analogical reasoning in solving the problem of difference, and consequently the epistemological problem of other consciousnesses, thus seems to rest on whether the relationship between publicly observable properties and conscious properties is in fact a structural feature of reality, that is, on whether consciousness is indeed a natural kind.

Accordingly, to challenge the analogical inference, one could demonstrate that consciousness is not a natural kind. For example, it could be demonstrated that the concept “consciousness” does not pick out any single phenomenon in reality but rather a group of dissociable capacities and properties (Irvine 2012, 2017). However, most consciousness researchers implicitly operate under the assumption that consciousness is indeed a natural kind, as suggested by their attempts to uncover the neural basis of consciousness as a unitary phenomenon (e.g., Crick and Koch 1990; Melloni et al. 2021). Others embrace this view explicitly, and an active and ongoing research program has been leveraging this perspective (Bayne and Shea 2020; Bayne et al. 2024; Mckilliam 2024; Shea 2012; Shea and Bayne 2010).

Of course, this proclaimed consensus does not guarantee that consciousness is indeed a natural kind. To explain the core of the problem, let’s go back to P1 of the argument schema introduced earlier: “D is a collection of data showing that there is a systematic and reliable correlation between publicly observable properties and consciousness in the source domain (i.e., neurotypical adult humans).”

The specific criteria needed to ensure that the correlation of interest tracks a natural kind will vary depending on which theory of natural kinds one endorses (for discussion, see Boyd 2019; Khalidi 2018). Yet, the minimal criterion for guaranteeing that the correlation can be validly projected is showing that it is not merely a spurious one; for example, this could be done by grounding the correlation in the presence of some mechanism that underlies the natural kind and showing that it generates similarity between members of the kind (but see Craver 2009 for a discussion). Of course, this would require identifying this mechanism, which might not be straightforward in the case of consciousness (Bayne and Shea 2020; Shea 2012). In any case, the link between publicly observable properties and consciousness should consistently and accurately reflect a structural feature of reality, not an observer-dependent artifact.

That is, the natural kind strategy should explain why the hypothesis of a direct, reliable connection between consciousness and publicly observable properties is better than other explanations. In other words, the hypothesis that those publicly observable properties track a natural kind (i.e., consciousness) must be preferred to the hypothesis that the correlation between consciousness and those publicly observable properties is spurious. To do this, one could argue that the “natural kind hypothesis” is more parsimonious, or coheres better with background knowledge, than the “spurious correlation hypothesis.” For example, following Sober (2000; see also Millikan 1999), we could claim that consciousness is a biological kind and therefore is projectable to systems similar to us in terms of shared evolutionary history. In this case, the hypothesis that the relationship between consciousness and publicly observable properties is conserved in target is more parsimonious than the hypothesis that analogous publicly observable properties are underwritten by conscious properties in one domain and unconscious properties in another domain. This is because the “consciousness hypothesis” requires only one character change (i.e., from creatures who do not have publicly observable properties correlated with consciousness to creatures who have such properties), while the “unconsciousness hypothesis” requires two character changes (i.e., from creatures who do not have those publicly observable properties to creatures who have those properties correlated with consciousness, on the one hand, and creatures who have those properties correlated with unconsciousness, on the other hand). This strategy thus builds on parsimony considerations to explain why the hypothesis that a target system is conscious is better than alternative hypotheses.
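To make the character counting behind this parsimony comparison concrete, here is a minimal sketch (our toy rendering, not Sober’s formal phylogenetic model; the hypothesis labels and the change counts simply restate the example above):

```python
# Toy parsimony comparison (illustrative only). Each hypothesis is modeled
# as the list of independent character changes it must posit on the
# evolutionary tree; parsimony favors the hypothesis positing fewer.

consciousness_hypothesis = [
    # One change: observable properties correlated with consciousness arise
    # once, in the shared lineage, and are inherited by humans and target.
    "properties correlated with consciousness appear in the shared lineage",
]

unconsciousness_hypothesis = [
    # Two changes: the same observable properties arise twice, with
    # different underpinnings in each lineage.
    "properties correlated with consciousness appear in the human lineage",
    "properties correlated with unconscious processing appear in the target lineage",
]

def required_changes(hypothesis):
    """Parsimony score: number of independent character changes posited."""
    return len(hypothesis)

print(required_changes(consciousness_hypothesis))    # 1
print(required_changes(unconsciousness_hypothesis))  # 2
# Fewer posited changes: the consciousness hypothesis is more parsimonious.
```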

The problem with analogical reasoning is that appealing to explanatory considerations of this sort, based on parsimony or coherence with background knowledge, pushes the limits of analogical reasoning by including elements that typically figure in abductive arguments. That is, analogical reasoning on its own cannot solve the problem; it must be combined with another type of reasoning. Specifically, it must be combined with IBE-based arguments, where the best explanation is justified precisely because of theoretical virtues like parsimony and coherence with background knowledge (Douglas 2013; Longino 1979; Lipton 2001; McMullin 2008; Psillos 2007).

To summarize, IBE-based arguments cannot solve the extrapolator’s circle; analogical arguments can, but they need to ensure that the projected property is a natural kind property to address the problem of difference. And to ensure that we are projecting a genuine natural kind property, when we project consciousness from source to target, analogical arguments must resort to explanatory considerations generally used by IBE-based arguments, which make them partially abductive: Analogy does not seem to solve the problem of difference on its own.

Thus both the IBE-based argument and the argument from analogy, individually taken, struggle with the challenges for successful extrapolations. For IBE-based reasoning to be successful, it needs to consider similarities between source and target to solve the extrapolator’s circle, whereas analogical reasoning needs to include explanatory considerations to solve the problem of difference. Both are incomplete in this context.

To cope with this problem, we now further systematize the approach already taken by some scholars in the field of consciousness science (e.g., Barron and Klein 2016; Birch 2022), suggesting that analogical abduction might solve this conundrum. We provide a more systematic philosophical argument to justify this praxis, and we claim that analogical reasoning and IBE-based reasoning are in fact complementary and could be merged to deliver a stronger form of reasoning to deal successfully with the epistemological problem of other consciousnesses.

4. Analogical abduction

This is how Schurz (2008, 217) defines analogical abduction:

Here one abduces a partially new concept and at the same time new laws which connect this concept with given (empirical) concepts, in order to explain the given law-like phenomenon. The concept is only partly new because it is analogical to familiar concepts, and this is the way in which this concept was discovered. So analogical abduction is driven by analogy.

The crucial point here is that abduction can confer justification on concepts posited and conjectured in the context of discovery, in virtue of their merits in explaining certain phenomena of interest. But the justification for positing such concepts is driven by analogical reasoning in the first place (Bartha 2010, 2019; Thagard 1988); namely, it is driven by the fact that the conjectured concepts are relevantly similar to some well-established concepts in our background knowledge.

We can formalize a general analogical abductive argument in the following way:

P1. D is a collection of data about F-properties in source.[6]

P2. D* is a collection of data about Fʹ-properties in target.

P3. There are relevant similarities between D and D* (from P1 and P2).

Also,

P4. We have good models that explanatorily link F-properties to G-properties in source.

P5. The hypothesis H that Gʹ-properties (which are similar to G-properties) occur in target, which is formulated in virtue of P3, would explanatorily link Fʹ-properties to Gʹ-properties in target.

P6. H is better than any other hypothesis.

Therefore,

C. H is probably true (it is probably true that Gʹ-properties occur in target).

If we apply this general schema to the epistemological problem of other consciousnesses, we have the following:

P1. D is a collection of data about publicly observable properties in source.

P2. D* is a collection of data about publicly observable properties in target.

P3. There are relevant similarities between D and D* (from P1 and P2).

Also,

P4. We have good models that explanatorily link similar publicly observable properties to phenomenal properties in source.

P5. The hypothesis H that phenomenal properties are instantiated in target (which we justifiably formulate because of P3) would explanatorily link publicly observable properties and phenomenal properties in target, given that similar observable properties are explanatorily linked to phenomenal properties in source (from P3 and P4).[7]

P6. H is better than any other hypothesis.

Therefore,

C. It is probably true that phenomenal properties are instantiated in target.

The concept of “consciousness-in-other-systems” (“target-consciousness”) is thus posited in virtue of the fact that we master, from the first-person perspective, the concept of “consciousness-in-us” (“source-consciousness”) and that we possess a scientifically informed model of an explanatory relationship between consciousness and some publicly observable properties in us. Given such a reasonably well-established model, once we detect similar publicly observable properties in target, it seems that the best hypothesis, in terms of parsimony and coherence with background knowledge,[8] is that those properties are also related to consciousness in target. This model of the relationship between consciousness and publicly observable properties in source does not need to be (or be derived from) a full-fledged theory of consciousness but can also be derived from a more general framework that captures only some features of consciousness and some of the publicly observable properties related to it. What is relevant for the argument to remain abductive is that those features be explanatorily connected with publicly observable properties and not purely correlated with them. In this sense, analogical abductive arguments can complement both “theory-heavy” (i.e., based on full-fledged and specific theories of consciousness; see Seth and Bayne 2022) and “theory-light” (Birch 2022) or “test-based” approaches (Bayne et al. 2024).

Thus we have applied Schurz’s definition of analogical abduction to the epistemological problem of other consciousnesses: target-consciousness is the partially new concept, abduced to account for publicly observable properties in target, but it is only partially new because it is analogical to the familiar concept of consciousness-in-us, or source-consciousness, and in virtue of this analogy, the concept of target-consciousness has been discovered. In the preceding argument, P1, P2, and P3 speak directly to analogical considerations, whereas P4, P5, and P6 speak to explanatory ones.
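This division of labor can be made explicit in a toy formalization (ours, for illustration only; the boolean premises are crude stand-ins for the graded judgments the argument actually requires):

```python
# Minimal encoding of the analogical abductive schema (illustrative only).
# P1-P3 do the analogical work; P4-P6 do the abductive work. The conclusion
# is licensed only when both components hold.

from dataclasses import dataclass

@dataclass
class AnalogicalAbduction:
    relevantly_similar_data: bool      # P3: D and D* are relevantly similar
    explanatory_model_in_source: bool  # P4: good model linking F- to G-properties
    best_hypothesis: bool              # P6: H beats all rival hypotheses

    def conclusion_justified(self) -> bool:
        # P5: H is formulable at all only in virtue of the analogy (P3),
        # which is how the schema avoids the extrapolator's circle.
        hypothesis_formulable = self.relevantly_similar_data
        return (hypothesis_formulable
                and self.explanatory_model_in_source
                and self.best_hypothesis)

case = AnalogicalAbduction(relevantly_similar_data=True,
                           explanatory_model_in_source=True,
                           best_hypothesis=True)
print(case.conclusion_justified())  # True: G'-properties probably occur in target
```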

A final clarification pertains to the nature of the explanatory link mentioned in P4. This discussion interacts with the issue of what counts as relevant observable properties for consciousness. A first interpretation of the nature of this link is that F-properties are causally explained by G-properties. In the case of consciousness, F-properties could correspond to functional and/or behavioral properties, while G-properties would be conscious properties. Under this interpretation, the primary evidence for consciousness is given by functional and/or behavioral properties, and, courtesy of analogical abductive arguments, consciousness in other systems would be posited to causally explain functional and behavioral properties observed in nonstandard systems.

For example, the global neuronal workspace theory (GNWT) (Dehaene and Naccache 2001; Mashour et al. 2020) posits that certain functions, such as the ability to integrate and maintain information over time, can be performed only if information is broadcast into a global workspace that sustains and shares that information with many consumer systems. Accessibility to such a global workspace just is consciousness. Those functional abilities can be evidenced by mental chaining of operations (Sackur and Dehaene 2009) or global violation detection (Bekinschtein et al. 2009), to cite just a few, because these functions are causally connected to consciousness and cannot be performed unconsciously. Thus, if a target system is able to perform tasks that require such abilities (e.g., it detects a global variation in a sequence of auditory stimuli, as in the sketch below), then GNWT proponents would be entitled to formulate analogical abductive arguments based on the fact that those behaviors are causally explained by accessibility of information to a global workspace, that is, by consciousness.

Similarly, Birch’s theory-light approach assumes that the identification of some cognitive functions that are facilitated by consciousness suffices to extrapolate from the human case to nonhuman animals. Even without endorsing a specific theory, this approach can still take advantage of analogical abductive arguments by positing that the nature of the explanatory link between consciousness and a cluster of publicly observable properties (e.g., unlimited associative learning; Birch, Ginsburg, and Jablonka 2020) must be causal: Consciousness is assumed to be the cause of certain functions and behaviors in us; if similar functions and behaviors are observed in certain target systems, one could formulate an analogical abductive argument to justify the attribution of consciousness to those systems. Thus, independently of whether one operates under the tenets of a specific theory, if functional/behavioral properties are taken to be the primary evidence for consciousness, then the nature of the explanatory link in the analogical abductive argument will likely be causal.
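To illustrate the kind of publicly observable ability at stake, here is a toy sketch of the local–global logic behind global violation detection (our simplification for exposition, not the analysis code of Bekinschtein et al. 2009):

```python
# Toy local-global paradigm (illustrative only). A *local* deviant violates
# the within-trial rule (e.g., xxxxY); a *global* deviant violates the
# across-trial rule established by the block. Detecting the global
# violation is the ability GNWT links causally to conscious access.

def is_local_deviant(trial):
    """The final tone differs from the repeated tone within the trial."""
    return trial[-1] != trial[0]

def is_global_deviant(trial, block):
    """The trial's pattern differs from the pattern most frequent in the block."""
    frequent = max(set(block), key=block.count)  # the block-level rule
    return trial != frequent

# Block in which the locally deviant pattern xxxxY is the frequent one:
block = [("x", "x", "x", "x", "y")] * 8 + [("x", "x", "x", "x", "x")] * 2

rare = ("x", "x", "x", "x", "x")  # locally standard, yet globally deviant
print(is_local_deviant(rare))          # False: no within-trial violation
print(is_global_deviant(rare, block))  # True: violates the block-level rule
```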

But this causal explanatory link does not hold if implementation or mechanistic properties (e.g., neurobiological properties) are taken to be primary evidence for consciousness, because implementation properties are not causally explained by conscious properties. In this case, we can take advantage of a type of abduction that Harman (1986, 68) and Lipton (2004) have called “inference from the best explanation.” In this instance, we do not observe what could be explained by the hypothesis; rather, we observe what could explain a fact, or a property, that we are licensed to posit in virtue of that observation. With Lipton’s example, from the observation that it is cold outside, I am justified in inferring that the car will not start. The observation (i.e., “it is cold outside”) is not explained by the hypothesis that the car will not start; rather, as Lipton puts it, given relevant background knowledge, I am justified in inferring that the car will not start from the observation that it is cold outside because if it were true that the car will not start, the cold would be the best explanation for it (Lipton 2004, 64). In the inference from the best explanation, the observation is the explanation,[9] not the conjecture. Thus, in the case of other consciousnesses, inference from the best explanation could license the conclusion that publicly observed properties explain the consciousness of the target system because if it were true that the target system is conscious, those properties would be the best explanation of that fact. Therefore, if we take the primary evidence for consciousness to be neurobiological/mechanistic properties, we can posit conscious properties (Gʹ-properties) in target because, given what we know about the neural underpinnings of consciousness in us (i.e., in source), if it were true that the system of interest in target is conscious, the observed neurobiological/mechanistic properties in target (Fʹ-properties) would be the best explanation of that fact.
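The direction of the two inferences can be contrasted schematically (our notation and our abbreviation “IFBE,” not Lipton’s): read “A ⇒_best B” as “A would, if true, best explain B.”

```latex
% Our schematic contrast (assumes \usepackage{amsmath}).
% Inference TO the best explanation: the hypothesis explains the observation.
\[
\textbf{IBE:}\qquad
\frac{D \text{ is observed} \qquad H \Rightarrow_{\mathrm{best}} D}
     {H \text{ is probably true}}
\]
% Inference FROM the best explanation: the observation explains the conjecture.
\[
\textbf{IFBE:}\qquad
\frac{F \text{ is observed} \qquad F \Rightarrow_{\mathrm{best}} G}
     {G \text{ is probably true}}
\]
```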

For example, the recurrent processing theory (RPT) (Lamme 2006; Lamme and Roelfsema 2000) maintains that consciousness corresponds to the implementation of information processing on local feedback loops (involving synaptic plasticity) in sensory areas of the brain. If a target system displayed information processing implemented through feedback loops, RPT proponents could formulate an analogical abductive argument to justify the attribution of consciousness to the target system. The explanatory link in their P4 would be constitutive or mechanistic, insofar as the explanatory model on which they rely (i.e., RPT) is based on a constitutive relationship between consciousness and the explanatorily relevant publicly observable property (i.e., feedback loops). Thus, if mechanistic/implementational properties are taken to be the primary evidence for consciousness, then the nature of the explanatory link between consciousness and publicly observable properties will likely be constitutive or mechanistic (Craver 2007), rather than causal (for discussion, see Prasetya 2021).

Thus different types of explanatory links between phenomenal properties and publicly observable properties can be given, depending on what researchers consider to be the relevant evidence for consciousness. Analogical abductive arguments will accordingly take different specific forms.[10]

The analogical abductive approach is thus highly promising and can be easily applied to existing theories and accounts of consciousness. Yet it is not devoid of limitations, which we address in the next section, where we critically examine this strategy and its potential.

5. The prospects and limitations of analogical abduction

Despite its virtues, analogical abduction is not a silver bullet and faces several problems. As a starting point, we focus on the aforementioned challenges for successful extrapolations, which analogical abduction manages to meet, then continue to more contentious issues. First, we consider the extrapolator’s circle, namely, the problem of explaining why we are justified in believing that a target system is conscious without already assuming that the best explanation for source-consciousness works in target. As opposed to the IBE-based argument, analogical abduction avoids this problem because the link between observable properties and consciousness in target is not simply assumed; rather, it is justified in virtue of the similarity between properties in target and certain properties in source (thus, this strategy goes beyond the IBE-based one by relying on analogical reasoning). Again, analogical considerations drive the abduction process, and that is why the target system is not considered conscious simply in virtue of the assumption that our best explanation for source-consciousness works for target too. Rather, the hypothesis that the target system might be conscious is grounded in the similarity between target and source. The key point is that this similarity justifies the formulation of the hypothesis that the target system might be conscious—a hypothesis that could not be justifiably formulated without these analogical considerations.

Second is the problem of difference: How can we determine whether a system is similar enough to us to justify extrapolations about its consciousness? A possible solution can be based on Gentner’s (1983) structure-mapping theory, which influenced Guala’s (2005, 180) thesis that extrapolation is possible only when the source and target systems “belong to structurally similar mechanisms,” as well as Steel’s (2007, 2010) notion of comparative process tracing (see also Guala 2010). According to this approach, similarity is structurally grounded: Extrapolations are justified insofar as the mechanistic processes or the properties in target have the same structure as the relevant processes or properties in source. Translated to consciousness science, this would mean that extrapolations about consciousness are justified only when there is a structure-preserving mapping between the consciousness-related properties in source and the properties observed in target. This means that if we had a compelling model of source-consciousness that explanatorily relates consciousness to a (causal or constitutive) structure of publicly observable properties (thereby going beyond the analogical strategy and relying on aspects of the IBE-based one), we could project consciousness to any system that exhibits the same structure of publicly observable properties, independently of all the other possible differences.
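A minimal sketch of what a structure-preserving-mapping check could look like (our toy formalization of the structure-mapping idea; the property names and links are placeholders, not an endorsed model of consciousness):

```python
# Toy structure-mapping check (illustrative only). Nodes are properties;
# directed edges are explanatory links. A candidate source-to-target
# mapping preserves structure if the image of every source link is a
# target link.

source_links = {("feedback_loops", "sustained_integration"),
                ("sustained_integration", "flexible_report")}

target_links = {("feedback_loops_t", "sustained_integration_t"),
                ("sustained_integration_t", "flexible_report_t")}

mapping = {"feedback_loops": "feedback_loops_t",
           "sustained_integration": "sustained_integration_t",
           "flexible_report": "flexible_report_t"}

def preserves_structure(src_links, tgt_links, m):
    """True if every source link maps onto an existing target link."""
    return all((m[a], m[b]) in tgt_links for a, b in src_links)

print(preserves_structure(source_links, target_links, mapping))  # True
# On this view, the extrapolation is justified by shared structure,
# independently of other differences between the two systems.
```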

Analogical abductions thus inherit the inferential relevance of explanatory considerations from the IBE-based argument and the importance of structural similarity for the projectability of the property of interest from the argument from analogy.

Admittedly, however, the analogical abductive strategy is limited, because structural similarity is a property that comes in degrees, and it is unclear what level of similarity suffices to justify extrapolations. To make the extrapolative leap, we should define a “similarity threshold” above which the inference is justified. Some have suggested that this threshold might be based on a cluster of similar properties between source and target (Birch 2022), but this still requires a definition of the minimal size of the cluster that justifies the extrapolative leap (Shevlin 2021). Moreover, although we consider this strategy to be promising for extrapolating consciousness to biological creatures, it might be more difficult to apply to artificial systems, mainly for three reasons: In the case of artificial consciousness, (1) we cannot rely on evolutionary similarities, (2) metaphysical debates concerning the substrate neutrality of consciousness are more prominent (Seth 2024; Shiller 2024), and (3) our antecedent knowledge that the cluster of markers associated with consciousness in humans has been deliberately designed to be displayed by an artificial entity might undermine the view that this cluster tracks consciousness in this domain.

A third problem for the analogical abduction strategy is that it presently has a limited “epistemic force.” In the philosophical literature (Calzavarini and Cevolani 2022; Schurz 2008), abductive arguments are considered to be strong if they justify the acceptance of a hypothesis as true, whereas they are considered to be weak if they only select hypotheses as interesting conjectures that require further empirical testing.

Given the current disagreement in consciousness science and the lack of a well-established, and specific, theory of consciousness (Francken et al. 2022; Yaron et al. 2022), analogical abductive arguments for nonstandard systems seem to be weak (Baetu 2024). If so, they can be used only to justify formulating the hypothesis that a certain target system is conscious, rather than accepting the hypothesis that it actually is conscious. Because we already relied on the relevant evidence to build the analogical abductive argument and formulate the hypothesis that the system is conscious, we seem to lack the methodological basis for passing from hypothesis formulation to hypothesis acceptance. Thus, until a better understanding of consciousness is available, this approach can offer only a partial solution for the epistemological problem of other consciousnesses.

A possible way to establish such a methodological basis might lie in the iterative natural kind approach (Bayne et al. 2024; Mckilliam 2024), because its iterative nature can allow a gradual increase in our confidence in whether the target system is conscious (see also Baetu 2024). However, it is not clear whether this strategy can deliver strong analogical abductive inferences, rather than just less weak inferences.
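One way to picture this gradual increase in confidence is a simple Bayesian gloss (our illustration; the iterative natural kind approach is not explicitly Bayesian, and all numbers below are assumed for exposition):

```python
# Hedged Bayesian gloss on "gradual increase in confidence" (illustrative
# only). Each round, a new consciousness-linked marker is checked in the
# target and the credence in "target is conscious" is updated.

def update(prior, p_marker_if_conscious, p_marker_if_not):
    """One Bayes update on observing the marker in the target."""
    joint_c = prior * p_marker_if_conscious
    joint_n = (1 - prior) * p_marker_if_not
    return joint_c / (joint_c + joint_n)

credence = 0.2  # cautious prior for a nonstandard system (assumed value)
# (likelihood if conscious, likelihood if not) per marker -- assumed values:
markers = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]

for p_c, p_n in markers:
    credence = update(credence, p_c, p_n)
    print(round(credence, 3))  # 0.4, then ~0.538, then ~0.84
# Credence rises across iterations yet may never reach the bar for a
# "strong" abduction -- mirroring the worry about "less weak" inferences.
```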

In the meantime, analogical abductive arguments could still be helpful in delivering weakly justified extrapolations. In several decision-making contexts, especially those with substantial ethical and societal implications, it is reasonable to lower the evidential bar and enact evidence-based policies that can preempt harm, even if the evidence is only partial. As Steel (cited in Birch 2017, 4) puts it, “Policy [should] not be susceptible to paralysis by scientific uncertainty” (see also Birch 2023; Johnson 2016). Analogical abductive arguments can serve as the tools to navigate the uncertainty about consciousness in nonstandard systems and to provide attributions of consciousness that, albeit weak, can still be sufficient for informed decision-making.

To summarize, analogical abductive arguments currently face a speed/accuracy trade-off: They can deliver strongly justified and accurate conclusions either via an established theory of consciousness or by using the iterative natural kind strategy; both options, however, require a long process that is unlikely to be completed in the near future. Yet consciousness science is already needed to inform decision-making and regulations about various nonstandard systems. In the short run, analogical abductive arguments can provide some degree of justification for attributions of consciousness to these systems, but because these attributions can be only weakly justified, the risk of inaccurate attributions is high.

6. Conclusion

In this article, we maintain that arguments based on analogical abduction capture and systematize the reasoning strategy employed by many consciousness researchers interested in attributing consciousness to nonstandard systems and that this strategy is indeed the most promising for extrapolating the presence of consciousness in such systems. Analogical abductive arguments do better than standard analogical arguments and IBE-based arguments with respect to the two challenges that any extrapolative inference must satisfy, namely, the extrapolator’s circle and the problem of difference. However, further research is needed to allow analogical abductive arguments to overcome some of the limitations they are still facing. For example, it is not clear what degree of similarity along consciousness-relevant dimensions is sufficient to justify strong abductions. Moreover, it seems possible that different degrees of similarity will be required, depending on the type of conclusion in which we are interested: Inferring that a system is conscious (rather than not) might require a lower degree of similarity than a conclusion about what the system is conscious of.

Other problems, although not directly related to the structure of analogical abductive arguments, might limit their applicability: The current status of consciousness science still does not provide a clear view of what type of publicly observable properties are relevant to consciousness specifically, and consensus on the best theory of source-consciousness is far from being established.

Thus, the applicability of analogical abductions is currently limited. Even so, we maintain that they provide the best available justificatory option for inferring consciousness in nonstandard systems and can be useful in deriving weakly justified conclusions that inform practical decision-making. We have argued that analogical abductive arguments provide the blueprint for justified attributions of consciousness to nonstandard systems. This matters because clarifying the argumentative structure underlying our ascriptions of consciousness can help consciousness scholars assess the soundness of existing arguments for attributing consciousness to nonstandard systems and formulate stronger arguments of this type.

Acknowledgements

N.N. is funded by an Azrieli International Postdoctoral Fellowship. L.M. is a CIFAR Tanenbaum Fellow in the Brain, Mind, and Consciousness program. We thank all members of the Mudrik Lab at Tel Aviv University for their comments on previous drafts of this article, in particular Gennadiy Belonosov, Rony Hirschhorn, Maor Schreiber, and Amir Tal. N.N. thanks Andy McKilliam for reading and commenting on an early version of this work. A previous version of this article was presented at ASSC27, where we received very helpful comments and feedback from the audience.

Footnotes

1 We take neurotypical adult humans to be examples of “standard systems,” for which the presence of consciousness is not doubted. The primary generalization in which we are interested here is thus between these standard systems and nonstandard systems, rather than between oneself and other people, as in more traditional discussions of the problem of other minds.

2 This is because capacities are supposed to be intrinsic features of an entity that are stable across different background conditions: One could explain the combustion of wood by referring to the wood’s capacity to burn. The power, or capacity, might not be manifested if the contextual conditions are not right (e.g., lack of oxygen), but it is supposed to exist nonetheless.

3 For the purposes of this article, we lump together neurobiological evidence and functional/behavioral evidence under the umbrella term of publicly observable properties. We remain neutral here on whether the best way to approach the epistemological problem of other consciousnesses is via the neurobiological route or the functional/behavioral one; see Block (2007) and Usher et al. (2023) for discussion.

4 This does not mean that arguments from analogy are never affected by the extrapolator’s circle (for a discussion, see Steel 2010). In the case of consciousness, however, the fact that standard systems are not necessarily assumed to be good models for nonstandard systems seems to be enough to apply a charitable reading to the analysis.

5 This project was sponsored by the World Bank to reduce malnutrition in Indian communities by supplying food and better nutritional knowledge to mothers. The success of the program was not replicated in Bangladesh because of the differences in responsibility for the children’s nutrition within the family. See Cartwright (2012) for a comprehensive discussion.

6 The argument presupposes that phenomena and their properties are causally related to the observed data and that explanatory models (and associated hypotheses), although constructed upon data, are meant to explain those phenomena, not the data themselves (as argued by Bogen and Woodward 1988).

7 To be precise, the hypothesis should posit that there are phenomenal properties in the target that are similar but not identical to phenomenal properties in the source. However, our primary focus here is on whether phenomenal properties are present, and because of this, all we require is that the properties of interest be phenomenal.

8 Ideally, the goodness of a scientific hypothesis should be systematized through a comprehensive taxonomy of explanatory virtues (for a discussion, see Keas 2018).

9 For Lipton (2001), the nature of this explanation should be causal. We are not making that assumption here.

10 The conclusion of these arguments would be justified only if we had good knowledge of the functional and behavioral profiles associated with consciousness, on the one hand (for arguments based on explanatory links of a causal nature), or of its mechanistic underpinnings, on the other hand (for arguments based on explanatory links of a constitutive/mechanistic nature). However, the current state of consciousness science does not provide us with good knowledge of either the functional profile of consciousness or its neural mechanisms (see, e.g., Francken et al. 2022; Yaron et al. 2022). This limitation reduces the inferential strength of analogical abductive arguments. We elaborate on this further in section 4.

References

Albantakis, Larissa, Barbosa, Leonardo, Findlay, Graham, Grasso, Matteo, Haun, Andrew M., Marshall, William, Mayner, William G. P., et al. 2023. “Integrated Information Theory (IIT) 4.0: Formulating the Properties of Phenomenal Existence in Physical Terms.” PLOS Computational Biology 19 (10):e1011465. https://doi.org/10.1371/journal.pcbi.1011465.
Andrews, Kristin. 2024. “‘All Animals Are Conscious’: Shifting the Null Hypothesis in Consciousness Science.” Mind and Language 39 (3):415–33. https://doi.org/10.1111/mila.12498.
Avramides, Anita. 2000. Other Minds. Routledge.
Baetu, Tudor M. 2024. “Extrapolating Animal Consciousness.” Studies in History and Philosophy of Science 104:150–59. https://doi.org/10.1016/j.shpsa.2024.03.001.
Barron, Andrew B., and Klein, Colin. 2016. “What Insects Can Tell Us About the Origins of Consciousness.” Proceedings of the National Academy of Sciences of the United States of America 113 (18):4900–8. https://doi.org/10.1073/pnas.1520084113.
Bartha, Paul. 2010. By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments. Oxford University Press.
Bartha, Paul. 2019. “Analogy and Analogical Reasoning.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/entries/reasoning-analogy/.
Bayne, Tim. 2018. “On the Axiomatic Foundations of the Integrated Information Theory of Consciousness.” Neuroscience of Consciousness 2018 (1):niy007. https://doi.org/10.1093/nc/niy007.
Bayne, Tim, Frohlich, Joel, Cusack, Rhodri, Moser, Julia, and Naci, Lorina. 2023. “Consciousness in the Cradle: On the Emergence of Infant Experience.” Trends in Cognitive Sciences 27 (12):1135–49. https://doi.org/10.1016/j.tics.2023.08.018.
Bayne, Tim, Seth, Anil K., and Massimini, Marcello. 2020. “Are There Islands of Awareness?” Trends in Neurosciences 43 (1):6–16. https://doi.org/10.1016/j.tins.2019.11.003.
Bayne, Tim, Seth, Anil K., Massimini, Marcello, Shepherd, Joshua, Cleeremans, Axel, Fleming, Stephen M., Malach, Rafael, et al. 2024. “Tests for Consciousness in Humans and Beyond.” Trends in Cognitive Sciences 28 (5):454–66. https://doi.org/10.1016/j.tics.2024.01.010.
Bayne, Tim, and Shea, Nicholas. 2020. “Consciousness, Concepts and Natural Kinds.” Philosophical Topics 48 (1):65–83. https://doi.org/10.5840/philtopics20204814.
Bekinschtein, Tristan A., Dehaene, Stanislas, Rohaut, Benjamin, Tadel, François, Cohen, Laurent, and Naccache, Lionel. 2009. “Neural Signature of the Conscious Processing of Auditory Regularities.” Proceedings of the National Academy of Sciences of the United States of America 106 (5):1672–77. https://doi.org/10.1073/pnas.0809667106.
Birch, Jonathan. 2017. “Animal Sentience and the Precautionary Principle.” Animal Sentience 16 (1). https://doi.org/10.51291/2377-7478.1200.
Birch, Jonathan. 2022. “The Search for Invertebrate Consciousness.” Noûs 56 (1):133–53. https://doi.org/10.1111/nous.12351.
Birch, Jonathan. 2023. “Medical AI, Inductive Risk and the Communication of Uncertainty: The Case of Disorders of Consciousness.” Journal of Medical Ethics. https://doi.org/10.1136/jme-2023-109424.
Birch, Jonathan. 2024. The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Oxford University Press.
Birch, Jonathan, and Browning, Heather. 2021. “Neural Organoids and the Precautionary Principle.” American Journal of Bioethics 21 (1):56–58. https://doi.org/10.1080/15265161.2020.1845858.
Birch, Jonathan, Ginsburg, Simona, and Jablonka, Eva. 2020. “Unlimited Associative Learning and the Origins of Consciousness: A Primer and Some Predictions.” Biology and Philosophy 35 (6):56. https://doi.org/10.1007/s10539-020-09772-0.
Birch, Jonathan, Schnell, Alexandra K., and Clayton, Nicola S. 2020. “Dimensions of Animal Consciousness.” Trends in Cognitive Sciences 24 (10):789–801. https://doi.org/10.1016/j.tics.2020.07.007.
Bird, Alexander, and Tobin, Emma. 2008. “Natural Kinds.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/entries/natural-kinds/.
Block, Ned. 2002. “The Harder Problem of Consciousness.” Journal of Philosophy 99 (8):391–425. https://doi.org/10.2307/3655621.
Block, Ned. 2007. “Consciousness, Accessibility, and the Mesh Between Psychology and Neuroscience.” Behavioral and Brain Sciences 30 (5–6):481–99. https://doi.org/10.1017/S0140525X07002786.
Bogen, James, and Woodward, James. 1988. “Saving the Phenomena.” Philosophical Review 97 (3):303–52. https://doi.org/10.2307/2185445.
Boyd, Richard. 2019. “Rethinking Natural Kinds, Reference and Truth: Towards More Correspondence with Reality, Not Less.” Synthese 198 (Suppl. 12):2863–903. https://doi.org/10.1007/s11229-019-02138-4.
Butlin, Patrick, Long, Robert, Elmoznino, Eric, Bengio, Yoshua, Birch, Jonathan, Constant, Axel, Deane, George, et al. 2023. “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.” arXiv. https://doi.org/10.48550/arXiv.2308.08708.
Calzavarini, Fabrizio, and Cevolani, Gustavo. 2022. “Abductive Reasoning in Cognitive Neuroscience: Weak and Strong Reverse Inference.” Synthese 200 (2):70. https://doi.org/10.1007/s11229-022-03585-2.
Carruthers, Peter. 2019. Human and Animal Minds: The Consciousness Questions Laid to Rest. Oxford University Press.
Cartwright, Nancy. 1994. Nature’s Capacities and Their Measurement. Oxford University Press.
Cartwright, Nancy. 2012. “Presidential Address: Will This Policy Work for You? Predicting Effectiveness Better: How Philosophy Helps.” Philosophy of Science 79 (5):973–89. https://doi.org/10.1086/668041.
Cartwright, Nancy, and Hardie, Jeremy. 2012. Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press.
Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind. Oxford University Press.
Chalmers, David J. 2023. “Could a Large Language Model Be Conscious?” Boston Review, August 9. https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/.
Ciaunica, Anna, Safron, Adam, and Delafield-Butt, Jonathan. 2021. “Back to Square One: The Bodily Roots of Conscious Experiences in Early Life.” Neuroscience of Consciousness 2021 (2):niab037. https://doi.org/10.1093/nc/niab037.
Craver, Carl F. 2002. “Structures of Scientific Theories.” In The Blackwell Guide to the Philosophy of Science, edited by Machamer, Peter and Silberstein, Michael, 55–79. Blackwell.
Craver, Carl F. 2007. Explaining the Brain. Oxford University Press.
Craver, Carl F. 2009. “Mechanisms and Natural Kinds.” Philosophical Psychology 22 (5):575–94. https://doi.org/10.1080/09515080903238930.
Crick, Francis, and Koch, Christof. 1990. “Toward a Neurobiological Theory of Consciousness.” Seminars in the Neurosciences 2:263–75.
Croxford, James, and Bayne, Tim. 2024. “The Case Against Organoid Consciousness.” Neuroethics 17 (1):13. https://doi.org/10.1007/s12152-024-09548-3.
Dehaene, Stanislas, Lau, Hakwan, and Kouider, Sid. 2017. “What Is Consciousness, and Could Machines Have It?” Science 358 (6362):486–92. https://doi.org/10.1126/science.aan8871.
Dehaene, Stanislas, and Naccache, Lionel. 2001. “Towards a Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework.” Cognition 79 (1):1–37. https://doi.org/10.1016/S0010-0277(00)00123-2.
Dehaene-Lambertz, Ghislaine. 2024. “Perceptual Awareness in Human Infants: What Is the Evidence?” Journal of Cognitive Neuroscience 36 (8):1599–609. https://doi.org/10.1162/jocn_a_02149.
Douglas, Heather. 2013. “The Value of Cognitive Values.” Philosophy of Science 80 (5):796–806. https://doi.org/10.1086/673716.
Dung, Leonard. 2022. “Assessing Tests of Animal Consciousness.” Consciousness and Cognition 105:103410. https://doi.org/10.1016/j.concog.2022.103410.
Dung, Leonard. 2023. “Tests of Animal Consciousness Are Tests of Machine Consciousness.” Erkenntnis 90:1323–42. https://doi.org/10.1007/s10670-023-00753-9.
Elamrani, A., and Yampolskiy, R. V. 2019. “Reviewing Tests for Machine Consciousness.” Journal of Consciousness Studies 26 (5–6):35–64.
Francken, Jolien C., Beerendonk, Lola, Molenaar, Dylan, Fahrenfort, Johannes J., Kiverstein, Julian D., Seth, Anil K., and van Gaal, Simon. 2022. “An Academic Survey on Theoretical Foundations, Common Assumptions and the Current State of Consciousness Science.” Neuroscience of Consciousness 2022 (1):niac011. https://doi.org/10.1093/nc/niac011.
Frohlich, Joel, Bayne, Tim, Crone, Julia S., DallaVecchia, Alessandra, Kirkeby-Hinrup, Asger, Mediano, Pedro A. M., Moser, Julia, et al. 2023. “Not with a ‘Zap’ but with a ‘Beep’: Measuring the Origins of Perinatal Experience.” NeuroImage 273:120057. https://doi.org/10.1016/j.neuroimage.2023.120057.
Gentner, Dedre. 1983. “Structure-Mapping: A Theoretical Framework for Analogy.” Cognitive Science 7 (2):155–70. https://doi.org/10.1016/S0364-0213(83)80009-3.
Godfrey-Smith, Peter. 2011. “Induction, Samples, and Kinds.” In Carving Nature at Its Joints: Natural Kinds in Metaphysics and Science, edited by O’Rourke, Michael, Campbell, Joseph Keim, and Slater, Matthew H., 33–52. MIT Press.
Guala, Francesco. 2005. The Methodology of Experimental Economics. Cambridge University Press.
Guala, Francesco. 2010. “Extrapolation, Analogy, and Comparative Process Tracing.” Philosophy of Science 77 (5):1070–82. https://doi.org/10.1086/656541.
Halina, Marta, Harrison, David, and Klein, Colin. 2022. “Evolutionary Transition Markers and the Origins of Consciousness.” Journal of Consciousness Studies 29 (3–4):62–77. https://doi.org/10.53765/20512201.29.3.077.
Hameroff, Stuart, and Muotri, Alysson R. 2020. “Testing for Consciousness in Cerebral Organoids.” Trends in Cell and Molecular Biology 15:43–57.
Harman, Gilbert. 1986. Change in View: Principles of Reasoning. Vol. 1. MIT Press.
Hesse, Mary. 1965. “Models and Analogies in Science.” British Journal for the Philosophy of Science 16 (62):161–63. https://doi.org/10.1093/bjps/XVI.62.161.
Heyes, Cecilia. 2008. “Beast Machines? Questions of Animal Consciousness.” In Frontiers of Consciousness, edited by Weiskrantz, Lawrence and Davies, Martin, 259–74. Oxford University Press.
Hiddleston, Eric. 2005. “Causal Powers.” British Journal for the Philosophy of Science 56 (1):27–59. https://doi.org/10.1093/phisci/axi102.
Hildt, Elisabeth. 2022. “The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns.” AJOB Neuroscience 14 (2):58–71. https://doi.org/10.1080/21507740.2022.2148773.
Hyslop, Alec. 1995. “The Analogical Inference to Other Minds.” In Other Minds, edited by Hyslop, Alec, 41–70. Springer.
Hyslop, A., and Jackson, F. C. 1972. “The Analogical Inference to Other Minds.” American Philosophical Quarterly 9 (2):168–76. http://www.jstor.org/stable/20009435.
Irvine, Elizabeth. 2012. Consciousness as a Scientific Concept: A Philosophy of Science Perspective. Studies in Brain and Mind 5. Springer.
Irvine, Elizabeth. 2017. “Explaining What?” Topoi 36 (1):95–106. https://doi.org/10.1007/s11245-014-9273-4.
Jeziorski, Jacob, Brandt, Reuven, Evans, John H., Campana, Wendy, Kalichman, Michael, Thompson, Evan, Goldstein, Lawrence, Koch, Christof, and Muotri, Alysson R. 2023. “Brain Organoids, Consciousness, Ethics and Moral Status.” Seminars in Cell and Developmental Biology 144:97–102. https://doi.org/10.1016/j.semcdb.2022.03.020.
Johnson, L. Syd M. 2016. “Inference and Inductive Risk in Disorders of Consciousness.” AJOB Neuroscience 7 (1):35–43. https://doi.org/10.1080/21507740.2016.1150908.
Kanai, Ryota, and Fujisawa, Ippei. 2024. “Toward a Universal Theory of Consciousness.” Neuroscience of Consciousness 2024 (1):niae022. https://doi.org/10.1093/nc/niae022.
Kazazian, Karnig, Edlow, Brian L., and Owen, Adrian M. 2024. “Detecting Awareness After Acute Brain Injury.” The Lancet Neurology 23 (8):836–44. https://doi.org/10.1016/S1474-4422(24)00209-6.
Keas, Michael N. 2018. “Systematizing the Theoretical Virtues.” Synthese 195 (6):2761–93. https://doi.org/10.1007/s11229-017-1355-6.
Khalidi, Muhammad Ali. 2018. “Natural Kinds as Nodes in Causal Networks.” Synthese 195 (4):1379–96. https://doi.org/10.1007/s11229-015-0841-y.
LaFollette, Hugh, and Shanks, Niall. 1996. Brute Science: Dilemmas of Animal Experimentation. Routledge.
Lamme, Victor A. F. 2006. “Towards a True Neural Stance on Consciousness.” Trends in Cognitive Sciences 10 (11):494–501. https://doi.org/10.1016/j.tics.2006.09.001.
Lamme, Victor A. F., and Roelfsema, Pieter R. 2000. “The Distinct Modes of Vision Offered by Feedforward and Recurrent Processing.” Trends in Neurosciences 23 (11):571–79. https://doi.org/10.1016/S0166-2236(00)01657-X.
Lau, Hakwan, and Michel, Matthias. 2019. “On the Dangers of Conflating Strong and Weak Versions of a Theory of Consciousness.” PsyArXiv. https://doi.org/10.31234/osf.io/hjp3s.
Lavazza, Andrea, and Massimini, Marcello. 2018. “Cerebral Organoids: Ethical Issues and Consciousness Assessment.” Journal of Medical Ethics 44 (9):606–10. https://doi.org/10.1136/medethics-2017-104555.
Levy, Neil. 2024. “Consciousness Ain’t All That.” Neuroethics 17 (2):1–14. https://doi.org/10.1007/s12152-024-09559-0.
Lipton, Peter. 2001. “What Good Is an Explanation?” In Explanation: Theoretical Approaches and Applications, edited by Hon, Giora and Rakover, Sam S., 43–59. Springer.
Lipton, Peter. 2004. Inference to the Best Explanation. Routledge.
Longino, Helen E. 1979. “Evidence and Hypothesis: An Analysis of Evidential Relations.” Philosophy of Science 46 (1):35–56. https://doi.org/10.1086/288849.
Mak, Isabella Wy, Evaniew, Nathan, and Ghert, Michelle. 2014. “Lost in Translation: Animal Models and Clinical Trials in Cancer Treatment.” American Journal of Translational Research 6 (2):114–18.
Marchionni, Caterina, and Reijula, Samuli. 2019. “What Is Mechanistic Evidence, and Why Do We Need It for Evidence-Based Policy?” Studies in History and Philosophy of Science, Part A 73:54–63. https://doi.org/10.1016/j.shpsa.2018.08.003.
Mashour, George A., Roelfsema, Pieter, Changeux, Jean-Pierre, and Dehaene, Stanislas. 2020. “Conscious Processing and the Global Neuronal Workspace Hypothesis.” Neuron 105 (5):776–98. https://doi.org/10.1016/j.neuron.2020.01.026.
Mckilliam, Andy. 2024. “Natural Kind Reasoning in Consciousness Science: An Alternative to Theory Testing.” Noûs 59 (3):634–51. https://doi.org/10.1111/nous.12526.
McMullin, Ernan. 2008. “The Virtues of a Good Theory.” In The Routledge Companion to Philosophy of Science, edited by Curd, Martin and Psillos, Stathis, 498–508. Routledge.
Mediano, Pedro A. M., Rosas, Fernando E., Bor, Daniel, Seth, Anil K., and Barrett, Adam B. 2022. “The Strength of Weak Integrated Information Theory.” Trends in Cognitive Sciences 26 (8):646–55. https://doi.org/10.1016/j.tics.2022.04.008.
Melloni, Lucia, Mudrik, Liad, Pitts, Michael, and Koch, Christof. 2021. “Making the Hard Problem of Consciousness Easier.” Science 372 (6545):911–12. https://doi.org/10.1126/science.abj3259.
Melnyk, Andrew. 1994. “Inference to the Best Explanation and Other Minds.” Australasian Journal of Philosophy 72 (4):482–91. https://doi.org/10.1080/00048409412346281.
Merker, Bjorn, Williford, Kenneth, and Rudrauf, David. 2021. “The Integrated Information Theory of Consciousness: A Case of Mistaken Identity.” Behavioral and Brain Sciences 45:e41. https://doi.org/10.1017/S0140525X21000881.
Millikan, Ruth Garrett. 1999. “Historical Kinds and the ‘Special Sciences.’” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 95 (1/2):45–65. https://doi.org/10.1023/A:1004532016219.
Moser, Julia, Schleger, Franziska, Weiss, Magdalene, Sippel, Katrin, Semeia, Lorenzo, and Preissl, Hubert. 2021. “Magnetoencephalographic Signatures of Conscious Processing Before Birth.” Developmental Cognitive Neuroscience 49:100964. https://doi.org/10.1016/j.dcn.2021.100964.
Mudrik, Liad, Boly, Melanie, Dehaene, Stanislas, Fleming, Stephen M., Lamme, Victor, Seth, Anil, and Melloni, Lucia. 2025. “Unpacking the Complexities of Consciousness: Theories and Reflections.” Neuroscience and Biobehavioral Reviews 170:106053. https://doi.org/10.1016/j.neubiorev.2025.106053.
Mudrik, Liad, Mylopoulos, Myrto, Negro, Niccolò, and Schurger, Aaron. 2023. “Theories of Consciousness and a Life Worth Living.” Current Opinion in Behavioral Sciences 53:101299. https://doi.org/10.1016/j.cobeha.2023.101299.
Negro, Niccolò, and Mudrik, Liad. 2025. “Testing for Consciousness Beyond Consensus Cases.” PhilSci Archive. https://philsci-archive.pitt.edu/25164/.
Owen, Adrian M., Coleman, Martin R., Boly, Melanie, Davis, Matthew H., Laureys, Steven, and Pickard, John D. 2006. “Detecting Awareness in the Vegetative State.” Science 313 (5792):1402. https://doi.org/10.1126/science.1130197.
Owen, Matthew, Huang, Zirui, Duclos, Catherine, Lavazza, Andrea, Grasso, Matteo, and Hudetz, Anthony G. 2023. “Theoretical Neurobiology of Consciousness Applied to Human Cerebral Organoids.” Cambridge Quarterly of Healthcare Ethics 33 (4):473–93. https://doi.org/10.1017/S0963180123000543.
Pargetter, Robert. 1984. “The Scientific Inference to Other Minds.” Australasian Journal of Philosophy 62 (2):158–63. https://doi.org/10.1080/00048408412341341.
Passos-Ferreira, Claudia. 2023. “Are Infants Conscious?” Philosophical Perspectives 37 (1):308–29. https://doi.org/10.1111/phpe.12192.
Passos-Ferreira, Claudia. 2024. “Can We Detect Consciousness in Newborn Infants?” Neuron 112 (10):1520–23. https://doi.org/10.1016/j.neuron.2024.04.024.
Prasetya, Yunus. 2021. “Which Models of Scientific Explanation Are (In)compatible with IBE?” British Journal for the Philosophy of Science 75 (1). https://doi.org/10.1086/715203.
Psillos, Stathis. 2002. “Simply the Best: A Case for Abduction.” In Computational Logic: Logic Programming and Beyond—Essays in Honour of Robert A. Kowalski, part II, 83–93. Springer.
Psillos, Stathis. 2007. “The Fine Structure of Inference to the Best Explanation.” Philosophy and Phenomenological Research 74 (2):441–48. https://doi.org/10.1111/j.1933-1592.2007.00030.x.
Sackur, Jérôme, and Dehaene, Stanislas. 2009. “The Cognitive Architecture for Chaining of Two Mental Operations.” Cognition 111 (2):187–211. https://doi.org/10.1016/j.cognition.2009.01.010.
Schneider, Susan. 2019. Artificial You: AI and the Future of Your Mind. Princeton University Press.
Schurz, Gerhard. 2008. “Patterns of Abduction.” Synthese 164 (2):201–34. https://doi.org/10.1007/s11229-007-9223-4.
Seth, Anil K. 2024. “Conscious Artificial Intelligence and Biological Naturalism.” PsyArXiv. https://doi.org/10.31234/osf.io/tz6an.
Seth, Anil K., and Bayne, Tim. 2022. “Theories of Consciousness.” Nature Reviews Neuroscience 23:439–52. https://doi.org/10.1038/s41583-022-00587-4.
Shea, Nicholas. 2012. “Methodological Encounters with the Phenomenal Kind.” Philosophy and Phenomenological Research 84 (2):307–44. https://doi.org/10.1111/j.1933-1592.2010.00483.x.
Shea, Nicholas, and Bayne, Tim. 2010. “The Vegetative State and the Science of Consciousness.” British Journal for the Philosophy of Science 61 (3):459–84. https://doi.org/10.1093/bjps/axp046.
Shepherd, Joshua. 2018. Consciousness and Moral Status. Taylor and Francis.
Shevlin, Henry. 2021. “Non-Human Consciousness and the Specificity Problem: A Modest Theoretical Proposal.” Mind and Language 36 (2):297–314. https://doi.org/10.1111/mila.12338.
Shiller, Derek. 2024. “Functionalism, Integrity, and Digital Consciousness.” Synthese 203 (2):47. https://doi.org/10.1007/s11229-023-04473-z.
Siewert, Charles P. 1998. The Significance of Consciousness. Princeton University Press.
Sills, Jennifer, Carter, Olivia, Hohwy, Jakob, van Boxtel, Jeroen, Lamme, Victor, Block, Ned, Koch, Christof, and Tsuchiya, Naotsugu. 2018. “Conscious Machines: Defining Questions.” Science 359 (6374):400. https://doi.org/10.1126/science.aar4163.
Sober, Elliott. 2000. “Evolution and the Problem of Other Minds.” Journal of Philosophy 97 (7):365–86. https://doi.org/10.2307/2678410.
Steel, Daniel. 2007. Across the Boundaries: Extrapolation in Biology and Social Science. Oxford University Press.
Steel, Daniel. 2010. “A New Approach to Argument by Analogy: Extrapolation and Chain Graphs.” Philosophy of Science 77 (5):1058–69. https://doi.org/10.1086/656543.
Thagard, Paul. 1988. Computational Philosophy of Science. MIT Press.
Tononi, Giulio, Boly, Melanie, Massimini, Marcello, and Koch, Christof. 2016. “Integrated Information Theory: From Consciousness to Its Physical Substrate.” Nature Reviews Neuroscience 17 (7):450–61. https://doi.org/10.1038/nrn.2016.44.
Tye, Michael. 2017. Tense Bees and Shell-Shocked Crabs: Are Animals Conscious? Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190278014.001.0001.
Usher, Marius, Negro, Niccolò, Jacobson, Hilla, and Tsuchiya, Naotsugu. 2023. “When Philosophical Nuance Matters: Safeguarding Consciousness Research from Restrictive Assumptions.” Frontiers in Psychology 14:1306023. https://doi.org/10.3389/fpsyg.2023.1306023.
Veit, Walter. 2022. “Towards a Comparative Study of Animal Consciousness.” Biological Theory 17 (4):292–303. https://doi.org/10.1007/s13752-022-00409-x.
Yaron, Itay, Melloni, Lucia, Pitts, Michael, and Mudrik, Liad. 2022. “The ConTraSt Database for Analysing and Comparing Empirical Studies of Consciousness Theories.” Nature Human Behaviour 6:563–604. https://doi.org/10.1038/s41562-021-01284-5.

Table 1. Key papers on the possibility of consciousness in nonstandard systems