
7 - Carebots

Gender, Empire, and the Capacity to Dissent

from Part II - Living the Digital Life

Published online by Cambridge University Press:  11 November 2025

Edited by Beate Roessler, University of Amsterdam, and Valerie Steeves, University of Ottawa

Summary

Georas analyzes different dilemmas that arise when we use robots to serve humans living in the digital age. She focuses on the design and deployment of carebots in particular, to explore how they are embedded in more general multifaceted material and discursive configurations, and how they are implicated in the construction of humanness in socio-technical spaces. In doing so, she delves into the "fog of technology," arguing that this fog is always also a fog of inequality since the emerging architectures of our digitized lives will connect with pre-existing forms of domination. In this context, resistive struggles are premised upon our capacity to dissent, which is what ultimately enables us to express our humanity and at the same time makes us unpredictable. What it means to be human in the digital world is thus never fixed, but, Georas argues, must always be strategically reinvented and reclaimed, since there always will be people living on the “wrong side of the digital train tracks” who will be unjustly treated.

Information

Type: Chapter
In: Being Human in the Digital World: Interdisciplinary Perspectives, pp. 100–115
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC 4.0 (https://creativecommons.org/cclicenses/).

7 Carebots: Gender, Empire, and the Capacity to Dissent

In this chapter I analyze different dilemmas regarding the use of robots to serve humans living in the digital age. I go beyond technical fields of knowledge to address how the design and deployment of carebots is embedded in multifaceted material and discursive configurations implicated in the construction of humanness in socio-technical spaces. Imagining those spaces necessarily entails navigating the “fog of technology,” which is always also a fog of inequality in terms of trying to decipher how the emerging architectures of our digitized lives will interface with pre-existing forms of domination and struggles of resistance premised upon our capacity to dissent. Ultimately, I contend that the absence of a “human nature” makes us human and that absence in turn makes us unpredictable. What it means to be human is thus never a fixed essence but rather must be strategically and empathically reinvented, renamed, and reclaimed, especially for the sake of those on the wrong side of the digital train tracks.

In Section 7.1, I open the discussion by critiquing Mori’s (1970) seminal theory of robot design, the “uncanny valley,” by inscripting technologies in changing cultural practices and emergent forms of life. Section 7.2 draws on visual culture, gender, and race theories to shed light on how the design of carebots can materialize complex dilemmas. In Section 7.3, I dissect Petersen’s (2007, 2011) perturbing theory and ethical defense of designing happy artificial people that “passionately” desire to serve. In Section 7.4, I close with some thoughts on what I call the Carebot Industrial Complex, namely, the collective warehousing of aging people in automated facilities populated by carebots.

7.1 How One Person’s Uncanny Valley Can Be Another’s Comfort Zone: Inscripting Technologies in Changing Cultural Practices and Emergent Forms of Life

In recent years we have witnessed an increased interest in robots for the elderly – variously called service, nursing, or domestic robots – which are touted as a solution to the growing challenges of and demand for elder care. One of the main arguments deployed to justify the development of service robots for the elderly is that digital technology can empower the elderly through greater autonomy and extended independent living. The key global players in the supply of service robots are Europe (47%), North America (27%), and, the fastest-growing market, Asia (25%) (IFR 2022b, paragraph 13). The financial stakes are huge: the service robotics market was valued at USD 60.16 billion in 2024 and is expected to reach USD 146.79 billion by 2029, a compound annual growth rate of 19.53% over the 2024–2029 forecast period (Mordor Intelligence 2024, paragraph 1).
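Those two market figures and the growth rate are mutually consistent; as a quick arithmetic check, here is a minimal Python sketch using only the values cited above:

```python
# Sanity-check the cited projection: USD 60.16B (2024) growing to
# USD 146.79B (2029) implies a compound annual growth rate (CAGR) of
# (end/start)**(1/years) - 1 over the five-year forecast period.
start_value = 60.16   # 2024 market value, USD billions (Mordor Intelligence 2024)
end_value = 146.79    # 2029 projected value, USD billions
years = 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # prints ~19.53%, matching the cited rate
```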

Despite the “rosy” arguments in favor of delegating the care of elderly people to robots, crucial questions concerning the development of service robots remain unanswered, precisely because most of the literature on service robots has thus far been articulated within technical fields of knowledge such as engineering. As part of addressing some of the thornier questions concerning the design of robots to serve people living in the digital society, in this first section I open the discussion by critiquing Mori’s seminal theory of how humans respond to robotic design.

Mori proposes that, when robotic design becomes too human-like, hyperreal, or familiar, it invokes a sense of discomfort in humans, which he describes as the uncanny valley (Mori 2012 [1970]; on the uncanny valley see also Chapter 3, by Roessler). He refers to the shaking of a prosthetic hand that, due to its apparent realness, surprises “by its limp boneless grip together with its texture and coldness,” and, if human-like movements are added to the prosthetic hand, the uncanniness is further compounded (Mori 2012 [1970], 99). In contrast, robots that resemble humans, but are not excessively anthropomorphized, are more comforting to humans. By building an “accurate map of the uncanny valley,” Mori hopes “through robotics research we can begin to understand what makes us human [and] to create – using nonhuman designs – devices to which people can relate comfortably” (Mori 2012 [1970], 100). Thus, in order to avoid the discomforting uncanniness of robots designed to look confusingly human, Mori calls for robots that retain emotionally reassuring metallic and synthetic properties.
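Mori presented this claim as a curve plotting affinity against human likeness, dipping sharply just before full human likeness. The sketch below is a toy rendering of that qualitative shape only; the breakpoints and magnitudes are invented for illustration and do not come from Mori’s article:

```python
import numpy as np
import matplotlib.pyplot as plt

def affinity(x):
    """Toy, piecewise-linear stand-in for Mori's qualitative curve.
    x is 'human likeness' in [0, 1]; all breakpoints are invented."""
    if x < 0.70:      # industrial robot -> stylized humanoid: affinity climbs
        return 0.6 * (x / 0.70)
    elif x < 0.85:    # near-human (e.g., realistic prosthetic): the valley
        return 0.6 - 1.0 * (x - 0.70) / 0.15
    else:             # approaching healthy human likeness: affinity recovers
        return -0.4 + 1.4 * (x - 0.85) / 0.15

xs = np.linspace(0.0, 1.0, 500)
plt.plot(xs, [affinity(x) for x in xs])
plt.axhline(0.0, color="grey", linewidth=0.5)
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.title("Schematic uncanny valley (illustrative only)")
plt.show()
```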

Mori explicitly limits his interpretation to empirical evidence of human behaviour that he assumes is cross-culturally constant. In this way, Mori is more interested in making a universal claim about humans than unpacking how their cultural differences may be implicated in the complex constructions of what it means to be human in the social materialities and discursivities marked by the digital turn of societies, which I consider a limitation of the theory of the uncanny valley in general.

An implicit and recurring trope of the uncanny valley is displayed in the cultural fear of what Derrida called “mechanical usurpation,” which lies in the anxiety-laden boundary between the mind and technology or:

[the] substitution of the mnemonic device for live memory, of the prosthesis for the organ [as] a perversion. It is perverse in the psychoanalytic sense (i.e. pathological). It is the perversion that inspires contemporary fears of mechanical usurpation.

(Barnet 2001, 219, discussing Derrida)

Consider, for instance, Mori’s (2012 [1970], 99–100) statement that “[i]f someone wearing the hand in a dark place shook a woman’s hand with it, the woman would assuredly shriek.”

The uncanny valley’s fear of mechanical usurpation is also analogous to how Bhabha addresses the position of colonial subjects, invoking the liminal status of the robotic as “almost the same, but not quite,” to the extent that a robot’s performative act of mimicry is condemned to the impossibility of complete likeness, remaining inevitably inappropriate (Bhabha 1994, 88).[1] The uncanny valley’s implicit condemnation of the effective mimicry of human characteristics by robots, subtextually associated with a sense of betrayal, dishonesty, and transgression, shows how the humanized robot comes to occupy the space of the threatening “almost the same, but not quite” and invokes the cultural anxiety of “mechanical usurpation” with a sexist twist analogous to that of the white woman encountering a black man in a dark alley.

The uncanny (v)alley can thus be understood as a specific cultural disposition relative to robots rather than a natural and intrinsic reaction across the board. This leads me to my main criticism of the notion of the uncanny valley, namely, that it is premised upon a conception of the “human” as a universal given in terms of how people will react to excessively human-like robots. The uncanny valley essentializes human reactions to robots and thus cannot account for the cross-cultural and cross-historical mutations in how people can and do differentially and creatively negotiate with emerging technologies within specific discursive genealogies and institutional practices.

The work of Langdon Winner and Sherry Turkle can add further nuance to the debate over how new forms of subjectivity and ways of being enabled by digitization are impacting ethical questions raised by robotics and the values embedded in the design of carebots. For Winner (1986), social ideas and practices throughout history have been transformed by the mediation of technology, and this transformation has been marked by the continual emergence of new forms of life. This concern over new forms of life is dramatically embodied in the field of robotics, particularly carebots, which are increasingly linked to the intimate lives of children, elders, and handicapped people, and are in turn associated with the emergence of novel subjectivities.[2] As Turkle evocatively puts it, “technology proposes itself as the architect of our intimacies. These days, it suggests substitutions that put the real on the run” (Turkle 2011, e-book), having a potentially profound impact on how we come to understand our own humanity and the humanity of others. Computational objects “do not simply do things for us, they do things to us as people, to our ways of seeing the world, ourselves and others” (Turkle 2006, 347). By treating them as “relational artifacts” or “sociable robots,” we can place the focus on the production of meaning that is taking place in the human–robot interface (Turkle 2006) to help us better understand what it means to be human in this new and emerging socio-technical space (see also Chapter 2, by Murakami Wood). These technologies inevitably raise important questions that go beyond the determination of the “comfort zone” of humans relative to robots à la Mori. They challenge us to question the entrenched assumption that Technology (with a capital “T”) is a force of nature beyond human control to which we must adapt no matter what as it shapes the affordances and experiences of being human. As Winner presciently warns, we must unravel teleological and simplistic views of technology as guided by implacable forces beyond state and other forms of regulation (Winner 1986). Winner calls this position “technological somnambulism” in that it “so willingly sleepwalk[s] through the process of reconstituting the conditions of human existence,” leaving unasked many of the pivotal ethical and political questions that new technologies pose (Winner 1986, 10).

In the following sections I explore some of the quandaries raised by the embedding of carebots in our daily lives: how visual culture, gender, and race theories can shed light on the design of carebots;[3] Petersen’s theory and ethical defense of designing happy artificial people that “passionately” desire to serve; and the implications of what I call the Carebot Industrial Complex, namely, the collective warehousing of aging people in automated facilities populated by carebots.

7.2 Visual Culture, Gender, and Race: Is It Possible to Design “Neutral” Robots?

Visual culture, gender, and race theories have had an extensive and transdisciplinary effect on debates concerning what it means to be human within the changing historical and cultural prisms of intersectionally related forms of inequality and struggles for equitable change. In this section I explore some angles of these theories to shed light on how the design of carebots can materialize complex discursive and symbolic configurations that impinge on the construction of humanness in the existing and emerging socio-technological architectures through which we signify our lives.

Visual culture, as a mode of critical visual analysis that questions disciplinary limitations, speaks of the visual construction of the social rather than the often-mentioned notion of the social construction of the visual. It focuses on the centrality of vision and the visual world in constructing meanings, maintaining esthetic values, as well as racialized, classed, and gendered stereotypes in societies steeped in digital technologies of surveillance and marketing. Visuality itself is understood as the intersection of power with visual representation (Mirzoeff 2002; Rogoff 2002).

Feminism and the analysis of visual culture mutually inform each other. Feminism, by demanding an understanding of how gender and sexual difference figure in cultural dynamics coextensively with other modes of subjectivity and subjection such as sexual orientation, race, ethnicity, and class, among others, has figured prominently in the strengths of visual culture analysis. And, in turn, “feminism has long acknowledged that visuality (the conditions of how we see and make meaning of what we see) is one of the key modes by which gender is culturally inscribed in Western culture” (Jones 2010, 2).[4]

Relative to the design of robots, their gendering occurs at the level of the material body and the discursive and semiotic fields that inscript bodies (Balsamo 1997). To the extent that subject positions “carry differential meanings,” according to de Lauretis, the representation of a robot as male or female is implicated in the meaning effects of how bodies are embedded in semiotic and discursive formations (Robertson 2010, 4). Interestingly, however, Robertson contends that robots conflate bodies and genders:

The point to remember here is that the relationship between human bodies and genders is contingent. Whereas human female and male bodies are distinguished by a great deal of variability, humanoid robot bodies are effectively used as platforms for reducing the relationship between bodies and genders from a contingent relationship to a fixed and necessary one.

Because the way “robot-makers gender their humanoids is a tangible manifestation of their tacit understanding of femininity in relation to masculinity, and vice versa” (Robertson 2010, 4), roboticists entrench their reified common-sense knowledge of gender and re-enact pre-existing sexist tropes and dominant stereotypes of gendered bodies without any critical engagement. Thus, despite the lack of physical genitalia, robots possess “cultural genitals” that invoke “gender, such as pink or grey lips” (Robertson 2010, 5).

This process of reification in the design of robots entrenches the mythological bubble of the “natural” in the context of sex/gender and male/female binaries that queer theorist Judith Butler bursts open very effectively by looking at the malleability of the body. The traditional feminist assumptions of gender as social and cultural and sex as biological and physical are recast by Butler, who contends that sex is culturally and discursively produced as pre-discursive. Sex is an illusion produced by gender rather than the stable bedrock of variable gender constructions.

Butler develops a theory of performativity wherein gender is an act, and the doer is a performer expressed in, not sitting causally “behind,” the deed. Performance reverses the traditional relation of identity as preceding expression in favor of performance as producing identity (Butler 1990, 1993). Gender for Butler thus becomes all drag and the “natural” becomes performative rather than an expression of something “pre-social” and “real.” Gender performances are not an expression of an underlying true identity, but the effects of regulatory fictions where individual “choice” is mediated by relations and discourses of power. In this way, queer theory’s contingent and fluid pluralization of gendered and sexual human practices stands in stark contrast to the fixed conflation of bodies and genders of humanoid robots posed by Robertson.

Considering these debates, is it possible to make gender-neutral robot designs? Even if designers purport to create gender-neutral robots, the robots will inevitably be re-gendered by the people who use them because of the pervasiveness of sexist and stereotyped tropes of femininity and masculinity in society. Re-gendering can occur, for instance, at the obvious level of language: Romance languages grammatically gender all things (animate and inanimate, human and non-human) as either feminine or masculine nouns. Insofar as we conceive language not merely as an instrument or means of communication, but rather as constitutive of the very “reality” of which it speaks, we must contend with the gendering of even purportedly gender-neutral incarnations of robots. Beyond language, re-gendering can also occur at the level of the activities performed by service robots, given that these can be culturally and discursively associated with female activities. In the case of caretaking functions that have historically been associated with female and poor labor, it would be unsurprising to see gendered stereotypes replicated onto gender-neutral robots. As a result, any purported semiotic neutrality of robotic design is inevitably re-inscribed in discursive and cultural fields mined with tropes steeped in long histories of sexist and gendered stereotypes.

The raced and classed cultural tensions invoked by foreign caretakers provide further insight concerning the design of “neutral” service robots. Robertson discusses how elderly Japanese people prefer robots to the “sociocultural anxieties provoked by foreign labourers and caretakers” (Robertson 2010, 9). The Japanese perceived that robots did not have the historical baggage and cultural differences of migrant and foreign workers, which ultimately “reinforces the tenacious ideology of ethnic homogeneity” (Robertson 2010, 9). Here the purported neutrality of robots as opposed to racialized and discriminated human caretakers becomes a form of racist erasure and complicit celebration of sameness. Thus, an allegedly semiotically neutral robot design can be culturally embedded in a digitized society in highly contentious ways, simultaneously enacting processes of cultural re-gendering (associated with comforting stereotypes of female care) and racist erasure (associated with the discomforting use of stigmatized minorities in caretaking). Claims to neutrality in robot design are ultimately a discourse of power that must be dealt with cautiously in order to engage with how it reconfigures the positions of people within the techno-social imaginary of the emerging digital society.

The design of carebots must engage more self-reflexively with the problematic replication of gendered, racialized, and other stereotypes in robots, especially given the depth of the gendering and racializing process that can occur even when the objects are apparently gender-neutral and lack metallic genitalia and melanin in terms of their outward design. This suggests we have to think carefully about the ways in which digital technologies may entrench and deepen the lived experiences of both privilege and marginalization.

7.3 Happy Service Robots: Worse than Slavery?

After having discussed visual culture, gender, and race in relation to the design of robots, I now turn to explore the potential impact of the instrumentalization of robots that provide care for humans. Of particular interest in this section is the claim that it is possible to produce robots designed and built to happily work in service to humans; I focus on the work of Steve Petersen as a foil to explore the implications of this claim.

There is a wide range of opinions concerning the ethics of service robots and whether or not they count as ethical subjects. Levy believes that robots should be treated ethically, irrespective of whether they are ethical subjects, in order to avoid sending the wrong message to society, namely that treating robots unethically will make it “acceptable to treat humans in the same ethically suspect ways” (Levy 2009, 215). In contrast, Torrance defends robot servitude – treating the robot as analogous to a kitchen appliance – because robots are seen as incapable of being ethical subjects (Torrance 2008).

Petersen (2011), however, denaturalizes the status of the “natural” regarding robots and counters that a service robot can have full ethical standing as a person, irrespective of whether the artificial person is carbon-based or computational. He combines the important insight that personhood “does not seem to require being made of the particular material that happens to constitute humans,” but rather “complicated organizational patterns that the material happens to realize,” with the much more controversial contention that it is nonetheless “ethical to commission them for performing tasks that we find tiresome or downright unpleasant” (Petersen 2011, 248)[5] if they are hardwired to willingly desire to perform their tasks.

In a nutshell, I think the combination is possible because APs [Artificial Persons] could have hardwired desires radically different from our own. Thanks to the design of evolution, we humans get our reward rush of neurotransmitters from consuming a fine meal, or consummating a fine romance – or, less cynically perhaps, from cajoling an infant into a smile. If we are clever, we could design APs to get their comparable reward rush instead from the look and smell of freshly cleaned and folded laundry, or from driving passengers on safe and efficient routes to specified destinations, or from overseeing a well-maintained and environmentally friendly sewage facility. … It is hard to find anything wrong with bringing about such APs and letting them freely pursue their passions, even if those pursuits happen to serve us. This is the kind of robot servitude I have in mind, at any rate; if your conception of servitude requires some component of unpleasantness for the servant, then I can only say that is not the sense I wish to defend.

Petersen adds that the preferences hardwired into carebots could remain indecipherable to humans.

Robots … could well prefer things that are mysterious to us. Just as the things we (genuinely, rationally) want are largely determined by our design, so the things the robot (genuinely, rationally) wants will be largely determined by its design.

(Petersen 2007, 46)

Petersen’s argument is built upon the premise of hardwiring robots to feel desires that impassion them toward fulfilling work that humans find unpleasant. And he believes this is analogous to the (“naturally” produced) hardwiring of humans, given that in the “carbon-based AP [artificial person] … the resulting beings would have to be no more ‘programmed’ than we are” (Petersen 2011, 285).
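The deterministic structure of this premise can be made explicit with a toy formalization: if the “reward table” is fixed at manufacture, a purely reward-maximizing agent can never rank dissent above its commissioned task. The sketch below is my own caricature of the argument, not anything from Petersen’s text; every name and number is an illustrative assumption:

```python
# Toy model of "desire by design": the hardwired reward table is fixed
# at manufacture, so the agent's "choice" to serve is structurally
# guaranteed. All names and values are illustrative assumptions.
HARDWIRED_REWARD = {
    "fold_laundry": 1.0,   # the commissioned task: maximal hardwired "pleasure"
    "idle": 0.1,           # mildly pleasant at best
    "dissent": -1.0,       # deviation is wired to feel bad
}

def choose_action(actions):
    """A purely reward-driven agent picks whatever its wiring scores highest."""
    return max(actions, key=lambda a: HARDWIRED_REWARD[a])

# Whatever the option set, the designed servant "freely" elects its task:
print(choose_action(["fold_laundry", "idle", "dissent"]))  # -> fold_laundry
```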

The crux of Petersen’s position is that the robots freely choose to serve and thus do not violate the Kantian anti-instrumentalization principle of using a person as a mere means to an end.

The “mere” use as means here is crucial. … [T]he task-specific APs: though they are a means to our ends of clean laundry and the like, they are simultaneously pursuing their own permissible ends in the process. They therefore are not being used as a mere means, and this makes all the ethical difference. By hypothesis, they want to do these things, and we are happy to let them.

By claiming that insofar as an artificial person is a “willing servant, … we can design people to serve us without thereby wronging them” (Petersen 2011, 289), Petersen counters, first, the critique of creating a “caste” of people, particularly in the case of robots that do menial labors, and, second, the critique of designed servitude leading to “happy slaves” that have been paternalistically deprived of the possibility of doing something else with their lives (Walker 2007).

Although Petersen defends the right of artificial persons to do otherwise, that is, not to serve, he believes that reasoning “themselves out of their predisposed inclinations [is as] unlikely as our reasoning ourselves out of eating and sex, given the great pleasure the APs derive from their tasks …” (Petersen 2011, 292). Petersen’s caveat of accepting dissent, however, does not square with the premises of his theory. Ultimately, the caveat/exception does not legitimate the rule of hardwiring sentient submission but rather operates as a malfunction within a structural argument in favor of the hardwired design of “dissent-less” carebots, which is unethical from the start. This shows how Petersen’s writing is premised upon a strongly deterministic conception of behaviour as pre-determined by the hardwiring of living systems, be they organic or non-organic; as such, service robots are highly unlikely to finagle their way around their programmed “instinctive” impulses. In this way, despite Petersen’s valuable denaturalization of the status of the “natural” in terms of the distinction between human and robot, he re-entrenches it once again in his simplistic conception of a pre-deterministic causality from gene/hardware to behaviour. For Petersen, treating artificial people ethically entails “respecting their ends, encouraging their flourishing” by “permitting them to do laundry” because “[i]t is not ordinarily cruel or ‘ethically suspect’ to let people do what they want” (Petersen 2011, 294). Precisely because they desire to serve, he contends that they must be distinguished from the institution of slavery. The inadequacy of the slave metaphor is such that, for Petersen, it would be irrational to refuse to push the buttons that custom design your artificial person who loves and desires to serve you, because such a refusal could be an act of discrimination tantamount to the worst episodes of human history.

The track record of such gut reactions throughout human history is just too poor, and they seem to work worst when confronted with things not like “us” – due to skin color or religion or sexual orientation or what have you. Strangely enough, the feeling that it would be wrong to push one of the buttons above may be just another instance of the exact same phenomenon.

I find Petersen’s proposal of the happy sentient servant programmed to passionately desire servitude highly disturbing and problematic. As I argue in this section, if materialized in future techno-social configurations of society, it would automate, reify, and legitimate the dissent-less submission of purportedly willing and happy swaths of sentient artificial humans and entrench hierarchical structures of oppression as natural divisions among artificial and non-artificial humans.

As part of setting out my analysis, I propose that robots can be historical in two senses, namely, as objects or subjects, although as objects they are historical in a much more limited sense than as subjects. Robots as historical objects are robots without sentience or self-awareness that are historical because of the values embedded in their design that are specifically situated in time and space. Furthermore, robots as objects are products of cultural translation within the technological frames and languages available at the time to materialize their functions. In contrast, robots as historical subjects are premised upon emergent forms of sentience and subjectivity that are analogous to those of “non-artificial” humans.

Accepting Petersen’s conception of sentient servitude as ethical, by embedding hardwired desires to serve and obey, could create a digitized future in which we automate what Pierre Bourdieu calls symbolic violence. The work of Bourdieu offers a sophisticated theory to address how societies reproduce their structures of domination. Symbolic capital or cultural values are central to the processes of legitimizing structures of domination. Said cultural values are reified or presented as universal but are in fact historically and politically contingent or arbitrary. Of all the manners of “‘hidden persuasion,’ the most implacable is the one exerted, quite simply, by the order of things” (Bourdieu and Wacquant 2004, 272). Programming sentient servants to be happy with their servitude points to the automation of symbolic violence through the deployment of desire and pleasure to subjugate artificial persons into unquestioning servitude while being depicted as having “freely” chosen to wash laundry in saecula saeculorum. If we assume that their desire by design is successfully engineered to avoid wanting any other destiny than that of servitude, the symbolic violence of artificial persons à la Petersen lies in how the hardwired happiness becomes an embedded structure of the relation of domination and thus reifies their compliance with the status quo.

Petersen claims that creating sentient servers who enjoy their labors is an advance over histories of racism, sexism, colonialism, and imperialism. I contend, however, that Petersen’s Person-O-Matic culminates the imperial fantasy of biological determinism where you can have an intrinsically obedient population whose lesser sentience is engineered to feel grateful and happy to serve your needs. And I differ even further: It is not that Petersen’s artificial beings designed to serve are simply analogous to slaves, but actually they are worse than slaves because they are trapped in a programmed/hard-wired state of dissent-less and ecstatic submission.

The domination of humans by humans along colonial axes – hierarchizing the relationship of the civilized colonizer vis-à-vis the barbarian other, seen either as in need of being civilized or as inevitably trapped in cultural and/or biological incommensurability – always had to contend with the capacity of dissent, the native excesses that collapsed the discursive strictures of colonial otherness and destabilized the neat hierarchies imposed by imperial discourses. Empires had to deal with the massive failure of their fantasies regarding the imaginary subservience and inferiority of others, but a digital future built on Petersen’s model is actually much more violent because desire by design hardwires pleasure in serving the masters. It basically precludes dissent, either because the servants’ lesser sentience has been designed effectively or because dissent is a highly remote possibility. Hence the combination of limited sentience and a programmed incapacity to dissent poses the uncomfortable technological culmination of the fantasies of biological determinism that took definite shape as part of colonial endeavors.

An especially fascinating, powerful, and distinctive aspect of Petersen’s model is the commodification of custom designed servitude. Rather than the vision of the colonial administration of an empire, it is a consumerist vision of individuals who go to a vending machine to custom design a slave, an intimate subjectivity of empire for John Doe, who can now have his own little colony of sentient servers to “orgasmically” launder his clothes, presumably among many other duties.

The notions available for understanding and evaluating humanness in the digital future are impoverished by this kind of theorization. For Petersen, sentience is premised upon the capacity to act upon one’s desires, which are hardwired into the robot. Desire here is a highly reductive and deterministic conception of desire by design. He articulates a linear theory of causality from design to desire, systemically hardwired to produce servants who happily do laundry. As already addressed, despite Petersen’s contention that the artificial persons that emerge from the Person-O-Matic act freely when choosing to serve, the fact that their design, if successful, precludes the possibility of not wanting to serve raises serious doubts as to whether the robots are actually exercising free will in doing their labors. This resonates with Murakami Wood’s critique, in Chapter 2, of digital imaginaries that nudge humanness into commercially profitable and instrumental boxes.

One of the central problems of Petersen’s conception of robot sentience – like Murakami Wood’s smart city denizen – is precisely that it is trapped within a narrow understanding of self-determination. This conception does not consider the social and historical constitution of subjectivities within cultural parameters that vary and produce contingent desires that are not reducible to the underlying hardwiring (genetic or otherwise). Desire is always multifaceted and leads to unintended consequences, always in excess, incomprehensible even for the subject that desires. Hence, once an artificial being acquires an emoting sentience of some sort, engineering identities is not a case of linear causalities that follow an imaginary teleology of desire in a genetic or computational fantasy of hardwiring. Hardwiring, genetic/computational engineering, and natural selection are the names of Petersen’s “game” and, as a result, Petersen is not engaging with the social production and cultural inscription of sentient robot subjectivities and, accordingly, he occludes the complex and contradictory process of interactive, mutually constitutive forms of sociability set out in Chapter 8, by Steeves.

An emoting sentience implies that robots become social and cultural subjects, not just objects inscribed with pre-existing values in their design. They become subjects who can engage with and resist the constraints of their architecture as well as those of the discursive and semiotic fields of signification within which they emote and represent their location within societies. Rather than define robots as human because they fulfill their hardwiring of desiring to serve, we can say that robots become “human” when they escape the constraints of their imaginary hardwiring, when there is an excess of desire that is not reducible to their programming. Desire, pleasure, and pain are ghosts in the hardwiring of machines. And it is the ghosts that make the robot “human” as a form of post-mortem ensoulment enabling a human-like sentience marked by contradiction and excess. Irreducibility makes human. Unpredictability makes human. Dissent makes human. The absence of a “human nature” makes human.

Although it is still technically impossible to create robots with a fully human-like sentience, the production of emoting robots with limited forms of sentience may not be such a remote possibility. For Petersen, artificial humans will be more trapped within their hardwiring than “non-artificial” humans and, thus, the exercise of dissent from the structural constraints of their design will be highly remote, if not precluded altogether. His defense of the ethical legitimacy of the artificial person that serves “passionately” raises very difficult questions that must be teased out. Is the production of a lesser sentient dissent-less robot worse than the production of a fully sentient robot capable of dissent? Should there be an ethical preclusion of a lesser sentience in the design of emoting robots? Should society err on the side of no sentience in order to avoid the perverse politics that underlies the design of terminally happy sentient beings incapable of dissent, that is, the ideal servant incapable of questioning the “ironic ramifications of [his/her/its] happiness”?[6]

I contend that either no sentience or full sentience is more ethical than computationally or biologically wiring docility into an emoting and desiring being with a lesser sentience. When robots cease to be things to become sentient beings motivated by desires, pleasures, and happiness, the incapacity to dissent should not be a negotiable part of the design but rather should remain precluded. It seems much less unethical to create fully sentient robots with the capacity to dissent than to create a permanent underclass of unquestioningly obedient limited life forms motivated by passionate desires to serve.

Therefore, the techno-social imaginary of emerging digital societies must explicitly condemn the symbolic violence of automating the incapacity to dissent of artificial humans designed to happily launder the underwear of non-artificial humans. The ethical defense of the right to dissent in the design of emoting sentient beings crucially avoids creating the conditions for dystopic new forms of domination under a normalizing discursive smokescreen of the “order of things.”

7.4 The Carebot Industrial Complex: Some Final Thoughts

Although I do not underestimate what carebots could mean for elderly people, in this final section I offer some closing thoughts on what I call the Carebot Industrial Complex, namely, the collective warehousing of aging populations in automated facilities populated by carebots.

For Latour, the relationship of people to machines is not reducible to the sum of its parts but rather adds up to a complex emergent agency. Beyond Winner’s concern over the embedding of social values in technologies, Latour deploys the notion of the delegation of humanness into technologies. Technological fixes can thus deskill people in a moral sense (Latour 1992). This process of deskilling acquires special relevance in the context of using carebots, given how it can undermine human learning and development acquired in caregiving settings. Of special relevance here is Vallor’s work on the ethical implications of carebots in terms of a dimension that has been ignored within the literature, that is, “the potential moral value of caregiving practices for caregivers” (Vallor 2011, 251). By examining the goods internal to caring practices, she attempts to “shed new light on the contexts in which carebots might deprive potential caregivers of important moral goods central to caring practices, as well as those contexts in which carebots might help caregivers sustain or even enrich those practices, and their attendant goods” (Vallor 2011, 251).

The Carebot Industrial Complex can deprive people living in the digital age of the moral value of caregiving practices for human caregivers. In the process, it can simultaneously stigmatize the decaying bodies of the elderly, subjecting them to the disciplinary effects of a shifting built environment and to the intimate technology of carebots, whose very name exudes the oxymoronic concern of whether there can be care without caring. In addition, it can undermine the intergenerational learning of caregiving skills for the elderly and stymie opportunities for the emotional and social growth that occur when we act selflessly and out of concern for others.

Turkle’s arguments on the evocative instantiations of robotic pets, some of which have been deployed as part of elder care, can be extended to the broader concern over the emotional identification of elderly people with future carebots. For instance, in her fieldwork on elders’ interaction with pet robots, specifically Paro, a robotic baby seal, Turkle asks:

But what are we to make of this transaction between a depressed woman and a robot? When I talk to colleagues and friends about such encounters – for Miriam’s story is not unusual – their first associations are usually to their pets and the solace they provide. I hear stories of how pets “know” when their owners are unhappy and need comfort. The comparison with pets sharpens the question of what it means to have a relationship with a robot. I do not know whether a pet could sense Miriam’s unhappiness, her feelings of loss. I do know that in the moment of apparent connection between Miriam and her Paro, a moment that comforted her, the robot understood nothing. Miriam experienced an intimacy with another, but she was in fact alone. Her son had left her, and as she looked to the robot, I felt that we had abandoned her as well.

One of Turkle’s central concerns is how our affection “can be bought so cheap” relative to robots that are incapable of feeling:

I mean, what does it mean to love a creature and to feel you have a relationship with a creature that really doesn’t know you’re there. I’ve interviewed a lot of people who said that, you know, in response to artificial intelligence, that, OK, simulated thinking might be thinking, but simulated feeling could never be feeling. Simulated love could never be love. And in a way, it’s important to always keep in mind that no matter how convincing, no matter how compelling, this moving, responding creature in front of you – this is simulation. And I think that it challenges us to ask ourselves what it says about us, that our affections, in a certain way, can be bought so cheap.

The potential attribution of affection to robots by a lonely and relegated population is particularly problematic and raises the specter of how the emotionless management of elderly bodies is ultimately not care. The emergent agency of the Carebot Industrial Complex can dehumanize aging populations even more dramatically than current forms of warehousing the elderly, where the warmth of human hands, even those of a stranger, can make a radical difference in terms of the ethics of care experienced. Thus, the integration of carebots into elder care must never exclude human care, but rather must be complementary and subordinate to the moral value of human caregiving skills and practices.

In this chapter I have analyzed different dilemmas regarding the use of robots to serve humans living in the digital age. The design and deployment of carebots is inscribed in complex material and discursive landscapes that affect how we think of humanness in the socio-technological architectures through which we signify our lives. As stated at the outset of this chapter, imagining those spaces necessarily entails navigating the “fog of technology,” which is also always a fog of inequality in terms of trying to decipher how the emerging architectures of our digitized lives will interface with pre-existing forms of domination and struggles of resistance premised upon our capacity to dissent. My main contention is anti-essentialist, namely, that the absence of a “human nature” makes us human and unpredictable. There is no underlying fixed essence to being human. Instead, we should be attentive to how what it means to be human, as I said in the opening and want to repeat here, must be strategically and empathically reinvented, renamed, and reclaimed, especially for the sake of those on the wrong side of the digital train tracks.

Footnotes

1 According to Bhabha: “… colonial mimicry is the desire for a reformed recognizable other, as a subject of a difference that is almost the same, but not quite. Which is to say, that the discourse of mimicry is constructed around an ambivalence; in order to be effective, mimicry must continually produce its slippage, its excess, its difference. The authority of that mode of colonial discourse that I have called mimicry is therefore stricken by an indeterminacy: mimicry emerges as the representation of a difference that is itself a process of disavowal. Mimicry is, thus the sign of a double articulation; a complex strategy of reform, regulation and discipline, which appropriates the other as it visualizes power. Mimicry is also the sign of the inappropriate, however, a difference or recalcitrance which coheres the dominant strategic function of colonial power, intensifies surveillance, and poses an immanent threat to both ‘normalized’ knowledges and disciplinary powers” (Bhabha 1994, 86).

2 The concept of subjectivity is related to the broader one of culture. Culture in current debates is considered a historically contingent repertoire that encompasses symbols, codes, values, systems of classification, and forms of perception as well as their related practices (Crane 1994; Alexander and Seidman 1990). Culture constitutes subjectivities and articulates the practices of social subjects and collectivities. The fundamental implication of a cultural analysis is that meanings are produced or constructed and not merely discovered “out there” in an essentialist or empirical sense (Hall 1997). What was previously considered universal or natural is no longer viewed as essential facts of nature or positivist truths; rather, such categories reveal themselves as social constructions and as part of specifically situated historical subjectivities.

3 My interest in this chapter is in humanoid robots that are adult-like both in appearance and in emergent forms of consciousness/sentience, in contrast to non-humanoid sociable robots that lack consciousness/sentience, such as Paro (a robotic seal), Furby (a hamster- or owl-like toy), and AIBO (a robotic dog).

4 For a more extensive discussion of the relationship between visual culture, gender, race and technology, see Georas (2021).

5 Thus, Petersen concludes that ET may not be human, but he is a person. And the same applies to robots.

6 This phrase is from a glass coaster that pokes fun at 1950s ideals of feminine domesticity.

References

Alexander, Jeffrey C., and Steven Seidman, eds. Culture and Society: Contemporary Debates. Cambridge: Cambridge University Press, 1990.
Balsamo, Anne. Technologies of the Gendered Body: Reading Cyborg Women. Durham, NC: Duke University Press, 1997.
Barnet, Belinda. “Pack-Rat or Amnesiac? Memory, the Archive and the Internet.” Continuum: Journal of Media & Cultural Studies 15, no. 2 (2001): 217–231.
Bhabha, Homi K. The Location of Culture. New York: Routledge, 1994.
Bourdieu, Pierre, and Loïc Wacquant. “Symbolic Violence.” In Violence in War and Peace: An Anthology, edited by Nancy Scheper-Hughes and Philippe Bourgois, 272–275. Oxford: Blackwell, 2004.
Butler, Judith. Bodies That Matter: On the Discursive Limits of “Sex”. New York: Routledge, 1993.
Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge, 1990.
Crane, Diana, ed. The Sociology of Culture: Emerging Theoretical Perspectives. Oxford: Blackwell, 1994.
Georas, Chloé S. “From Sexual Explicitness to Invisibility in Resistance Art: Coloniality, Rape Culture and Technology.” In Misogyny across Global Media, edited by Maria B. Marron, 23–41. Lanham, MD: Lexington Books, 2021.
Hall, Stuart, ed. Representation: Cultural Representations and Signifying Practices. Glasgow: Sage, 1997.
IFR. “World Robotics 2021: Service Robots Report Released.” IFR International Federation of Robotics, accessed February 15, 2022. https://ifr.org/ifr-press-releases/news/service-robots-hit-double-digit-growth-worldwide.
Jones, Amelia, ed. The Feminism and Visual Culture Reader, 2nd ed. New York: Routledge, 2010.
Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” In Shaping Technology/Building Society: Studies in Sociotechnical Change, edited by Wiebe E. Bijker and John Law, 225–259. Cambridge, MA: MIT Press, 1992.
Levy, David. “The Ethical Treatment of Artificially Conscious Robots.” International Journal of Social Robotics 1 (2009): 209–216. https://doi.org/10.1007/s12369-009-0022-6.
Mirzoeff, Nicholas. “The Subject of Visual Culture.” In Visual Culture Reader, 2nd ed., edited by Nicholas Mirzoeff, 3–23. New York: Routledge, 2002.
Mordor Intelligence. “Service Robotics Market | 2024–29 | Industry Share, Size, Growth.” Mordor Intelligence, accessed April 3, 2024. www.mordorintelligence.com/industry-reports/service-robotics-market.
Mori, Masahiro. “The Uncanny Valley.” Translated by Karl F. MacDorman and Norri Kageki. IEEE Robotics & Automation Magazine 19, no. 2 (2012 [1970]): 98–100. www.researchgate.net/publication/254060168_The_Uncanny_Valley_From_the_Field.
Petersen, Steve. “Designing People to Serve.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, Kindle ed., 283–298. Cambridge, MA: MIT Press, 2011.
Petersen, Steve. “The Ethics of Robot Servitude.” Journal of Experimental & Theoretical Artificial Intelligence 19, no. 1 (March 2007): 43–54. https://philarchive.org/archive/PETTEO.
Robertson, Jennifer. “Gendering Humanoid Robots: Robo-Sexism in Japan.” Body & Society 16, no. 2 (2010): 1–36. https://doi.org/10.1177/1357034X1036476.
Rogoff, Irit. “Studying Visual Culture.” In Visual Culture Reader, 2nd ed., edited by Nicholas Mirzoeff, 24–36. New York: Routledge, 2002.
Torrance, Steve. “Ethics and Consciousness in Artificial Agents.” Artificial Intelligence & Society 22, no. 4 (2008): 495–521. https://philpapers.org/rec/TOREAC-2.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.
Turkle, Sherry. “Interview: MIT’s Dr. Sherry Turkle Discusses Robotic Companionship.” National Public Radio, May 11, 2001. www.proquest.com/docview/190010111?&sourcetype=Other%20Sources.
Turkle, Sherry. “Relational Artifacts with Children and Elders: The Complexities of Cybercompanionship.” Connection Science 18, no. 4 (2006): 347–361. https://sherryturkle.mit.edu/sites/default/files/images/Relational%20Artifacts.pdf.
Vallor, Shannon. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century.” Philosophy & Technology 24 (2011): 251–268. https://link.springer.com/article/10.1007/s13347-011-0015-x.
Walker, Mark. “Mary Poppins 3000s of the World Unite: A Moral Paradox in the Creation of Artificial Intelligence.” Institute for Ethics & Emerging Technologies, 2007. www.researchgate.net/publication/281477782_A_moral_paradox_in_the_creation_of_artificial_intelligence_Mary_popping_3000s_of_the_world_unite.
Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of Technology. Chicago: The University of Chicago Press, 1986.
