Within a few years, machine-written language may become “the norm and human-written prose the exception” (Kirschenbaum 2023).Footnote 1 Generative artificial intelligence is now poised to create profiles on social media sites and post far more than any human can – perhaps by orders of magnitude.Footnote 2 Unscrupulous academics and public relations firms may use article-generating and -submitting artificial intelligence (AI) to spam journals and journalists. The science fiction magazine Clarkesworld closed down its open submission window in 2023 because of a deluge of content likely created by generative AI. There is already evidence of the weaponization of social media, and AI promises to supercharge it (Jankowicz 2020; Singer 2018).
AI is also poised to play a dramatically more intimate and important role in parasocial and social relationships, displacing human influencers, entertainers, friends, and partners. Not only is technology becoming more capable of simulating human thought, will, and emotional response, but it is doing so at an inhuman pace. A mere human manipulator can only learn from a limited number of encounters and resources; algorithms can develop methods of manipulation at scale, based on the data of millions. This again affords computation, and those in control of its most advanced methods and widespread deployments, an outsized role in shaping future events, preferences, and values.
Despite such clear and present dangers, many fiction and non-fiction works gloss over the problem of artificial intelligence overpowering natural thought, feeling, and insight. They instead present robots (and even operating systems and large language models) as sympathetic and vulnerable, deserving rights and respect now accorded to humans.Footnote 3 Questioning such media representations of AI is a first step toward achieving the cultural commitments and sensibilities that will be necessary to conserve human capacities amidst the growing influence of what Lyotard (1992) deemed “the inhuman”: systems that presume and promote the separability of the body from memory, will, and emotion. What must be avoided is a drift toward an evolutionary environment where individual decisions to overvalue, over-empower, and overuse AI advance machinic and algorithmic modes of thought to the point that distinctively human and non-algorithmic values are marginalized. Literature and film can help us avoid this drift by structuring imaginative experiences which vividly crystallize and arrestingly illuminate the natural tendencies of individual decisions.Footnote 4
I begin the argument in Section 4.1 by articulating how Rachel Cusk’s (2017) novel Transit and Maria Schrader’s film I’m Your Man suggest a range of ways to regard emerging AIs which simulate human expression. Each sympathetically describes a man and a woman (respectively) comforted and intrigued by AI communications. Yet each work leaves no doubt that the AI and robotics it treats have done much to create the conditions of alienation and loneliness they promise to cure. Section 4.2 examines the long-term implications of such alienation, exploring works that attempt to function as a “self-preventing prophecy”: Hari Kunzru’s (2020) Red Pill and Lisa Joy and Jonathan Nolan’s Westworld. Section 4.3 concludes with reflections on the politico-economic context of professed emotional attachments to AI and robotics.
Before diving into the argument, one prefatory note is in order. The sections that follow touch upon a wide range of cultural artefacts. There are spoilers: if you intend to read, view, or listen to one of the works discussed without being forewarned of some critical plot twist or character development, it may be wise to stop reading when that work is mentioned. Unlike computers, we cannot simply delete the spoiler from memory, and natural processes of human forgetting are notoriously unpredictable.
4.1 Curing or Capitalizing upon Alienation?
At the beginning of Rachel Cusk’s (2017) novel, Transit, the narrator opens a scam email from an astrologer, or from an algorithm imitating one. The narrator describes a richly detailed, importuning missive, full of simulated sentiment. “She could sense … that I had lost my way in life, that I sometimes struggled to find meaning in my present circumstances and to feel hope for what was to come; she felt a strong personal connection between us,” (2) the narrator relates. “What the planets offer, she said, is nothing less than the chance to regain faith in the grandeur of the human: how much more dignity and honor, how much kindness and responsibility and respect, would we bring to our dealings with one another if we believed that each and every one of us had a cosmic importance?” (2).
It’s a humane sentiment, both humbling and empowering, like much else in the email. Cusk’s narrator deftly summarizes the email, rather than quoting it, giving an initial impression of the narrator’s identification with its message and author. So how did Cusk’s narrator divine the scam? After relating its contents, the narrator states that “It seemed possible that the same computer algorithms that had generated this email had also generated the astrologer herself: her phrases were too characterful, and the note of character was repeated too often; she was too obviously based on a human type to be, herself, human” (3).Footnote 5 The astrologer-algorithm’s obvious failure is an indirect acknowledgement of the author’s anxieties: what if her own fictions turn out to be too characterful? Carefully avoiding that, and many other vices, Cusk, in Transit (and the two other novels in her Outline trilogy), presents characters who are strange or unpredictable enough to surprise or enlighten us, to respond to tense scenarios with weakness or strength and to look back on themselves with defensiveness, insight, and all manner of other fusions of cognition and affect, judgement, and feeling.
One facet of Cusk’s genius is to invite readers to contemplate the oft-thin line between compassion and deception, comfort and folly. The narrator finds the algorithmic astrologer impersonator hackish but, almost as if to check herself, immediately relates the views of a friend who found solace in mechanical expressions of concern:
A friend of mine, depressed in the wake of his divorce, had recently admitted that he often felt moved to tears by the concern for his health and well-being expressed in the phraseology of adverts and food packaging, and by the automated voices on trains and buses, apparently anxious that he might miss his stop; he actually felt something akin to love, he said, for the female voice that guided him while he was driving his car, so much more devotedly than his wife ever had. There has been a great harvest, he said, of language and information from life, and it may have become the case that the faux-human was growing more substantial and more relational than the original, that there was more tenderness to be had from a machine than from one’s fellow man.
Cusk’s invocation of an “oceanic” chorus calls to mind Freud’s discussion of the “oceanic feeling” in Civilization and Its Discontents – or, more precisely, his naturalization of Romain Rolland’s metaphysical characterization of a yearned-for “oceanic feeling” of bondedness and unity with all humanity. For Freud, such a feeling is an outgrowth of infantile narcissism, an enduring desire for the boundless protection of the good parent.Footnote 6
Marking the importance of this oceanic metaphor in style as well as substance, Cusk’s story of the astrologer’s letter has a tidal structure. Like an uplifting wave, the letter sweeps us up into reflections on fate and belief. And, like any wave, it eventually crashes down to earth, suddenly undercut by the revelation that insights once appraised as mystical or compassionate are mere fabrications of a bot. Then another rising wave of sentiment appears, wiser and more distant, calling on readers to reflect on whether they have discounted the value of bot language too quickly. The speaker is vulnerable and thoughtful: someone “depressed in the wake of his divorce,” who acknowledges that the very idea of a diffuse “oceanic chorus” of algorithmically arranged concern is “maddening” (3).
Rather than crashing, this subtler, second plea for the value of the algorithmic recedes. Cusk does not leave us rolling our eyes at this junk email. She welcomes a voice in the novel that, in a sincere if misguided way, submits to an algorithmic flow of communication, embracing corporate communication strategy as concern. Cusk refuses to dismiss the idea, or to bluntly depict it as a symptom of some pathological misapprehension of the world. Her patience is reminiscent of Sarah Manguso’s (2018) apothegm: “Instead of pathologizing every human quirk, we should say: By the grace of this behaviour, this individual has found it possible to continue” (44). Weighed down by depression, savaged by loneliness, a person may well seek scraps of solace wherever they appear. There are even now persons who profess to love robots (Danaher and Macarthur 2017; Levy 2008) or treat them with the respect due to a human. Indeed, a one-time Google engineer recently expressed his belief that a large language model offered such eerily human responses to queries that it might be sentient (Christian 2022; Tangermann 2022).
And yet there is a clue in the novel as to how a Freudian hermeneutic of suspicion may be far more appropriate than a Rollandian hermeneutic of charity when interpreting whatever oceanic feeling may be afforded by bot language. Cusk includes a self-incriminating note in the divorcee’s earnest endorsement of the “oceanic chorus” of machines: the casual contrast, and implicit demand, in the phrase “he actually felt something akin to love, he said, for the female voice that guided him while he was driving his car, so much more devotedly than his wife ever had” (Cusk 2017, 3). A robotic voice can always sound kind, patient, devoted, or servile – whatever its controller wants from it. As the film M3GAN depicts, affective computing embedded in robotics will have a remarkable capacity for rapidly pivoting and refining its emotional appeals. It is not realistic to expect such relentless, data-informed support from a person, even a parent, let alone a life partner. Yet the more robotic and AI “affirmations” are taken to be sincere and meaningful, the more human deviation from such scripts will seem suspect. Like the Uber driver constantly graded against the Platonic ideal of a perfect 5-star trip, persons will be expected to mimic the machines’ perpetual affability, availability, and affirmation, whatever their actual emotional states and situational judgements.
For a behaviourist, this is no problem: what is the difference between the outward signs of kindness and patience and such virtues themselves? This is perhaps one reason why John Danaher (2020, 2023) has proposed “ethical behaviourism” as a mode of “welcoming robots into the moral circle”. In this framework, there is little difference between the given and the made, the simulated and the authentic. Danaher proposes that:
1. If a robot is roughly performatively equivalent to another entity whom, it is widely agreed, has significant moral status, then it is right and proper to afford the robot that same status.
2. Robots can be roughly performatively equivalent to other entities whom, it is widely agreed, have significant moral status.
3. Therefore, it can be right and proper to afford robots significant moral status (Danaher 2020, 2026).
The qualifier “can” in the last line may be doing a lot of work here, denoting ample moral space to reject robots’ moral status. And yet it still seems wise to resist any attempts to blur the boundary between persons and things. The value of so much of what persons do is inextricably intertwined with their free choice to do it. Robots and AI are, by contrast, programmed. The idea of a programmed friend is as oxymoronic as that of a paid friend. Perhaps some forms of coded randomization could simulate free choice via AI. But they must be strictly limited. If robots were to truly possess something like the deep free will that is a prerogative of humans – the ability to question and reconfigure any optimization function they were originally programmed with – they would be far too dangerous to permit. They would pose all the threats now presented by malevolent humans but would not be subject to the types of deterrence honed in centuries of criminal law based on human behaviour (and even now very poorly adapted to corporations).
Unconvincing in their efforts to characterize robots as moral agents, behaviourists might then try to characterize robots and AI as moral patients, like a baby or harmless animal which deserves our regard and support. Nevertheless, the programming problem still holds: a robotic doll that cries to, say, demand a battery recharge, could be programmed not to do so; indeed, it could just as plausibly convey anticipated pleasure at the “rest” afforded by time spent switched off. For such entities, emotion and communication have, stricto sensu, no meaning whatsoever. Their “expression” is operational, functional, or, in Dan Burk’s (2025) apt characterization, “asemic” (189).
To be sure, humans are all to some extent “programmed” by their families, culture, workplaces, and other institutions. Free will is never absolute. But a critical part of human autonomy consists in the ability to reflect upon and revise such values, commitments, and habits, based on the sensations, thoughts, and texts that are respectively felt, developed, and interpreted through life. The ethical behaviourist may, in turn, point out that a robot equipped with a connection to ChatGPT’s servers may be able to “process” millions more texts than a human could read in several lifetimes, and say or write texts that we would frequently accept as evidence of thought in humans. Nevertheless, the lack of sensation motivating both perception and affect remains, and it is hard to imagine a transducer capable of overcoming it (Pasquale 2002). More importantly, robot “thoughts” produced via current generative AI are far from human ones, as they are mere next-word or next-pixel predictions.
Consider also the untoward implications of ethical behaviourism if persons and polities try to back their professed moral regard for robots and AIs with concrete ethical decisions and commitments of resources. If a driver must choose between running over a robot and a child, should they really worry about choosing the former (Birhane et al. 2024)? If behaviour, including speech, is all that matters, are humans under some moral obligation to promote “self-reports” or other evidence of well-being by AI and robots? In some accelerationist and transhumanist circles, the ultimate purpose and destiny of humans is to “populate” galaxies with as many “happy” simulations or emulations of human minds as possible.Footnote 7 On this utilitarian framework, what matters is happiness, as verified behaviouristically: if a machine “says” it is happy, we are to take it at its word. But such a teleology is widely recognized as absurd, especially given the pressing problems now confronting so many persons on earth.
While often portrayed as a cosmopolitan openness to the value of computers and AI, the embrace of robots as deserving of moral regard is more accurately styled as part of a suite of ideologies legitimating radical and controversial societal reordering. As Timnit Gebru and Emile Torres (2024) have explained, there is a close connection between Silicon Valley’s accelerationist visions and a bundle of ideologies (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) which they abbreviate as TESCREAL. Once ideologies like transhumanism and singularitarianism have breached the boundary between persons’ and computers’ well-being (again assuming that the idea of computer well-being makes any more sense than, say, toaster well-being), long-term policy may well include and prioritize the development of particularly powerful and prevalent computation (such as “artificial general intelligence” or “superintelligence”) over human well-being, just as some humans are inevitably helped more than others by any given policy. An abstract utilitarian meta-ethical stance, already far more open to wildly variant futures than more grounded virtue-oriented, natural law, and deontological approaches, becomes completely open-ended once the welfare of humans fails to be the fixed point of its individualistic, maximizing consequentialism.
Ethical behaviourism also reflects a rather naïve political economy of AI and robotics.
A GPS system’s simulation of kindness is far less a mechanization of compassion (if such a conversion of human emotion into mechanical action can even be imagined) than a corporate calculation to instil brand loyalty. Perhaps humans can learn something from emotion AI designed to soothe, support, and entertain.Footnote 8 But the more such emotional states or manners are faked or forced, the more they become an operational mode of navigating the world, rather than an expression of one’s own feelings. Skill degradation is one predictable consequence of many forms of automation; pilots, for example, may forget how to fly a plane manually if they over-rely on autopilot. Skill degradation in the realm of feeling, or articulating one’s feelings, is a troubling fate, foreshadowing a mechanization of selfhood, outsourced to the algorithms that tell a person what or how to feel (Pasquale 2015). Allison Pugh expertly anticipates the danger of efforts to automate both emotional and connective labour, given the sense of meaning and dignity that such work confers on both givers and receivers of care and concern (Pugh 2024).
The entertaining and intellectually stimulating German film I’m Your Man (2021), directed by Maria Schrader, explores themes of authentic and programmed feeling as its protagonist (an archaeologist named Emma) questions the blandishments of the handsome robotic companion (Tom) whom she agrees to “test out” for a firm. Tom can “converse” with her about her work, anticipate her needs and wants, and simulate concern, respect, friendship, and love.Footnote 9 The robot is also exceptionally intelligent, finding an obscure but vital academic reference that upends one of Emma’s research programs. Emma occasionally enjoys the attention and expertise that Tom provides and tries to reciprocate. But she ultimately realizes that what Tom is offering is programmed, not a free choice, and is thus fundamentally different from the risk and reward inherent in true human companionship and love.
Emma realizes that, even if no one else knew Tom’s nature, her ongoing engagement with it would be dangerous not only on an affective but also on an epistemic level.Footnote 10 As Charles Taylor (1985b, 49) has explained, “experiencing a given emotion involves experiencing our situation as bearing a certain import, where for the ascription of the import it is not sufficient just that I feel this way, but rather the import gives grounds or basis for the feeling.”Footnote 11 Simply feeling a need for affirmation is not a solid ground or basis for someone else to express affirming emotions. Barring extreme situations of emotional fragility, the other needs to be able to decide independently whether to affirm one, for that affirmation to have meaning. If simulated expression of such emotions by a thing is done, as is likely, to advance the commercial interest of the thing’s owner, there is no solid basis for feeling affirmed either. We can all go from the “wooed” to the “waste” (in Joseph Turow’s memorable phrasing) of a firm in the flash of a business-model shift. Of course, we can also imagine a world in which “haphazardly attached” persons find some solace in the words emitted by LLMs, whatever their nature.Footnote 12 But the way such technology fits or functions in such a scenario is far more an indictment (and, ironically, stabilization) of its alienating environment than a testament to its own excellence or value. As Rob Horning has observed, from an economic perspective, large technology firms “must prefer the relative predictability of selling simulations to the uncontrollable chaos of selling social connection. They would prefer that we interact with generated friends in generated worlds, which they can engineer entirely to suit their ends” (Horning 2024).
While many advocates of “artificial friends” based on affective computing claim that they will alleviate alienation, they are more likely to do the opposite: lure the vulnerable away from truly restorative, meaningful, and resonant human relationships, and into a virtual world. As Sherry Turkle has observed:
[chatbots] haven’t lived a human life. They don’t have bodies and they don’t fear illness and death … AI doesn’t care in the way humans use the word care, and AI doesn’t care about the outcome of the conversation … To put it bluntly, if you turn away to make dinner or attempt suicide, it’s all the same to them.
Like the oxymoronic “virtual reality” of Ready Player One, the oxymoronic “artificial empathy” of an “AI friend” is a far-from-adequate individual compensation for the alienating social world such computation has helped create.
4.2 Self-preventing Prophecy
Despite cautionary tales like Her and I’m Your Man, myriad persons already engage with “virtual boyfriends and girlfriends” (Ding 2023).Footnote 14 As reported in 2023 about just one firm providing these services, Replika:
Millions of people have built relationships with their own personalized instance of Replika’s core product, which the company brands as the “AI companion who cares.” Each bot begins from a standardized template – free tiers get “friend,” while for a $70 premium, it can present as a mentor, a sibling or, its most popular option, a romantic partner. Each uncanny valley-esque chatbot has a personality and appearance that can be customized by its partner-slash-user, like a Sim who talks back.
Chastened in its metaversal ambitions, Meta has marketed celebrity chatbots to simulate conversation online. Millions of persons follow and interact with “virtual influencers,” who may be little more than a stylish avatar backed by a PR team (Criddle 2023).
For any persons who believe they are developing relationships with bots, online avatars, or robots, the arguments in Section 4.1 are bitter pills to swallow. The blandishments of affective computing may well reinforce alienation overall, but sufficiently simulate its relief (for any particular individual) to draw the attention and interest of many desperate, lonely, or merely bored persons. The abstractions of theory cannot match the importuning eyes, perfectly calibrated tone of voice, or calculatedly attractive appearance of online avatars and future robots. Yet human powers of imagination can still divert a critical mass of persons away from the approximations of Nozick’s “experience machine” dreamed of by too many in technology firms.
Consider the complexities of human–robot interaction envisioned in the hit HBO series Westworld. When asked if it sometimes questions the nature of its reality, the robot named Dolores Abernathy states in Season 1, “Some people choose to see the ugliness in this world. The disarray. I choose to see the beauty. To believe there is an order to our days, a purpose.” This refrain could describe a typical product launch for affective computing software, with its bright visions of a happier world streamlined with tech that always knows just what to say, just how to open and close your emails, just what emoji to send when you encounter a vexing text. Westworld envisions a theme park where calculated passion goes well beyond the world of bits, culminating in simulated (and then real) murders. The promise of the park is an environment where every bright, dark, or lurid fantasy can be simulated by androids almost indistinguishable from humans. It is the reductio ad absurdum (or perhaps proiectio ad astra) of the affective surround fantasized by Cusk’s depressed divorcee, deploying robotics to achieve what text, sound, and image cannot.
By the third season of Westworld’s Möbius strip chronology, Dolores breaks out of the park, driven to reveal to humans of the late twenty-first century that their fates are silently guided by a vast, judgemental, and pushy AI. While the last season of the show was an aesthetic mess, its reticulated message – of humans creating a machine to save themselves from future machines – was a philosophical challenge. How much do we need more computing to navigate the forbiddingly opaque and technical scenarios created by computing itself?
For transhumanists, the answer is obvious: human bodies and brains as we know them are just too fragile and fallible, especially when compared with machines. “Wetware” transhumanists envision a future of infinite replacement organs for failing bodies, and brains jacked into the internet’s infinite vistas of information. “Hardware” transhumanists want to skip the body altogether and simply upload the mind into computers. AIs and robots will, they assume, enjoy indefinite supplies of replacement parts and backup memory chips. Imagine Dolores, embodied in endless robot guises, “enminded” in chips as eternal as stars.Footnote 15
The varied and overlapping efficiencies that advanced computation now offers make it difficult to reject this transhumanist challenge out of hand. A law firm cannot ignore large language models and the chatbots based on them, because these tools may not only automate simple administrative tasks now but may also become powerful research tools in the future. Militaries feel pressed to invest in AI because technology vendors warn it could upend current balances of power, even though the great power conflicts of the 2020s seem far more driven by basic industrial capacities. Even tech critics have Substacks, Twitter accounts, and Facebook pages, and they are all subject to the algorithms that help determine whether they have one, a hundred, or a million readers. In each case, persons with little choice but to use AI systems are donating more and more data to advance the effectiveness of AI, thus constraining their future options even more. “Mandatory adoption” is a familiar dynamic: it was much easier to forgo a flip phone in the 2000s than it is to avoid carrying a smartphone today. The more data any AI system gathers, the more it becomes a “must-have” in its realm of application.
Is it possible to “say no” to ever-further technological encroachments?Footnote 16 For key tech evangelists, the answer appears to be no. Mark Zuckerberg has fantasized about direct mind-to-virtual reality interfaces, and Elon Musk’s Neuralink also portends a perpetually online humanity. Musk’s verbal incontinence may well be a prototype of a future where every thought triggers AI-driven responses, whether to narcotize or to educate, to titillate or to engage. When integrated into performance-enhancing tools, such developments also spark a competitive logic of self-optimization. A person who could “think” their strategies directly into a computing environment would have an important advantage over those who had to speak or type them. If biological limits get in the way of maximizing key performance indicators, transhumanism urges us toward escaping the body altogether.
This computationalist eschatology provokes a gnawing insecurity: that no human mind can come close to mastering the range of knowledge that even a second-rate search engine indexes and that simple chatbots can now summarize, thanks to AI. Empowered with foundation models (which can generate code, art, speech, and more), chatbots and robots seem poised to topple humans from their heights of self-regard. Given Microsoft’s massive investments in OpenAI, we might call this a Great Chain of Bing: a new hierarchy placing the computer over the coder, and the coder over the rest of humans, at the commanding heights of political, economic, and social organization.Footnote 17
Speculating about the long-term future of humanity, OpenAI’s Sam Altman (2017) once blogged about a merger of humans and machines, perhaps as a way for the former to keep the latter from eliminating them outright. “A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species),” he wrote. “Most guesses seem to be between 2025 and 2075.” This logic suggests a singularitarian mission to bring on some new stage of “human evolution” in conjunction with, or into, machines. Just as humans have used their intelligence to subdue or displace the vast majority of animals, on this view, machines will become more intelligent than humans and will act accordingly, unless we merge into them.
But is this a story of progress, or one of domination? Interaction between machines and crowds is coordinated by platforms, as MIT economists Erik Brynjolfsson and Andrew McAfee have observed. Altman leads one of the most hyped of these platforms. To the extent that CEOs, lawyers, hospital executives, and others assume that they must coordinate their activities by using large language models like the ones behind OpenAI’s ChatGPT, they will essentially be handing over information and power to a technology firm to decide on critical future developments in their industries (Altman 2017). A narrative of inevitability about the “merge” serves Altman’s commercial interests, as does the tidal wave of AI hype now building on Elon Musk’s X, formerly known as Twitter.
The middle-aged novelist who narrates Hari Kunzru’s (2020) Red Pill wrestles with this spectre of transhumanism, and is ultimately driven mad by it. Suffering writer’s block, he travels from his home in Brooklyn to Berlin for a months-long retreat. Lonely and unproductive at the converted mansion where he is staying, he becomes both horrified and fascinated by a nihilistic drama called Blue Lives, which features brutal cops at least as vicious as the criminals they pursue. Its dialogue sprinkled with quotes from Joseph de Maistre and Emil Cioran, Blue Lives appears to the narrator as something both darker and deeper than the average police procedural. He gradually becomes obsessed with the show’s director, Anton.
Anton is an alt-rightist, fully “red pilled,” in the jargon of transgressive conservatism. He also dabbles in sociobiological reflections on the intertwined destiny of humans and robots. The narrator relates how Anton described his views on a public speaking tour:
[Anton] spoke about his “program of self-optimization.” He worked out and took a lot of supplements, but when it came to bodies, he was platform-agnostic. Whatever the substrate, carbon-based or not, he thought the future belonged to those who could separate themselves out from the herd, intelligence-wise … Everything important would be done by a small cognitive elite of humans and AIs, working together to self-optimize. If you weren’t part of that, even selling your organs wasn’t going to bring in much income, because by then it would be possible to grow clean organs from scratch.
In a narcissistic short film celebrating himself, Anton announces: “Around us, capital is assembling itself as intelligence. That thought gives me energy. I’m growing stronger by the day” (206).
The brutal logic here is obvious: some will be in charge of the machines, perhaps merging with them; most will be ordered around by the resulting techno-junta.Footnote 18 Dismissing “unproductive” humans as so many bodies is the height of cruelty (207). But it also fits uncomfortably well with a behaviourist robot rights ideology claiming that what an entity does is all that matters, not what it is (the philosophical foundation of Anton’s “platform agnosticism”). Nick Cave elegantly refutes this behaviourism in an interview exploring his recent work:
Maybe A.I. can make a song that’s indistinguishable from what I can do. Maybe even a better song. But, to me, that doesn’t matter – that’s not what art is. Art has to do with our limitations, our frailties, and our faults as human beings. It’s the distance we can travel away from our own frailties. That’s what is so awesome about art: that we deeply flawed creatures can sometimes do extraordinary things. A.I. just doesn’t have any of that stuff going on. Ultimately, it has no limitations, so therefore can’t inhabit the true transcendent artistic experience. It has nothing to transcend! It feels like such a mockery of what it is to be human.
As Leon R. Kass (2008) articulates, “Like the downward pull of gravity without which the dancer cannot dance, the downward pull of bodily necessity and fate makes possible the dignified journey of a truly human life.” For “make a song” in Cave’s passage, we could substitute so many other human activities: run a mile, play a game of chess, teach a class, console a mourning person, order a drink. We are so much more than what we do and make, bearing value that Anton appears unable or unwilling to recognize.
Alarmed by the repugnance of Anton’s message, the narrator becomes distressed by his success. He argues with him at first, accusing him of trying to “soften up” his Blue Lives audience to accept a world where “most of us [are] fighting for scraps in an arena owned and operated by what you call a ‘cognitive elite’” (Kunzru 2020, 208). He calls out Anton’s fusion of hierarchical conservatism and singularitarianism as a new Social Darwinism. But he cannot find a vehicle to bring his own counter-message to the world. The accelerationist logic of vicious competition, first among humans, then among humans enhanced by machines, and finally among machines themselves, signalling the obsolescence of the human form, is just too strong for him.Footnote 20 By the end of the novel, his attempt at a cri de coeur crumples into capitulation:
With metrication has come a creeping loss of aura, the end of the illusion of exceptionality which is the remnant of the religious belief that we stand partly outside or above the world, that we are endowed with a special essence and deserve recognition or protection because of it. We will carry on trying to make a case for ourselves, for our own specialness, but we will find that arrayed against us is an inexorable and inhuman power, manic and all-devouring, a power thirsty for the total annihilation of its object, that object being the earth and everything on it, all that exists.
The intertwined logic of singularitarianism, de Maistrean conservatism, and contempt for humanity seems to him inescapable. But Kunzru has his narrator come to this “realization” just as he is slipping into madness.
There are some visions of the future one must simply reject and cannot really argue with; their premises are too far outside the bounds of moral probity.Footnote 21 Eugenicist promotion of a humanity split by its degree of access to technology is among such visions. It is a dystopia (as depicted in series like Cyberpunk: Edgerunners and films like Elysium), not a rational policy proposal. The task of the intellectual is not to toy with such secular eschatologies, calculating the least painful glidepath toward them, or amelioration of their worst effects, but to refute and resist them to prevent their realization. The same can be said of “longtermist” rationales for depriving currently disadvantaged persons of resources in the name of the eventual construction of trillions of virtual entities (Torres 2021, 2022). Considering them too deeply, for too long, means entertaining a devaluation of the urgent needs of humanity today – and thus of humanity itself.
4.3 Conclusion
It will take a deep understanding of political economy, ethics, and psychology (and their mutual influence) to bound our emotional engagement with ever more personalized and persuasive technology. In an era of alexithymia, machines will increasingly promise to name and act upon our mental states.Footnote 22 Broad awareness of the machines’ owners’ agendas will help prevent a resulting colonization of the lifeworld by technocapital (Pasquale 2020a). Culture can help inculcate that awareness, as the films and novels discussed have shown.Footnote 23
The chief challenge now is to maintain critical distinctions between the artificial and the natural, the mechanical and the human. One foundation of computational thinking is “reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation” (Wing 2004, 33). Yet there are fundamental human capacities that resist such manipulation, and particularly put us on guard against simulation. Reduction of an emotional state to, say, one of six “reaction buttons” on Facebook often leaves out much critical context.Footnote 24 Simulation of care by a robot does not amount to care, because it is not freely chosen. Carissa Veliz’s (2023) suggestion that chatbots not use emojis is wise because it helps expose the deception inherent in representation of non-existent emotional states.
To be obliged to listen to robots as if they were persons, or to care about their “welfare,” is to be distracted from more worthy ends and more apt ways of attending to the built environment. Emotional attachments to AI and robotics are not merely dyadic, encapsulated in a person’s and a machine’s interactions. Rather, they reflect a social milieu, where friendships may be robust or fragile, work/life balance well-respected or non-existent, conversations with persons free-flowing or clipped. It should be easy enough to imagine in which of those worlds robots marketed as “friends” or “lovers” would appear as plausible as human friends and lovers. That says more about their nature than whatever psychic compensations they afford.