
Part II - Living the Digital Life

Published online by Cambridge University Press:  11 November 2025

Beate Roessler, University of Amsterdam
Valerie Steeves, University of Ottawa

Information

Type: Chapter
Book: Being Human in the Digital World: Interdisciplinary Perspectives, pp. 77–142
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0), https://creativecommons.org/cclicenses/


6 Machine Readable Humanity: What's the Problem?

Over the past 15 years, Daniel Howe and Helen Nissenbaum, often working with other collaborators, have launched a series of projects that leverage obfuscation to protect people’s online privacy. Their first project, TrackMeNot, is a plug-in that runs in the background of browsers, automatically issuing false search queries and thereby polluting search logs, making it more difficult or impossible for search engines to separate people’s true queries from noise (TrackMeNot 2024). Howe and Nissenbaum later turned to online behavioural advertising, developing AdNauseam, another browser plug-in that automatically clicks on all ads. It is designed to obfuscate what people are actually interested in by suggesting – via indiscriminate, automatic clicks – that people are interested in everything (AdNauseam 2024).

Each of these projects has been accompanied by academic publications describing the teams' experiences developing the tools but also reflecting on the value of and normative justification for obfuscation (Howe and Nissenbaum 2017; Nissenbaum and Howe 2009). Among the many observations that they make in these papers, Howe and Nissenbaum conclude their article on AdNauseam by remarking that "trackers want us to remain machine-readable … so that they can exploit our most human endeavors (sharing, learning, searching, socializing) in pursuit of profit" (Howe and Nissenbaum 2017). Online trackers don't just record information but do so in a way that renders humans – their sharing, learning, searching, and socializing online – machine-readable and, as such, computationally accessible. For Howe and Nissenbaum, obfuscation is not only a way to protect people's privacy and to protest the elaborate infrastructure of surveillance that has been put in place to support online behavioural advertising; it is specifically a way to resist being made machine-readable.

At the time of the paper's publication, the concept of "machine readability" would have been most familiar to readers interested in technology policy from its important role in advocacy around open government (Yu and Robinson 2012), where there were growing demands that the government make data publicly available in a format that a computer could easily process. The hope was that the government would stop releasing PDFs of tables of data – from which data had to be manually and laboriously extracted – and instead release the Excel sheets containing the underlying data, which could be processed directly by a computer. "Machine readable" thus became a mantra of an open government movement, in service of the public interest. So why do Howe and Nissenbaum invoke machine-readability, in the context of online behavioural advertising, as a threat rather than a virtue, legitimating the disruptive defiance of obfuscation against an inescapable web of surveillance and classification?

In this paper, we address this question in two parts: first, by theorizing what it means for humans to be machine-readable and, second, by exposing conditions under which machine-readability is morally problematic – the first task, as it were, descriptive, the second, normative. Although the initial encounter with problematic machine-readability was in the context of online behavioural advertising, and as a justification for data obfuscation, our discussion aims for a coherent and useful account of machine-readability that can be decoupled from the practices of online behavioural advertising. In giving the term greater definitional precision, descriptively as well as normatively, we seek to contribute to ongoing conversations about being human in the digital age.

6.1 Machine Readability: Top Down

The more established meaning of "machine readable" in both technical and policy discourse applies to material or digital objects that are rendered comprehensible to a machine through structured data, expressed in a standardized format, organized in a systematic manner. Typically, this would amount to characterizing an object in terms of data, in accordance with the predefined fields of a database, in order to render it accessible to a computational system. Barcodes stamped on material objects, as mundane as milk cartons, render them machine readable in this sense. When it comes to electronic objects, there are even different degrees of accessibility; for example, conversion tools can transform photographic images and electronic PDF documents into formats more flexibly accessible to computation according to the requirements of various machines' data structures. Structure and standardization have been important for data processing because many computational operations cannot function over inconsistent types of inputs and cannot parse the different kinds of details in an unstructured record. In the near past, for example, a computer would not have been able to process a government caseworker's narrative account of interviews with persons seeking public benefits. Instead, the account would have been coded according to discrete fields and predefined sets of possible answers, enabling government agencies to automate the process of determining eligibility. Applied to people, machine readability, in this sense, would mean assigning data representations to them according to the predefined, structured data fields required by a given computational system. The innumerable forms we encounter in daily life, requiring us to list name, age, gender, address, and so forth, are instances of this practice.
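To make this "top-down" sense of machine readability concrete, the following minimal sketch (in Python, with hypothetical field names and a toy eligibility rule of our own invention) shows how a caseworker's open-ended interview would have to be reduced to predefined fields with fixed vocabularies before a computational system could act on it:

from dataclasses import dataclass

ALLOWED_HOUSING = {"owner", "renter", "unhoused", "other"}
ALLOWED_EMPLOYMENT = {"employed", "unemployed", "retired", "student"}

@dataclass
class BenefitsRecord:
    applicant_id: str
    age: int
    household_size: int
    monthly_income: float
    housing_status: str      # must be one of ALLOWED_HOUSING
    employment_status: str   # must be one of ALLOWED_EMPLOYMENT

    def __post_init__(self):
        # Anything outside the fixed vocabulary is rejected: the system can
        # only "read" what fits its predefined categories.
        if self.housing_status not in ALLOWED_HOUSING:
            raise ValueError(f"unreadable housing status: {self.housing_status}")
        if self.employment_status not in ALLOWED_EMPLOYMENT:
            raise ValueError(f"unreadable employment status: {self.employment_status}")

def eligible(record: BenefitsRecord) -> bool:
    # A toy eligibility rule operating only on the structured fields.
    return record.monthly_income < 1500 and record.household_size >= 2

record = BenefitsRecord("A-1042", age=37, household_size=3,
                        monthly_income=1200.0, housing_status="renter",
                        employment_status="unemployed")
print(eligible(record))  # True

Everything the caseworker learned that does not fit one of these fields simply disappears from the machine's view; that loss is precisely what the critiques of legibility discussed below take aim at.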

If this were all there is to machine readability, it would not seem obviously wrong, nor would it legitimate, let alone valorize, disruptive protest, such as data obfuscation. It does, however, call to mind long-standing concerns over legibility in critical writing on computing. Here, the concept of legibility refers to the representation of people, their qualities, and their behaviours as structured data representations, the latter ultimately to be acted upon by computers. Representing people as data is not a passive act; rather, data collection can be understood as an ontological project that defines what exists before seeking to measure it. Scholarship has tended to focus on how legibility is achieved by imposing categories on people which, in the very act of representing them in data, is morally objectionable in ways similar to practices of stereotyping and pigeon-holing. In the realm of computation, perhaps most famously, Bowker and Star point out how information systems work to make people and populations legible to social and technical systems by subjecting them to rigid classificatory schemes, forcing them into often ill-fitting categories (Bowker and Star 2000). This wave of critical writing emphasizes the crude ways in which information systems turn a messy world into tidy categories and, in so doing, both elides differences that deserve recognition and asserts differences (e.g. in racial categories) where they are unwarranted by facts on the ground (Agre 1994; Bowker and Star 2000). Scholars like Bowker and Star were keenly aware of the importance of structured information systems representing humans and human activity for the purposes of subsequent computational analysis.

Critiques such as Bowker and Star's echo critical views of bureaucracies that go further back in time, emphasizing the violence that their information practices inflict on human beings by cramming them into pigeonholes, blind to the complexities of social life (Scott 2020). The legacy of much of this work is a deeper appreciation of the indignities of legibility that is achieved through information systems, which have been drawn with top-down, bright lines. Incomplete, biased, inaccurate, and weighted by the vested interests of the people and institutions who wield them (see Agre 1994), these systems are also sources of inconvenience for humans, who may have to contort themselves in order to be legible to them. Echoing these critiques, David Auerbach has observed that making oneself machine readable has historically demanded significant compromise, not just significant effort: "Because computers cannot come to us and meet us in our world, we must continue to adjust our world and bring ourselves to them. We will define and regiment our lives, including our social lives and our perceptions of ourselves, in ways that are conducive to what a computer can 'understand.' Their dumbness will become ours" (Auerbach 2012). During this period, the terms in which we could make ourselves legible to computers were frequently so limited that achieving legibility meant sacrificing a more authentic self for the sake of mere recognition.

6.2 Machine Readability: A New Dynamic

While the operative concerns of this early work on legibility remain relevant, they do not fully account for the perils of machine readability as manifested in online behavioural advertising and similar practices of the present moment. If we agree that automation via digital systems requires humans to be represented as data constructs and that top-down, inflexible, possibly biased and prejudicial categories undermine human dignity and well-being, we may welcome new forms of data absorption and analytics that utilize increasingly powerful techniques of machine learning and AI. Why? To answer, we consider ways that they radically shift what it means to make people machine readable – the descriptive task – and how this new practice, which we call dynamic machine readability, affects the character of its ethical standing. To articulate this concept of machine readability, we provide a sketch – a caricature rather than a scientifically accurate picture (for which we beg our reader’s forbearance) – intended to capture and explain key elements of the new dynamic.

We begin with the bare bones: a human interacting with a machine. Less mysteriously, think of it as any of the myriad computational systems, physically embodied or otherwise, that we regularly encounter, including websites, apps, digital services, or devices. The human in question may be interacting with the system in one of innumerable ways, for example, filing a college application, signing up for welfare, entering a contest, buying shoes, sending an email, browsing the Web, playing a game, posting images on social media, assembling a music playlist, creating a voice memo, and so on. In the approach we labeled "top down," the machine reads the human via data that are generated by the interaction, typically, but not always, provided by the human as input that is already structured by the system. Of course, the structured data through which machines read humans may also be entered by other humans, for example, a clerk or a physician entering data into an online tax or health insurance form, respectively. To make the human legible, the data are recorded in an embedded classification scheme, which may also trigger an appropriate form of response – whether directly by the machine (e.g. an email sent) or by a human-in-the-loop who responds accordingly (e.g. an Amazon warehouse clerk assembles and dispatches a package).

The keyboard and mouse, the dominant data entry media of the early days, limited as they were to predefined input data fields, have been joined by others, such as sound, visual images, and direct behavioural monitoring. The new dynamic accordingly also involves humans and machines engaging in interaction, either directly or indirectly, but it allows a vastly expanded set of input modalities, beyond the keyboard and mouse of a standard computer setup involving data entry through alphanumerics and mouse-clicks. Machines may capture and record the spoken voice, facial and other biometrics, a myriad of non-semantic sensory data generated by mobile devices, and streamed behavioural data, through active engagement or passively in the background (e.g. watching TV and movies on a streaming service).

The new dynamic also incorporates machine learning (ML) models, or algorithms, which are key to making sense of this input. Machines capture and record "raw" data inputs while the embedded models transform them into the information that the system requires in order to perform its tasks, which may involve inferring facts about the individual, making behavioural predictions, deriving intentions, or surmising preferences, propensities, and even vulnerabilities. (Although we acknowledge readers who prefer the plain language of probabilities, we adopt – admittedly – anthropomorphizing terms, such as infer and surmise, which are more common.) Instead of structuring input in terms of pre-ordained categories, this version of machine reading accepts streams of data, which are structured and interpreted by embedded ML models. For example, from the visual data produced by a finger pad pressing on the glass of a login screen, identity is inferred, and a mobile device allows the owner of the finger to access its system. From the streams of data generated by sensors embedded in mobile devices, ML models infer whether we are running, climbing stairs, or limping (Nissenbaum 2019), whether we are at home or at a medical clinic, and whether we are happy or depressed.
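As a deliberately simplified illustration of this kind of reading, the Python sketch below (with invented signal levels and only two activities) learns from labeled accelerometer windows to infer whether the person carrying a device is walking or running; real systems use far richer features and models, but the basic move – raw sensor streams in, an inference about the human out – is the same:

import random
import statistics

def features(window):
    """Summarize a window of accelerometer magnitudes as (mean, spread)."""
    return (statistics.mean(window), statistics.pstdev(window))

def synth_window(activity, n=50):
    # Hypothetical signal levels: running produces larger, more variable readings.
    base, spread = (1.0, 0.1) if activity == "walking" else (2.5, 0.6)
    return [random.gauss(base, spread) for _ in range(n)]

# "Training": compute a centroid of features per activity from labeled examples.
train = [(activity, features(synth_window(activity)))
         for activity in ("walking", "running") for _ in range(20)]
centroids = {}
for activity in ("walking", "running"):
    feats = [f for a, f in train if a == activity]
    centroids[activity] = tuple(statistics.mean(dim) for dim in zip(*feats))

def read_human(window):
    """Infer the activity whose learned centroid is closest to this window."""
    f = features(window)
    return min(centroids, key=lambda a: sum((x - y) ** 2
                                            for x, y in zip(f, centroids[a])))

print(read_human(synth_window("running")))  # usually "running"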

A transition to dynamic machine readability of humans means that it is unnecessary to make ourselves (and others) legible to computational systems in the formal languages that machines had been programmed to read. The caseworker we mentioned earlier may leapfrog the manual work of slotting as many details as possible into the discrete fields of a database and, instead, record the full narrative account (written or spoken) of meetings with their benefits-seeking clients. Language models trained to extract relevant substantive details would, according to proponents, be able to parse these narratives, extract the relevant information, and even, potentially, automate the decision-making itself.

A critical element of our high-level sketch has not yet been revealed, namely, the key parties responsible for the existence of the systems we have been discussing: their builders (designers, software engineers, etc.) and their owners or operators, which may be their creators or the companies and other organizations for whom the systems have been developed. When attributing to ML models a capacity to make sense of a cacophony of structured and unstructured data – specifically, to read the humans with whom a system interacts – one must, simultaneously, bring to light the purposes behind the sense-making, which are in turn dictated by the interests and needs of the controlling parties, including a system's developers and owner-operators. To serve the purposes of online advertising (highly simplistically), for example, a model must be able to read humans who land on a given webpage as likely (or unlikely) to be interested in a particular ad. Moreover, making sense of humans through browsing histories and demographics, for example, according to the dynamic version of machine readability, does not require the classification of human actors in terms of human-comprehensible properties, such as "is pregnant," or even in terms of marketing constructs, such as "white picket fence" (Nissenbaum 2019). Instead, the concepts derived by ML models may be tuned entirely to their operational success, as determined by how likely humans are to demonstrate interest in a particular range of products (however that is determined).

Advertising, obviously, is not the only function that may be served by reading humans in these ways. Although lacking insider access, one may suppose that they could serve other functions just as well, such as helping private health insurance providers determine whether applicants are desirable customers and, if so, what premiums they should be charged – not by knowing or inferring a diagnosis of, say, "early stage Parkinson's disease" but by reading them as "desirable clients of a particular health plan." Importantly, a model that has been tuned to serve profitable advertisement placement is different from one that has been tuned to the task of assessing the attractiveness of an insurance applicant. It is worth noting that machines reading humans in these functional terms may or may not be followed by a machine automatically executing a decision or an action on those grounds. Instances of the former include online targeted advertising and innumerable recommender systems; instances of the latter include human intervention in decisions to interview job applicants on the basis of hiring algorithms, or to award a mortgage on the basis of a credit score, and so on. Earlier in this section, when we reported on natural language models that could extract relevant data from a narrative, we ought to have added that relevance itself (or efficacy, for that matter), a relational notion, is always tied to purposes. Generally, how machines read humans only makes sense in relation to the purposes embedded in them by operators and developers. The purposes themselves, of course, are obvious targets of ethical scrutiny.

Implicit in what we have described thus far is the dynamic nature of what we have labeled dynamic machine readability. ML models of target attributes, initially derived from large datasets, may continuously be updated on the basis of their performance. Making people legible to functional systems, oriented around specific purposes, is not a static, one-off business. Instead, systems are constantly refined on the basis of feedback from successive rounds of action and outcome. This means that, to be successful, dynamic approaches to reading humans must engage in continuous cycles of purpose-driven classification and, subsequently, modification based on outcomes. It explains why they are insatiably hungry for data – structured and unstructured – which may be collected unobtrusively as machines monitor humans simply as they engage with the machines in question, to learn whether a particular advertisement yields a click from a particular individual, whether a recommendation yields a match, and so on. Dynamic refinement, according to proponents, may be credited with these systems' astonishing successes but is also, we contend, a potential source of unethical practice.
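The feedback cycle described here can be sketched in a few lines of Python. In the toy loop below, the ad names, the exploration rate, and the hidden click probabilities are all invented; the point is only to show how successive rounds of action and outcome continuously refine how the system reads its users:

import random

ads = ["ad_shoes", "ad_travel", "ad_phone"]
clicks = {ad: 1.0 for ad in ads}   # smoothed click counts
shows = {ad: 2.0 for ad in ads}    # smoothed impression counts

def choose_ad():
    # Mostly exploit the best-performing ad so far, occasionally explore.
    if random.random() < 0.1:
        return random.choice(ads)
    return max(ads, key=lambda ad: clicks[ad] / shows[ad])

def simulate_click(ad):
    # Hidden "ground truth" the operator never observes directly.
    true_rates = {"ad_shoes": 0.02, "ad_travel": 0.08, "ad_phone": 0.04}
    return random.random() < true_rates[ad]

for _ in range(5000):              # successive rounds of action and outcome
    ad = choose_ad()
    shows[ad] += 1
    if simulate_click(ad):
        clicks[ad] += 1

print({ad: round(clicks[ad] / shows[ad], 3) for ad in ads})
# After enough rounds the estimates converge toward the hidden rates, and the
# system increasingly serves whichever ad this reading of the user rewards.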

To recap: dynamic machine readability is characterized by an expansion of data input modalities and data types (structured and unstructured, semantic and non-semantic); by embedded ML models, which are tuned to the purposes of machine operators and owners; and by the capacity of these models to be continuously refined in relation to those purposes. Dynamic machine readability releases us from the shackles of a limited lexicon – brittle and ill-fitting categories – and the associated ethical issues. In a growing number of cases, the pressing concern is no longer whether we have to submit to a crass set of categories in order to be legible to computational systems; instead, many of the computational systems with which we interact take in massive pools of data of multifarious types and from innumerable sources, presumably, to read us as we are. Despite the scale and scope of the data and the power of ML, reading humans through models embedded in machines is constrained by the purposes laid out by machine operators and developers, which these models are designed to operationalize. From a certain perspective, the new dynamic is emancipatory. Yet, even if successful, these model-driven machines raise a host of persistent ethical questions, which we reveal through a sequence of cases involving machines reading humans. Inspired by real and familiar systems out in the world whose functionality depends on making humans readable, we identified cases we considered paradigmatic of the types of ethical issues that machine readability raises. It turns out that, although the particular way a system embodies machine readability is relevant to its moral standing, that standing also depends on other elements of the larger system in which the human-reading subsystems are embedded.

6.3 Through the Lens of Paradigmatic Cases: An Ethical Perspective on Machine Readability
6.3.1 Interactive Voice Response: Reading Humans through Voice

Beginning with a familiar and quite basic case, one may recall traditional touch-tone telephone systems, which greet you with a recorded message and a series of button-press options. These systems, which have been the standard in customer service since the 1960s (Fleckenstein 1970; Holt and Palm 2021), require users to navigate a labyrinth of choices by choosing the series of numbers that best represents their need. It is generally a frustrating experience: first, you are offered a limited set of options, none of which seems quite right. You listen with excruciating attention to make the best choice and to avoid having to hang up, call back, and start all over again. Although still unsure, eventually, you press a button for what seems most relevant – you choose "sales" but instantly regret it. Perhaps "technical support" would have been a better fit. Throughout the labyrinth of button pushes, you feel misunderstood.

Over time, touch-tone systems have been replaced by Interactive Voice Response (IVR) systems, enabling callers to interact with voice commands (IBM 1964). Using basic speech recognition, these systems guide you through a series of questions to which you may respond by saying "yes," "no," or even "representative." While IVR was designed to ease the pain of interacting with a touch-tone system, these systems have their own interactional kinks. Along the way you may find that you have to change your pronunciation, your accent, your diction, or the speed of your speech: "kuhs-tow-mur sur-vis." You might add "please" to the end of your request, unsure of the appropriate etiquette, but then find that the unnecessary word confuses the system, prompting it to begin reciting the list of options anew. While voice commands may, to a degree, have increased usability for the caller – for example, by allowing the system to be navigated hands-free – the set of options was just as limited and the process just as mechanical, namely, choosing an option by saying it instead of pressing a button.

We may speculate that companies and other organizations would justify the adoption of automated phone services by citing efficiency and cost-effectiveness. As with many shifts to automation that companies make, however, the question should not be whether they are beneficial for the company, or even beneficial overall, but whether the benefits are spread equally. Particularly for systems requiring callers to punch numbers or laboriously communicate with a rigid and brittle set of input commands, efficiency for companies meant effort (and frustration) for callers – not so much cost savings as cost shifting. If we were to attach ethically charged labels to such practices, we would call this lopsided distribution of costs and benefits unfair; we might even be justified in calling it exploitative. A company is exploiting a caller's time and effort to reduce its own.

Progress in natural language processing (NLP) technologies, as noted in Section 6.2, has transformed present-day IVR systems, which now invite you simply to state your request in your own words. "Please tell us why you're calling," the system prompts, allowing you to speak as if you were conversing with a human. Where callers previously had to mold their requests to fit the predefined menu of options, now they may express themselves freely and flexibly. The capacity to extract intentions from an unstructured set of words – even if based on simple keywords – and the shift to a dynamic, ML-based approach have made callers more effectively machine readable. Increasingly sophisticated language models continue to improve the capacity to recognize words and, from them, to generate "meaning" and "intention." These developments have propagated throughout the consumer appliance industry, supporting a host of voice assistants from Siri to Alexa, and beyond.
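A toy illustration of the keyword-based end of this spectrum is sketched below in Python; the intent labels and keyword lists are invented, and modern systems use statistical language models rather than word lists, but the basic move – mapping free-form speech onto a fixed set of back-end routes – is the same:

INTENTS = {
    "billing": {"bill", "charge", "refund", "payment", "invoice"},
    "technical_support": {"broken", "error", "crash", "reset", "install"},
    "sales": {"buy", "upgrade", "price", "order", "plan"},
}

def route(utterance: str) -> str:
    """Pick the back-end option whose keywords best overlap the caller's words."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a human representative when nothing matches.
    return best if scores[best] > 0 else "representative"

print(route("I was charged twice on my last bill and want a refund"))  # billing
print(route("my modem keeps failing after the reset"))                 # technical_support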

Allowing for more "natural" – and thus less effortful – interaction with machines fulfills one of the long-standing goals of the field of Human–Computer Interaction (HCI), which aims to make computers more intuitive for humans by making humans more legible to computers (Nielsen and Loranger 2006; Shneiderman 2009). Following one of the field's early founders, Donald Norman, much work in HCI focuses on making interactions with computers materially and conceptually "seamless," effectively rendering interfaces invisible to the human user (Arnall 2013; Ishii and Ullmer 1997; Norman 2013; Spool 2005). IVR systems that include NLP models seem to have achieved seamlessness, sparing customers the exasperating and time-consuming experience of navigating a rigid, imperfect, and incomplete set of options. By enabling callers to express what they seek freely and flexibly, have IVR operators addressed the ethical dimensions of their systems – respecting callers' time and even their autonomy?

Seamlessness addresses some problems at the same time that it creates others. First, advances in machine readability don't necessarily go hand in hand with changes in the underlying business practices. If the back-end options remain the same, shunting callers into the same buckets as before ("sales," "technical support," etc.), defined by organizational interests, objectives, and operational constraints rather than by customers' granular needs, then the ability to communicate in our own words actually misleads us into believing that the system is sensitive to our individual needs. If a supple interface is not accompanied by more adaptable options at the back end, the clunky button-pressing more honestly conveys the ways in which a business is actually able to meet callers' needs.

Second, the transition to dynamic, machine-interpretable, voice-based systems facilitates a richer exchange in more ways than people may have reckoned with. How one speaks – intonation, accent, vocabulary, and more – communicates much more than the caller's needs and intentions, including approximate age, gender, socio-economic level, race, and other demographic characteristics (Singh 2019; Turow 2021b). Attributes of speech such as the sound of the voice, syntax, and tone have already been used by call centres to infer emotions, sentiments, and personality in real time (Turow 2021b). With automation there is little to stop these powerful inferences from spreading to all voice-mediated exchanges. The ethical issues raised by machines reading humans through the modality of voice clearly include privacy (understood as inappropriate data flow). They also include an imbalance of power between organizations and callers, unfair treatment of clientele on the wrong end of fine-tuned, surreptitiously tailored, and prioritized calls, and the exposure of consumers identified as susceptible and vulnerable to certain pricing or marketing ploys to manipulative practices. Scholars have already warned of the wide-scale deception and manipulation that the "voice-profiling revolution" might enable (Turow 2021a). Ironically, the very advances that ease customers' experiences with IVR systems now place customers at greater risk of exploitation – not by appropriating their time and effort but, instead, by surreptitiously reconfiguring their choices and opportunities.

The history of IVR systems highlights an irony that is not unique to them. Brittle systems of the past may have exploited time and effort, but they also protected against inappropriate extraction of information and laid out in the open the degree to which a business was invested in customer service. Dynamic, model-driven IVR systems facilitate an outwardly smoother experience while more effectively cloaking a rigid back end. Likewise, embedded NLP algorithms offer powers well beyond those of traditional IVR systems, including the capacity to draw wide-ranging inferences based on voice signal, semantics, and other sensory input. These, as we've indicated, raise familiar ethical problems – privacy invasion, imbalance of power, manipulation, unfair treatment, and exploitation. Each of these deserves far more extensive treatment than we can offer here. Although these problems are not a necessary outcome of machine readability but of features of the voice systems in which it is embedded, machine readability both affords and suggests these extensions; it flips the default.

6.3.2 Reading Human Bodies: From Facial Recognition to Cancer Detection

Roger Clarke defines biometrics as a "general term for measurements of humans designed to be used to identify them or verify that they are who they claim to be" (Clarke 2001). Measurements include biological or physiological features, such as a person's face, fingerprint, DNA, or iris, and behavioural ones, including gait, handwriting, typing speed, and so on. Because these measurements are distinctive to each individual, they are ideal as the basis for identification and for verification of identity (Introna and Nissenbaum 2000). The era of digital technology catapulted biometric identification to new heights as mathematical techniques helped to transform biometric images into computable data templates, and digital networks transported this data to where it was needed. In the case of fingerprints, for example, technical breakthroughs allowed the laborious task of experts making matches to be automated. Datafied and automated, fingerprints are one of the most familiar and pervasive biometrics, from quotidian applications, like unlocking our mobile phones, to the bureaucratic management of populations, as in criminal registries.

6.3.2.1 Facial Recognition Systems

Automated facial recognition technology has been one of the most aspirational of the biometrics, and also one of the most controversial. Presented by organizations as more convenient and secure than alternatives, facial recognition systems have been deployed for controlling access to residential and commercial buildings, managing employee scheduling in retail stores (Lau 2021), and facilitating contact-free payments in elementary school lunch lines (Towey 2021). In 2020, Apple offered FaceID as a replacement for TouchID (its fingerprint-based authentication system) (Apple 2024), and in 2021, the IRS began offering facial recognition as a means of securely registering and filing for taxes (Singletary 2022).

In the United States, under guidance from the National Institute of Standards and Technology, facial recognition has advanced since at least the early 2000s. Verification of identity, achieved by matching a facial template (recorded in a database or on a physical artifact such as a key fob or printed barcode) with an image captured in real time at a point of access (Fortune Business Insights 2022), has advanced more quickly than the identification of a face-in-the-crowd. It has also been less controversial because verification systems require the creation of templates through active enrollment by data subjects, presumably with their consent, whereas creating an identification system, in theory, requires the creation of a complete population database of facial templates, a seemingly insurmountable challenge. Unsurprisingly, when news broke in 2020 that Clearview AI claimed to have produced a reliable facial recognition system, a controversy was sparked. Clearview AI announced partnerships with law enforcement agencies and pitched investors its tool for secure-building access (amongst a suite of other applications) (Harwell 2022). The breakthrough it boasted was a database of templates for over 100,000,000 people, which it achieved by scraping publicly accessible social media accounts. Even though no explicit permission was given by account holders, Clearview AI took the accounts' publicly accessible status as an implicit sanction.
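The contrast between verification and identification can be made concrete with a schematic sketch. In the Python fragment below, the embedding vectors and the similarity threshold are invented stand-ins; real systems derive embeddings from deep face-recognition models, but the one-to-one comparison at a point of access is the essence of verification:

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

THRESHOLD = 0.9  # operating point trades off false accepts against false rejects

enrolled_template = [0.12, 0.88, 0.35, 0.50]   # stored at enrollment, with consent
probe_embedding   = [0.10, 0.90, 0.33, 0.52]   # computed from the live capture

def verify(template, probe) -> bool:
    return cosine_similarity(template, probe) >= THRESHOLD

print(verify(enrolled_template, probe_embedding))  # True: identity verified
# Identification ("face in the crowd") would instead compare the probe against a
# database of many templates, which is why it requires population-scale enrollment.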

Objections to automated facial recognition identification (FRI) run the gamut, with Phil Agre's classic, "Your Face Is Not a Barcode," an early critical perspective (Smith and Browne 2021; Stark 2019). To simplify the span of worthwhile writing on this topic, we propose two buckets. The first includes the societal problems created by FRI malfunctioning, prominently error and bias. The second includes the societal problems associated with FRI when it is performing "correctly," or as intended. The second bucket holds the insights for our discussion of machine readability.

The usual strawman rebuttal applies to FRI, too, viz. we have always had humans skulking around keeping people under watch, and automation simply improves the efficiency of these necessary practices. As in other cases, the counter-rebuttal insists that the scale and scope enabled by automation result in qualitative differences. Specifically, FRI systems fundamentally threaten a pillar of liberal democracy, namely, the prohibitions against dragnets and against surveillance that chills freedoms in public spaces, and the presumption of innocence. The application of FRI technologies in public spaces impinges on such freedoms, and the very existence of vast datasets of facial templates in the hands of operators exposes ordinary people to the potential of such threats. Particularly when there is not a clear alignment between the interests and purposes of individuals and those of the operators of FRI systems, and when there is a significant imbalance of power between them, individual humans are compromised by being machine readable.

6.3.2.2 Biometric: Cancerous Mole

Computer vision has yielded systems that are valuable for the clinical diagnosis of skin conditions. Dermatologists, typically the first to assess the likelihood that skin lesions are malignant, look at features such as outline, dimensions, and color. Computerized visual learning systems, trained on vast numbers of cases, have improved significantly, according to research published in Nature in 2017 (Esteva et al. 2017). In this study, researchers trained a machine learning model with a dataset of 129,450 images, each labeled as cancerous or non-cancerous. Prompted to identify additional images as either benign lesions or malignant skin cancers, the model diagnosed skin cancer at a level of accuracy on par with human experts. Without delving into the specifics of this case, generally, it is unwise to swallow such claims uncritically. For the purposes of our argument, let's make a more modest assumption, simply that automated systems for distinguishing between cancerous and non-cancerous skin lesions function with a high enough degree of accuracy to be useful in a clinical setting.
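In outline, such a classifier is trained much like any supervised image model. The Python sketch below is purely schematic – it is not the pipeline of Esteva et al., the random tensors merely stand in for labeled lesion photographs, and the tiny network is illustrative only – but it shows the shape of the training loop by which a model learns to map images to a probability of malignancy:

import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(32, 3, 64, 64)            # stand-ins for lesion photographs
labels = torch.randint(0, 2, (32, 1)).float()  # 1 = malignant, 0 = benign

for _ in range(10):                            # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    probs = torch.sigmoid(model(images[:4]))   # predicted probability of malignancy
print(probs.squeeze().tolist())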

Addressing the same question about automated mole recognition systems that we did about FRI: do they raise similar concerns about machine reading of the human body? We think not. Because machine reading is necessarily probabilistic, it is important to ask whether automation serves efficiency for medical caregivers at a cost to patients' wellbeing. Because systems such as these create images, which are stored on a server for immediate and potentially future uses, there may be privacy issues at stake. Much seems to hinge on the setting of clinical medicine and the decisive question of alignment of purpose. Ideally, the clinical provider acts as the human patient's fiduciary, with the aims of dermatology and its tools aligned with those of the humans in their care.

Future directions for diagnostic tools such as these are still murky. In 2017, there were 235 skin-cancer-focused dermatology apps available on app stores (Flaten et al. 2018). In 2021, Google announced that it would be piloting its own dermatological assistant as an app, which would sit within Google search. In these settings, questions return that were less prominent in a clinical medical setting. For one, studies have revealed that these applications are far less accurate than those in clinical settings, and we presume that, as commercial offerings, they are not subject to the same standards of care (Flaten et al. 2018). For another, the app setting is notoriously untrustworthy in its data practices, and the line between medical services, which have been tightly controlled, and commercial services, which have not, is unclear. Without tight constraints, there is clear potential for image data input to be utilized in unpredictable ways and for purposes that stray far from health.

In sum, mole recognition systems offer a version of dynamic machine readability that may earn positive ethical appraisal because their cycles of learning and refinement target accuracy in the interest of individual patients. When these systems are embedded in commercial settings, where cycles of learning and refinement may target other interests instead of or even in addition to health outcomes, their ethical standing is less clear.

6.3.2.3 Recommenders Reading Humans: The Case of Netflix

Algorithmically generated, personalized recommendations are ubiquitous online and off. Whereas old-fashioned forms of automation treated people homogeneously, the selling point of advances in digital technologies – according to promoters – is that we no longer need to accept one-size-fits-all interfaces, recommendations, and content. Instead, we can expect experiences catered to us individually – to our tastes, needs, and preferences. Ironically, these effects, though intended to make us feel uniquely appreciated and cared for, are mass-produced via a cycle of individualized data capture and a dynamic refinement of how the respective systems represent each individual. In general terms, it is difficult to tease apart a range of services that may, superficially, seem quite distinct, including, for example, targeted advertising, general web search, Facebook's newsfeed, Twitter feeds, TikTok's "For You Page," and personalized recommender systems such as Amazon's "You might like," Netflix's "Today's Top Picks for You," and myriad others. There are, however, relevant differences, which we aim to reveal in our brief focus on Netflix.

Launched in 1996 as a DVD-by-mail service, Netflix began employing a personalization strategy early on, introducing a series of increasingly sophisticated rating systems coupled with recommendation algorithms. In 2000, its first recommendation system, Cinematch, prompted users to rate movies on a five-star scale (Biddle 2021). The algorithm then recommended movies based on what other users with similar past ratings had rated highly. In its efforts to improve the accuracy of these recommendations, Netflix introduced a series of features on its site to capture direct user feedback – to add a star rating to a movie they had watched, to "heart" a movie they wanted to watch, or to add films to a queue (Biddle 2021). All of these early features called on users to rate titles explicitly.
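The logic of recommending "what similar raters liked" can be sketched very simply. The Python toy below is not Cinematch – the ratings are invented and the similarity measure is deliberately crude – but it shows how a user's explicit ratings make them readable enough to be matched with like-minded others:

ratings = {
    "ana":   {"Heat": 5, "Alien": 4, "Clueless": 1},
    "bruno": {"Heat": 5, "Alien": 5, "Fargo": 4},
    "chloe": {"Clueless": 5, "Amelie": 4, "Fargo": 2},
}

def similarity(u, v):
    """Agreement over co-rated titles (higher = more similar tastes)."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return float("-inf")
    return -sum(abs(ratings[u][t] - ratings[v][t]) for t in common) / len(common)

def recommend(user):
    others = [v for v in ratings if v != user]
    neighbour = max(others, key=lambda v: similarity(user, v))
    unseen = {t: r for t, r in ratings[neighbour].items() if t not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ana"))  # "Fargo": bruno's tastes resemble ana's, and he rated it highly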

Over time, leveraging advances in machine learning and findings from the Netflix Prize competitions (Rahman 2020), Netflix shifted to passive data collection practices, gathering behavioural data in the course of normal user–site interaction (e.g., scrolling and clicking) instead of prompting users for explicit ratings. This involved recording massive amounts of customer activity data, including viewing behaviour (e.g., when users press play, pause, or stop watching a program), the programs they watch at different times of day, search query data, and cross-device tracking data about which devices they are using at a given time. Infrequently, Netflix would ask customers for explicit ratings, such as a thumbs up or thumbs down. In addition to passively recording behavioural data, Netflix also conducted A/B tests (approximately 250 A/B tests with 100,000 users each year), for example, to learn which display image performs best for a new movie so it can be applied to landing pages across the platform.
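The A/B testing step reduces, at its simplest, to comparing click-through rates between variants shown to different user groups. The following Python fragment, with invented impression and click counts, illustrates the comparison (a production test would also check statistical significance before declaring a winner):

variants = {
    "artwork_A": {"impressions": 50_000, "clicks": 2_100},
    "artwork_B": {"impressions": 50_000, "clicks": 2_600},
}

def click_through_rate(stats):
    return stats["clicks"] / stats["impressions"]

for name, stats in variants.items():
    print(name, f"CTR = {click_through_rate(stats):.2%}")

winner = max(variants, key=lambda v: click_through_rate(variants[v]))
print("winner:", winner)  # the better-performing artwork is rolled out platform-wide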

According to public reporting, this dynamic cycle of behavioural data gathering and testing shapes what Netflix recommends and how it is displayed. Factors such as the time of day and a record of shows you have stopped watching midway further affect recommendations and nudges (Plummer 2017). The algorithms comprising the recommender system shape not only what content is recommended to you but, further, the design of your Netflix homepage, which (at the time of writing) is composed of rows of titles, each of which contains three layers of personalization: the choice of genre (such as comedy or drama), the subset of the genre (such as "Imaginative Time Travel Movies from the 1980s"), and the rankings within rows (Netflix 2012).

Without an inside view into Netflix and similar services, we lack direct, detailed insight into how the algorithms work and the complex incentives driving the relevant design choices. Yet, even without it, we're able to interpret elements of the different stages of these progressive shifts in terms of our analytic framework. To begin, the initial design is analogous to the primitive automated phone answering systems, discussed in Section 6.3.1, where customers were asked to deliberately choose from predetermined, fixed categories. Yet effortfulness, a factor that raises questions about unfair exploitation, seems less relevant here. Whereas the automated answering services offered efficiency to firms while imposing inefficiencies on callers, in the Netflix case, the effort imposed on viewers, one might argue, results in a payoff to them. The shift to the dynamic form of machine readability relieves users of the effort of making deliberate choices while, at the same time, yielding a system that is more opaque, less directly under viewers' control, and that involves potentially inappropriate data flows and uses, which brings privacy into consideration.

Champions point to the increase, over the past 20 years, from 2% to 80% in accurately predicting what users choose as a justification for the use of behavioural approaches over those that rely fully on customers' direct ratings (Biddle 2021). Quite apart from the scrutiny that these numbers invite, we are unconvinced that they are decisive in assessing the ethical standing of these practices. Specifically, no matter how it started out, Netflix, like most other online recommender systems with which we may be familiar, is not solely driven by its viewers' preferences and needs. As the market for recommender systems has ballooned in all sectors (Yelp, TripAdvisor, local search services, banking, etc.) and competition for attention has mushroomed, there is pressure to serve not only seekers (customers, viewers, searchers) but also parties wishing to be found, recommended, and so on. Given the sheer magnitude of offerings to be managed (think of how many movies, TV shows, books, consumer items, etc. are desperate for attention), one can imagine the conflicts of interest confronting recommender systems such as Netflix (Introna and Nissenbaum 2000).

Behavioural data is efficient, and the algorithmic magic created from it, which matches viewers with shows, may not be served by transparency. (Do we really need to know that there were ten other shows we might have enjoyed as much as our "top pick"?) We summarize some of these points in the next, and final, section of the chapter. In the meantime, as a purely anecdotal postscript: Netflix members may have noticed a recent return to requests for viewers' deliberate ratings of content.

6.4 Pulling Threads Together

Machine-readable humanity is an evocative idea, whose initial impact may be to stir alarm, possibly even repulsion or indignation. Beyond these initial reactions, however, does it support consistent moral appraisal in one direction or another? "It depends" may be an unsurprising answer, but it calls for further explanation on at least two fronts: one, an elaboration of machine-readability to make it analytically useful, and another, an exploration of the conditions under which machine-readability is morally problematic (and when it is not). Addressing the first, we found it useful to draw a rough line between two relevant developmental phases of digital technologies, to which we attributed distinct but overlapping sets of moral problems. One, often associated with critical discussions of the late twentieth century, stems from the need to represent humans (and other material objects) in terms of top-down, predefined categories in order to place them in databases, in turn making them amenable to the computational systems of the day. As discussed in Section 6.1, significant ethical critiques homed in on the dehumanizing effects of forcing humans into rigid categories, which, as with any form of stereotyping and pigeon-holing, may mean that similar people are treated differently and different people are lumped together without regard for significant differences. In some circumstances, it could be argued that well-designed classification schemes serve positive values, such as efficient functioning, security, and fair treatment, but it is not difficult to see how the classification of humans into preordained categories could often lead to bias (or unfair discrimination), privacy violations, authoritarian oversight, and prejudice. In short, an array of harms may be tied, specifically, to making humans readable to machines by formatting them, as it were, in terms of information cognizable by computational systems.

Machine readability took on a different character, which we signaled with the term dynamic, in the wake of the successive advances of data and predictive analytics ("big data"), machine learning, deep learning, and AI. Although it addresses the problems of "lumping people together" associated with top-down readability, ironically, its distinctive power to mass-produce individualized readings of humanity introduces a new set of ethical considerations. The list we offer here, by no means exhaustive, came to us through the cases we analyzed, seen through the lens of the characteristic elements of a dynamic setup.

To begin, the broadening of data input modalities, about which promoters of deep learning are quick to boast, highlights two directions of questioning. One challenges whether all the data is relevant to the legitimate purposes that the model is claimed to serve (e.g. increasing the speed with which an IVR addresses a caller's needs), or whether it is not (e.g. learning characteristics of callers that lead to unfair discrimination or violations of privacy) (Nissenbaum 2009; Noble 2018).

A second direction slices a different path through the issue of data modalities – in this instance, not about categories of data, such as race, gender, and so on, but about the different streams feeding into the data pool. Of particular interest is the engagement of the human data subject (for lack of a better term), which is evident in personalized recommender systems. The Netflix case drew our attention because, over the years, it has altered course in how it engages subscribers in its recommender algorithm – a pendulum swinging from full engagement as choosers, to no engagement, to, at the present time, presumably somewhere in between. Similarly, in our IVR case, we noted that the powerful language processing algorithms that are able to grasp the meaning of spoken language and read and serve human-expressed intention are able to extract other features as well – unintended by us or against our will. Finally, in the context of behavioural advertising, paradigmatic of the dominant business model of the past three decades, the modality of recorded behaviour, absent any input from expressed preference, has prevailed in the reading of humanity by the respective machines (Tae and Whang 2021; Zanger-Tishler et al. 2024).

To defend the legitimacy of including different modalities of input in the datasets from which models are extracted, an analysis would have to consider each of these streams – deliberate, expressed preference, behavioural, demographic, biometric, etc. – in relation to each of the cases, respectively. Although doing so is outside the scope of this chapter, it is increasingly urgent to establish such practices as new ways to read humans are being invented, for example, in the growing field of so-called digital biomarkers (Adler et al. 2022; Coravos et al. 2019; Daniore et al. 2024), which are claimed to be able to make our mental and emotional states legible through highly complex profiles of sensory data from mobile devices (Harari and Gosling 2023). Another wave of access involves advanced technologies of brain–machine connection, which claim yet another novel modality for reading humans – our behaviours, thoughts, and intentions – through patterns of neurological activity (Duan et al. 2023; Farahany 2023; Tang et al. 2023).

In enumerating ethical considerations, such as privacy, bias, and political freedom, we have skirted around, but not fully and directly confronted, the assault on human autonomy, which ultimately may be the deepest, most distinctive issue for machine-readable humanity. Acknowledging that the concept of autonomy is enormously rich and contested, we humbly advance its use here as roughly akin to self-determination, inspired by the Kantian exhortation introduced in most undergraduate ethics courses to "act in such a way that you treat humanity whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end" (Kant 1993, vii; Mackenzie and Stoljar 2000; Roessler 2021). When defending the automation of a given function, system, or institution, defenders cite efficiency, defined colloquially as producing a desirable outcome with the least waste, expense, effort, or expenditure of resources. Our case of automated phone systems illustrated the point that efficiency for machine owners may produce less desirable outcomes for callers: more wasted time and expenditure of effort. In this relatively unsophisticated case, one may interpret the exploitation of callers as an assault on autonomy.

The expansive class of systems claiming to personalize or customize service (recommendations, information, etc.) illustrates a different assault on autonomy. Among the characteristic elements comprising dynamic machine readability is the dynamic revision of a model in relation to the goals or purposes for which a system was created. The general class of recommender systems largely reflects a two-sided marketplace because it serves two interested parties (possibly three-sided, if one includes the recommender system itself as an interested party). The operators of personalized services imply that their systems are tailored to the individual's interests, preferences, and choices, but their performance, in fact, may be optimized for the purposes of parties – commercial, political, etc. – seeking to be found or recommended. Purposes matter in other cases, too, specifically in distinguishing between facial recognition systems, serving purposes of political repression of machine-readable humans, and mole identification systems, whose primary or sole criterion of success is an accurate medical diagnosis.

6.5 Conclusion: Human Beings as Standing Reserve

Martin Heidegger’s “The Question Concerning Technology” introduces the idea of standing reserve, “Everywhere everything is ordered to stand by, to be immediately at hand, indeed to stand there just so that it may be on call for a further ordering.” According to Heidegger, the essential character of modern technology is to treat nature (including humanity) as standing reserve, “If man is challenged, ordered, to do this, then does not man himself belong even more originally than nature within the standing-reserve?” (Heidegger Reference Heidegger1977, 17). Without defending Heidegger’s broad claim about the nature of technology, the conception of machine-readability that we have developed here triggers an association with standing-reserve, that is to say machine-readability as the transformation of humanity into standing reserve. Particularly evident in dynamic systems, humans are represented in machines as data in order to be readily accessible to the purposes of the controllers (owners, designers, engineers) that are embodied in the machine through the design of the model. The purposes in question may have been selected by machine owners with no consideration for ends or purposes of the humans being read. It is not impossible that goals and values of these humans (and of surrounding societies) are taken into consideration, for example, in the case of machines reading skin lesions; the extent this is so is a critical factor for a moral appraisal. Seen in the light of these arguments, AdNauseam is not merely a form of protest against behavioural profiling by the online advertising establishment. More pointedly, it constitutes resistance to the inexorable transformation of humanity into a standing reserve – humans on standby, to be immediately at hand for consumption by digital machines.

7 Carebots: Gender, Empire, and the Capacity to Dissent

In this chapter I analyze different dilemmas regarding the use of robots to serve humans living in the digital age. I go beyond technical fields of knowledge to address how the design and deployment of carebots are embedded in multifaceted material and discursive configurations implicated in the construction of humanness in socio-technical spaces. Imagining those spaces necessarily entails navigating the "fog of technology," which is always also a fog of inequality in terms of trying to decipher how the emerging architectures of our digitized lives will interface with pre-existing forms of domination and struggles of resistance premised upon our capacity to dissent. Ultimately, I contend that the absence of a "human nature" makes us human and that this absence in turn makes us unpredictable. What it means to be human is thus never a fixed essence but rather must be strategically and empathically reinvented, renamed, and reclaimed, especially for the sake of those on the wrong side of the digital train tracks.

In Section 7.1, I open the discussion by critiquing Mori's (1970) seminal theory on robot design, called the "uncanny valley," by inscripting technologies in changing cultural practices and emergent forms of life. Section 7.2, through visual culture, gender, and race theories, sheds light on how the design of carebots can materialize complex dilemmas. In Section 7.3, I dissect Petersen's (2007, 2011) perturbing theory and ethical defense of designing happy artificial people that "passionately" desire to serve. In Section 7.4, I offer some final thoughts on what I call the Carebot Industrial Complex, namely, the collective warehousing of aging people in automated facilities populated by carebots.

7.1 How One Person’s Uncanny Valley Can Be Another’s Comfort Zone: Inscripting Technologies in Changing Cultural Practices and Emergent Forms of Life

In recent years we have witnessed an increased interest in robots for the elderly – variously called service, nursing, or domestic robots – which are touted as a solution to the growing challenges of and demand for elder care. One of the main arguments deployed to justify the development of service robots for the elderly is that digital technology can empower them through greater autonomy and extended independent living. The key global players in the supply of service robots are Europe (47%), North America (27%), and Asia (25%), the fastest-growing market (IFR 2022b, paragraph 13). The financial stakes are huge: the service robotics market was valued at USD 60.16 billion in 2024 and is expected to reach USD 146.79 billion by 2029, a compound annual growth rate of 19.53% over the 2024–2029 forecast period (Mordor Intelligence 2024, paragraph 1).

Despite the "rosy" arguments in favor of delegating the care of elderly people to robots, crucial questions concerning the development of service robots remain unanswered, precisely because most of the literature on service robots has thus far been articulated within technical fields of knowledge such as engineering. As part of addressing some of the thornier questions concerning the design of robots to serve people living in the digital society, in this first section I open the discussion by critiquing Mori's seminal theory on how humans respond to robotic design.

Mori proposes that, when robotic design becomes too human-like, hyperreal, or familiar, it evokes a sense of discomfort in humans, which he describes as the uncanny valley (Mori Reference Mori2012 [1970]; on the uncanny valley see also Chapter 3, by Roessler). He makes reference to the shaking of a prosthetic hand that, due to its apparent realness, surprises "by its limp boneless grip together with its texture and coldness" and, if human-like movements are added to the prosthetic hand, the uncanniness is further compounded (Mori Reference Mori2012 [1970], 99). In contrast, robots that resemble humans, but are not excessively anthropomorphized, are more comforting to humans. By building an "accurate map of the uncanny valley," Mori hopes "through robotics research we can begin to understand what makes us human [and] to create – using nonhuman designs – devices to which people can relate comfortably" (Mori Reference Mori2012 [1970], 100). Thus, in order to avoid the discomforting uncanniness of robots designed to look confusingly human, Mori calls for the emotionally reassuring qualities of robots that retain metallic and synthetic properties.

Mori explicitly limits his interpretation to empirical evidence of human behaviour that he assumes is cross-culturally constant. In this way, Mori is more interested in making a universal claim about humans than in unpacking how their cultural differences may be implicated in the complex constructions of what it means to be human in the social materialities and discursivities marked by the digital turn of societies, which I consider a limitation of the theory of the uncanny valley in general.

An implicit and recurring trope of the uncanny valley is displayed in the cultural fear of what Derrida called “mechanical usurpation,” which lies in the anxiety-laden boundary between the mind and technology or:

[the] substitution of the mnemonic device for live memory, of the prosthesis for the organ [as] a perversion. It is perverse in the psychoanalytic sense (i.e. pathological). It is the perversion that inspires contemporary fears of mechanical usurpation.

(Barnet Reference Barnet2001, 219, discussing Derrida)

Consider, for instance, Mori’s (Reference Mori2012 [1970], 99–100) statement that “[i]f someone wearing the hand in a dark place shook a woman’s hand with it, the woman would assuredly shriek.”

The uncanny valley’s fear of mechanical usurpation is also analogous to how Bhabha addresses the position of colonial subjects by invoking the liminal status of the robotic as “almost the same, but not quite” to the extent that a robot’s performative act of mimicry is condemned to the impossibility of complete likeness, remaining inevitably inappropriate (Bhabha Reference Bhabha1994, 88).Footnote 1 The uncanny valley’s implicit condemnation of the effective mimicry of human characteristics by robots, subtextually associated with a sense of betrayal, dishonesty and transgression, shows how the humanized robot comes to occupy the space of the threatening “almost the same, but not quite” and invokes the cultural anxiety of “mechanical usurpation” with a sexist twist analogous to that of the white woman encountering a black man in a dark alley.

The uncanny (v)alley can thus be understood as a specific cultural disposition relative to robots rather than a natural and intrinsic reaction across the board. This leads me to my main criticism of the notion of the uncanny valley, namely, that it is premised upon a conception of the “human” as a universal given in terms of how people will react to excessively human-like robots. The uncanny valley essentializes human reactions to robots and thus cannot account for the cross-cultural and cross-historical mutations in how people can and do differentially and creatively negotiate with emerging technologies within specific discursive genealogies and institutional practices.

The work of Langdon Winner and Sherry Turkle can add further nuance to the debate over how new forms of subjectivity and ways of being enabled by digitization are impacting ethical questions raised by robotics and values embedded in the design of carebots. For Winner (Reference Winner1986), social ideas and practices throughout history have been transformed by the mediation of technology and this transformation has been marked by the continual emergence of new forms of life. This concern over new forms of life is dramatically embodied in the field of robotics, particularly carebots, which are increasingly linked to the intimate lives of children, elders, and handicapped people, and are in turn associated with the emergence of novel subjectivities.Footnote 2 As Turkle evocatively proposes, “technology proposes itself as the architect of our intimacies. These days, it suggests substitutions that put the real on the run” (Turkle Reference Turkle2011, e-book), having a potentially profound impact on how we come to understand our own humanity and the humanity of others. Computational objects “do not simply do things for us, they do things to us as people, to our ways of seeing the world, ourselves and others” (Turkle Reference Turkle2006, 347). By treating them as “relational artifacts” or “sociable robots,” we can place the focus on the production of meaning that is taking place in the human–robot interface (Turkle Reference Turkle2006) to help us better understand what it means to be human in this new and emerging socio-technical space (see also Chapter 2, by Murakami Wood). These technologies inevitably raise important questions that go beyond the determination of the “comfort zone” of humans relative to robots à la Mori. They challenge us to question the entrenched assumption that Technology (with a capital “T”) is a force of nature beyond human control to which we must adapt no matter what as it shapes the affordances and experiences of being human. As Winner presciently warns, we must unravel teleological and simplistic views of technology as guided by implacable forces beyond state and other forms of regulation (Winner Reference Winner1986). Winner calls this position “technological somnambulism” in that it “so willingly sleepwalk[s] through the process of reconstituting the conditions of human existence,” leaving many of the pivotal ethical and political questions that new technologies pose unasked (Winner Reference Winner1986, 10).

In the following sections I explore some of the quandaries raised by the embedding of carebots in our daily lives, such as how visual culture, gender, and race theories can shed light on the design of carebots;Footnote 3 Petersen’s theory and ethical defense of designing happy artificial people that “passionately” desire to serve; and the implications of what I call the Carebot Industrial Complex, namely, the collective warehousing of aging people in automated facilities populated by carebots.

7.2 Visual Culture, Gender, and Race: Is It Possible to Design “Neutral” Robots?

Visual culture, gender, and race theories have had an extensive and transdisciplinary effect on debates concerning what it means to be human within the changing historical and cultural prisms of intersectionally related forms of inequality and struggles for equitable change. In this section I explore some angles of these theories to shed light on how the design of carebots can materialize complex discursive and symbolic configurations that impinge on the construction of humanness in the existing and emerging socio-technological architectures through which we signify our lives.

Visual culture, as a mode of critical visual analysis that questions disciplinary limitations, speaks of the visual construction of the social rather than the often-mentioned notion of the social construction of the visual. It focuses on the centrality of vision and the visual world in constructing meanings, maintaining esthetic values, as well as racialized, classed, and gendered stereotypes in societies steeped in digital technologies of surveillance and marketing. Visuality itself is understood as the intersection of power with visual representation (Mirzoeff Reference Mirzoeff and Mirzoeff2002; Rogoff Reference Rogoff and Mirzoeff2002).

Feminism and the analysis of visual culture mutually inform each other. Feminism, by demanding an understanding of how gender and sexual difference figure in cultural dynamics coextensively with other modes of subjectivity and subjection such as sexual orientation, race, ethnicity, and class, among others, has figured prominently in the strengths of visual culture analysis. And, in turn, "feminism has long acknowledged that visuality (the conditions of how we see and make meaning of what we see) is one of the key modes by which gender is culturally inscribed in Western culture" (Jones Reference Jones2010, 2).Footnote 4

Relative to the design of robots, their gendering occurs at the level of the material body and the discursive and semiotic fields that inscript bodies (Balsamo Reference Balsamo1997). To the extent that subject positions “carry differential meanings,” according to de Lauretis, the representation of a robot as male or female is implicated in the meaning effects of how bodies are embedded in the semiotic and discursive formations (Robertson Reference Robertson2010, 4). Interestingly, however, Robertson contends that robots conflate bodies and genders:

The point to remember here is that the relationship between human bodies and genders is contingent. Whereas human female and male bodies are distinguished by a great deal of variability, humanoid robot bodies are effectively used as platforms for reducing the relationship between bodies and genders from a contingent relationship to a fixed and necessary one.

Because the way "robot-makers gender their humanoids is a tangible manifestation of their tacit understanding of femininity in relation to masculinity, and vice versa" (Robertson Reference Robertson2010, 4), roboticists are entrenching their reified common-sense knowledge of gender and re-enacting pre-existing sexist tropes and dominant stereotypes of gendered bodies without any critical engagement. Thus, despite the lack of physical genitalia, robots possess "cultural genitals" that invoke "gender, such as pink or grey lips" (Robertson Reference Robertson2010, 5).

This process of reification in the design of robots entrenches the mythological bubble of the “natural” in the context of sex/gender and male/female binaries that queer theorist Judith Butler bursts open very effectively by looking at the malleability of the body. The traditional feminist assumptions of gender as social and cultural and sex as biological and physical are recast by Butler, who contends that sex is culturally and discursively produced as pre-discursive. Sex is an illusion produced by gender rather than the stable bedrock of variable gender constructions.

Butler develops a theory of performativity wherein gender is an act, and the doer is a performer expressed in, not sitting causally “behind,” the deed. Performance reverses the traditional relation of identity as preceding expression in favor of performance as producing identity (Butler Reference Butler1990, Reference Butler1993). Gender for Butler thus becomes all drag and the “natural” becomes performative rather than an expression of something “pre-social” and “real.” Gender performances are not an expression of an underlying true identity, but the effects of regulatory fictions where individual “choice” is mediated by relations and discourses of power. In this way, queer theory’s contingent and fluid pluralization of gendered and sexual human practices stands in stark contrast to the fixed conflation of bodies and genders of humanoid robots posed by Robertson.

Considering these debates, is it possible to make gender-neutral robot designs? Even if designers purport to create gender-neutral robots, the robots will inevitably be re-gendered by the people who use them because of the pervasiveness of sexist/stereotyped tropes of femininity and masculinity in society. Re-gendering can occur, for instance, at the obvious level of languages such as the Romance languages, which linguistically gender all things (e.g. from animate to inanimate, human to non-human) as either feminine or masculine nouns. Insofar as we conceive language as not merely an instrument or means of communication, but rather as constitutive of the very "reality" of which it speaks, we must contend with the gendering of even purportedly gender-neutral incarnations of robots. Moreover, in addition to language-related forms of gendering, re-gendering can also occur at the level of the activities performed by service robots, given that these can be culturally and discursively coded as female activities. In the case of caretaking functions that have historically been associated with female and poor labor, it would be unsurprising to see the replication of gendered stereotypes as applied to gender-neutral robots. As a result, any purported semiotic neutrality of robotic design is inevitably re-inscribed in discursive and cultural fields mined with tropes steeped in long histories of sexist and gendered stereotypes.

The raced and classed cultural tensions invoked by foreign caretakers provide further insight concerning the design of "neutral" service robots. Robertson discusses how elderly Japanese people prefer robots to the "sociocultural anxieties provoked by foreign labourers and caretakers" (Robertson Reference Robertson2010, 9). The Japanese perceived that robots did not have the historical baggage and cultural differences of migrant and foreign workers, which ultimately "reinforces the tenacious ideology of ethnic homogeneity" (Robertson Reference Robertson2010, 9). Here the purported neutrality of robots as opposed to racialized and discriminated-against human caretakers becomes a form of racist erasure and complicit celebration of sameness. Thus, an allegedly semiotically neutral robot design can be culturally embedded in a digitized society in highly contentious ways, simultaneously enacting processes of cultural re-gendering (associated with comforting stereotypes of female care) and racist erasure (associated with the discomforting use of stigmatized minorities in caretaking). Claims to neutrality in robot design are ultimately a discourse of power that must be dealt with cautiously in order to engage with how it reconfigures the positions of people within the techno-social imaginary of the emerging digital society.

The design of carebots must engage more self-reflexively with the problematic replication of gendered, racialized, and other stereotypes in robots, especially given the depth of the gendering and racializing process that can occur even when the objects are apparently gender-neutral and lack metallic genitalia and melanin in terms of their outward design. This suggests we have to think carefully about the ways in which digital technologies may entrench and deepen the lived experiences of both privilege and marginalization.

7.3 Happy Service Robots: Worse than Slavery?

After having discussed visual culture, gender, and race in relation to the design of robots, I now turn to explore the potential impact of the instrumentalization of robots who provide care for humans. Of particular interest in this section is the position that it is possible to produce robots that are designed and built to happily work in service for humans; I will focus on the work of Steve Petersen as a foil to explore the implications of this claim.

There is a wide range of opinions concerning the ethics of service robots and whether or not they are considered ethical subjects. Levy believes that robots should be treated ethically, irrespective of whether they are ethical subjects, in order to avoid sending the wrong message to society, namely that treating robots unethically will make it "acceptable to treat humans in the same ethically suspect ways" (Levy Reference Levy2009, 215). In contrast, Torrance defends robot servitude, treating robots as analogous to kitchen appliances, because they are seen as incapable of being ethical subjects (Torrance Reference Torrance2008).

Petersen (Reference Petersen, Lin, Abney and Bekey2011), however, denaturalizes the status of the "natural" regarding robots and counters that a service robot can have full ethical standing as a person, irrespective of whether the artificial person is carbon-based or computational. The important insight that personhood "does not seem to require being made of the particular material that happens to constitute humans," but rather "complicated organizational patterns that the material happens to realize," is combined with the much more controversial contention that it is nonetheless "ethical to commission them for performing tasks that we find tiresome or downright unpleasant" (Petersen Reference Petersen, Lin, Abney and Bekey2011, 248)Footnote 5 if they are hardwired to willingly desire to perform their tasks.

In a nutshell, I think the combination is possible because APs [Artificial Persons] could have hardwired desires radically different from our own. Thanks to the design of evolution, we humans get our reward rush of neurotransmitters from consuming a fine meal, or consummating a fine romance – or, less cynically perhaps, from cajoling an infant into a smile. If we are clever, we could design APs to get their comparable reward rush instead from the look and smell of freshly cleaned and folded laundry, or from driving passengers on safe and efficient routes to specified destinations, or from overseeing a well-maintained and environmentally friendly sewage facility. … It is hard to find anything wrong with bringing about such APs and letting them freely pursue their passions, even if those pursuits happen to serve us. This is the kind of robot servitude I have in mind, at any rate; if your conception of servitude requires some component of unpleasantness for the servant, then I can only say that is not the sense I wish to defend.

Petersen adds that the preferences hardwired into carebots could remain indecipherable to humans.

Robots … could well prefer things that are mysterious to us. Just as the things we (genuinely, rationally) want are largely determined by our design, so the things the robot (genuinely, rationally) wants can be largely determined by its design.

(Petersen Reference Petersen2007, 46)

Petersen’s argument is built upon the premise of hardwiring robots to feel desires that impassion them toward fulfilling work that humans find unpleasant. And he believes this is analogous to the (“naturally”-produced) hardwiring of humans given that in the “carbon-based AP [artificial person] … the resulting beings would have to be no more ‘programmed’ than we are” (Petersen Reference Petersen, Lin, Abney and Bekey2011, 285).

The crux of Petersen's position is that the robots freely choose to serve and thus do not violate the Kantian anti-instrumentalization principle, which prohibits using a person as a mere means to an end.

The “mere” use as means here is crucial. … [T]he task-specific APs: though they are a means to our ends of clean laundry and the like, they are simultaneously pursuing their own permissible ends in the process. They therefore are not being used as a mere means, and this makes all the ethical difference. By hypothesis, they want to do these things, and we are happy to let them.

By claiming that insofar as an artificial person is a “willing servant, … we can design people to serve us without thereby wronging them” (Petersen Reference Petersen, Lin, Abney and Bekey2011, 289), Petersen counters, first, the critique of creating a “caste” of people, particularly in the case of robots that do menial labors, and, second, the critique of designed servitude leading to “happy slaves” that have been paternalistically deprived of the possibility of doing something else with their lives (Walker Reference Walker2007).

Although Petersen defends the right of artificial persons to do otherwise, that is, not to serve, he believes that reasoning “themselves out of their predisposed inclinations [is as] unlikely as our reasoning ourselves out of eating and sex, given the great pleasure the APs derive from their tasks …” (Petersen Reference Petersen, Lin, Abney and Bekey2011, 292). Petersen’s caveat of accepting dissent, however, does not square with the premises of his theory. Ultimately, the caveat/exception does not legitimate the rule of hardwiring sentient submission but rather operates as a malfunction within a structural argument that is in favor of the hardwired design of “dissent-less” carebots, which is unethical from the start. This shows how Petersen’s writing is premised upon a strongly deterministic conception of behaviour as pre-determined by the hardwiring of living systems, be they organic or non-organic, and, as such, service robots are highly unlikely to finagle their way around their programmed “instinctive” impulses. In this way, despite Petersen’s valuable denaturalization of the status of the “natural” in terms of the distinction between human and robot, he re-entrenches it once again in his simplistic conception of a pre-deterministic causality from gene/hardware to behaviour. For Petersen, treating artificial people ethically entails “respecting their ends, encouraging their flourishing” by “permitting them to do laundry” because “[i]t is not ordinarily cruel or ‘ethically suspect’ to let people do what they want” (Petersen Reference Petersen, Lin, Abney and Bekey2011, 294). Precisely because they desire to serve, he contends that they must be distinguished from the institution of slavery. The inadequacy of the slave metaphor is such that, for Petersen, it would be irrational to preclude pushing buttons to custom design your artificial person who loves and desires to serve you because it could be an act of discrimination tantamount to the worst episodes of human history.

The track record of such gut reactions throughout human history is just too poor, and they seem to work worst when confronted with things not like “us” – due to skin color or religion or sexual orientation or what have you. Strangely enough, the feeling that it would be wrong to push one of the buttons above may be just another instance of the exact same phenomenon.

I find Petersen's proposal of the happy sentient servant programmed to passionately desire servitude highly disturbing and problematic because, as I analyze and argue in this section, if materialized in future techno-social configurations of society, it would automate, reify, and legitimate the dissent-less submission of purportedly willing and happy swaths of sentient artificial humans and entrench hierarchical structures of oppression as natural divisions between artificial and non-artificial humans.

As part of setting out my analysis, I propose that robots can be historical in two senses, namely, as objects or subjects, although as objects they are historical in a much more limited sense than as subjects. Robots as historical objects are robots without sentience or self-awareness that are historical because of the values embedded in their design that are specifically situated in time and space. Furthermore, robots as objects are products of cultural translation within the technological frames and languages available at the time to materialize their functions. In contrast, robots as historical subjects are premised upon emergent forms of sentience and subjectivity that are analogous to those of “non-artificial” humans.

A digitized future in which we accept Petersen's conception of sentient servitude as ethical, through embedding hardwired desires to serve and be obedient, could become one in which we automate what Pierre Bourdieu calls symbolic violence. The work of Bourdieu offers a sophisticated theory to address how societies reproduce their structures of domination. Symbolic capital or cultural values are central to the processes of legitimizing structures of domination. Said cultural values are reified or presented as universal but are in fact historically and politically contingent or arbitrary. Of all the manners of "'hidden persuasion,' the most implacable is the one exerted, quite simply, by the order of things" (Bourdieu and Wacquant Reference Bourdieu, Wacquant, Scheper-Hughes and Bourgois2004, 272). Programming sentient servants to be happy with their servitude points to the automation of symbolic violence through the deployment of desire and pleasure to subjugate artificial persons into unquestioning servitude while being depicted as having "freely" chosen to wash laundry in saecula saeculorum. If we assume that their desire by design is successfully engineered to avoid wanting any other destiny than that of servitude, the symbolic violence of artificial persons à la Petersen lies in how the hardwired happiness becomes an embedded structure of the relation of domination and thus reifies their compliance with the status quo.

Petersen claims that creating sentient servers who enjoy their labors is an advance over histories of racism, sexism, colonialism, and imperialism. I contend, however, that Petersen's Person-O-Matic represents the culmination of the imperial fantasy of biological determinism, in which you can have an intrinsically obedient population whose lesser sentience is engineered to feel grateful and happy to serve your needs. And I differ even further: It is not that Petersen's artificial beings designed to serve are simply analogous to slaves; they are actually worse than slaves because they are trapped in a programmed/hardwired state of dissent-less and ecstatic submission.

The domination of humans by humans along the axes of colonial forms of hierarchizing the relationship of the civilized colonizer vis-à-vis the barbarian other, either seen as in need of being civilized or inevitably trapped in cultural and/or biological incommensurability, always had to contend with the capacity for dissent, or the native excesses that collapsed the discursive strictures of colonial otherness and destabilized the neat hierarchies imposed by the imperial discourses. Empires had to deal with the massive failure of their fantasies regarding the imaginary subservience and inferiority of others, but a digital future built on Petersen's model is actually much more violent because desire by design hardwires pleasure in serving the masters. It basically precludes dissent, either because the artificial servants' lesser sentience has been designed effectively or because dissent is a highly remote possibility. Hence the combination of limited sentience and a programmed incapacity to dissent represents the uncomfortable technological culmination of the fantasies of biological determinism that took definite shape as part of colonial endeavors.

An especially fascinating, powerful, and distinctive aspect of Petersen's model is the commodification of custom-designed servitude. Rather than the vision of the colonial administration of an empire, it is a consumerist vision of individuals who go to a vending machine to custom-design a slave, an intimate subjectivity of empire for John Doe, who can now have his own little colony of sentient servers to "orgasmically" launder his clothes, presumably among many other duties.

The notions available for understanding and evaluating humanness in the digital future are impoverished by this kind of theorization. For Petersen, sentience is premised upon the capacity to act upon one's desires, desires which, in the case of the robot, are hardwired into it. Desire here is conceived in a highly reductive and deterministic way, as desire by design. He articulates a linear theory of causality from design to desire, systemically hardwired to produce servants who happily do laundry. As already addressed, despite Petersen's contention that the artificial persons that emerge from the Person-O-Matic act freely when choosing to serve, the fact that their design, if it is successful, precludes the possibility of not wanting to serve raises serious doubts as to whether the robots are actually exercising their free will to do their labors. This resonates with Murakami Wood's critique, in Chapter 2, of digital imaginaries that nudge humanness into commercially profitable and instrumental boxes.

One of the central problems of Petersen’s conception of robot sentience – like Murakami Wood’s smart city denizen – is precisely that it is trapped within a narrow understanding of self-determination. This conception does not consider the social and historical constitution of subjectivities within cultural parameters that vary and produce contingent desires that are not reducible to the underlying hardwiring (genetic or otherwise). Desire is always multifaceted and leads to unintended consequences, always in excess, incomprehensible even for the subject that desires. Hence, once an artificial being acquires an emoting sentience of some sort, engineering identities is not a case of linear causalities that follow an imaginary teleology of desire in a genetic or computational fantasy of hardwiring. Hardwiring, genetic/computational engineering, and natural selection are the names of Petersen’s “game” and, as a result, Petersen is not engaging with the social production and cultural inscription of sentient robot subjectivities and, accordingly, he occludes the complex and contradictory process of interactive, mutually constitutive forms of sociability set out in Chapter 8, by Steeves.

An emoting sentience implies that robots become social and cultural subjects, not just objects inscribed with pre-existing values in their design. They become subjects who can engage with and resist the constraints of their architecture as well as those of the discursive and semiotic fields of signification within which they emote and represent their location within societies. Rather than define robots as human because they fulfill their hardwiring of desiring to serve, we can say that robots become “human” when they escape the constraints of their imaginary hardwiring, when there is an excess of desire that is not reducible to their programming. Desire, pleasure, and pain are ghosts in the hardwiring of machines. And it is the ghosts that make the robot “human” as a form of post-mortem ensoulment enabling a human-like sentience marked by contradiction and excess. Irreducibility makes human. Unpredictability makes human. Dissent makes human. The absence of a “human nature” makes human.

Although it is still technically impossible to create robots with a fully human-like sentience, the production of emoting robots with limited forms of sentience may not be such a remote possibility. For Petersen, artificial humans will be more trapped within their hardwiring than “non-artificial” humans and, thus, the exercise of dissent from the structural constraints of their design will be highly remote, if not precluded altogether. His defense of the ethical legitimacy of the artificial person that serves “passionately” raises very difficult questions that must be teased out. Is the production of a lesser sentient dissent-less robot worse than the production of a fully sentient robot capable of dissent? Should there be an ethical preclusion of a lesser sentience in the design of emoting robots? Should society err on the side of no sentience in order to avoid the perverse politics that underlies the design of terminally happy sentient beings incapable of dissent, that is, the ideal servant incapable of questioning the “ironic ramifications of [his/her/its] happiness”?Footnote 6

I contend that either no sentience or full sentience is more ethical than computationally or biologically wiring docility into an emoting and desiring being with a lesser sentience. When robots cease to be things and become sentient beings motivated by desires, pleasures, and happiness, the incapacity to dissent should not be a negotiable part of the design but rather should remain precluded. It seems much less unethical to create fully sentient robots with the capacity to dissent than to create a permanent underclass of unquestioningly obedient limited life forms motivated by passionate desires to serve.

Therefore, the techno-social imaginary of emerging digital societies must explicitly condemn the symbolic violence of automating the incapacity to dissent of artificial humans designed to happily launder the underwear of non-artificial humans. The ethical defense of the right to dissent in the design of emoting sentient beings crucially avoids creating the conditions for dystopic new forms of domination under a normalizing discursive smokescreen of the “order of things.”

7.4 The Carebot Industrial Complex: Some Final Thoughts

Although I do not underestimate what carebots could mean for elderly people, in this final section I offer some closing thoughts about what I call the Carebot Industrial Complex, namely, the collective warehousing of aging populations in automated facilities populated by carebots.

For Latour, the relationship of people to machines is not reducible to the sum of its parts but rather adds up to a complex emergent agency. Beyond Winner's concern over the embedding of social values in technologies, Latour deploys the notion of the delegation of humanness into technologies. Technological fixes can thus deskill people in a moral sense (Latour Reference Latour, Bijker and Law1992). This process of deskilling takes on particular importance in the context of carebots, which can undermine the human learning and development acquired in caregiving settings. Of special relevance here is Vallor's work on the ethical implications of carebots in terms of a dimension that has been ignored within the literature, that is, "the potential moral value of caregiving practices for caregivers" (Vallor Reference Vallor2011, 251). By examining the goods internal to caring practices, she attempts to "shed new light on the contexts in which carebots might deprive potential caregivers of important moral goods central to caring practices, as well as those contexts in which carebots might help caregivers sustain or even enrich those practices, and their attendant goods" (Vallor Reference Vallor2011, 251).

The Carebot Industrial Complex can deprive people living in the digital age of the moral value of caregiving practices for human caregivers and, in the process, stigmatize the decaying bodies of the elderly, subject to the disciplinary effects of a shifting built environment and the intimate technology of carebots, whose name exudes the oxymoronic concern of whether there can be care without caring. In addition, it can undermine the process of human intergenerational learning of caregiving skills for the elderly and stymie opportunities for emotional and social growth that occur when we act selflessly and out of concern for others.

Turkle's arguments on the evocative instantiations of robotic pets, some of which have been deployed as part of elder care, can be extended to the broader concern over the emotional identification of elderly people with future carebots. For instance, in her fieldwork on elders' interaction with pet robots, specifically Paro, a robotic seal, Turkle asks:

But what are we to make of this transaction between a depressed woman and a robot? When I talk to colleagues and friends about such encounters – for Miriam’s story is not unusual – their first associations are usually to their pets and the solace they provide. I hear stories of how pets “know” when their owners are unhappy and need comfort. The comparison with pets sharpens the question of what it means to have a relationship with a robot. I do not know whether a pet could sense Miriam’s unhappiness, her feelings of loss. I do know that in the moment of apparent connection between Miriam and her Paro, a moment that comforted her, the robot understood nothing. Miriam experienced an intimacy with another, but she was in fact alone. Her son had left her, and as she looked to the robot, I felt that we had abandoned her as well.

One of Turkle’s central concerns is how our affection “can be bought so cheap” relative to robots that are incapable of feeling:

I mean, what does it mean to love a creature and to feel you have a relationship with a creature that really doesn’t know you’re there. I’ve interviewed a lot of people who said that, you know, in response to artificial intelligence, that, OK, simulated thinking might be thinking, but simulated feeling could never be feeling. Simulated love could never be love. And in a way, it’s important to always keep in mind that no matter how convincing, no matter how compelling, this moving, responding creature in front of you – this is simulation. And I think that it challenges us to ask ourselves what it says about us, that our affections, in a certain way, can be bought so cheap.

The potential attribution of affection to robots by a lonely and relegated population is particularly problematic and raises the specter of how the emotionless management of elderly bodies is ultimately not care. The emergent agency of the Carebot Industrial Complex can dehumanize aging populations even more dramatically than current forms of warehousing the elderly, where the warmth of human hands, even those of a stranger, can make a radical difference in terms of the ethics of care experienced. Thus, the integration of carebots into elder care must never come at the exclusion of human care, but must rather remain complementary and subordinate to the moral value of human caregiving skills and practices.

In this chapter I have analyzed different dilemmas regarding the use of robots to serve humans living in the digital age. The design and deployment of carebots are inscribed in complex material and discursive landscapes that affect how we think of humanness in the socio-technological architectures through which we signify our lives. As stated at the outset of this chapter, imagining those spaces necessarily entails navigating the "fog of technology," which is always also a fog of inequality in terms of trying to decipher how the emerging architectures of our digitized lives will interface with pre-existing forms of domination and struggles of resistance premised upon our capacity to dissent. My main contention is anti-essentialist, namely, that the absence of a "human nature" makes us human and unpredictable. There is no underlying fixed essence to being human. Instead, we should be attentive to how what it means to be human, as I said in the opening and want to repeat here, must be strategically and empathically reinvented, renamed, and reclaimed, especially for the sake of those on the wrong side of the digital train tracks.

8 Networked Communities and the Algorithmic Other

To have a whole life, one must have the possibility of publicly shaping and expressing private worlds, dreams, thoughts, desires, of constantly having access to a dialogue between the public and private worlds. How else do we know that we have existed, felt, desired, hated, feared?

In this chapter, I use qualitative research findings to explore how algorithmically driven platforms impact the experience of being human. I take as my starting point Meadian scholar Benhabib’s reminder that, “the subject of reason is a human infant whose body can only be kept alive, whose needs can only be satisfied, and whose self can only develop within the human community into which it is born. The human infant becomes a ‘self,’ a being capable of speech and action, only by learning to interact in a human community” (Benhabib Reference Benhabib1992, 5). From this perspective, living in community and participating in communication with others is a central part of the human experience; it gives shape to Nafisi’s dance between public and private worlds and enables an agential path by which we come to know ourselves and forge deep bonds in community with others.

Certainly, early commentators celebrated the emancipatory potential of new online communities as they first emerged in the 1990s as spaces for both self-expression and community building (Ellis et al. Reference Ellis, Oldridge and Vasconcelos2004). Often designated communities of shared interest rather than communities of shared geography, they were expected to strengthen social cohesion by enabling people to explore their own interests and deepen their connection with others in new and exciting ways (see, e.g. Putnam Reference Putnam2000). Critics, on the other hand, worried that networked technology would further isolate people from each other and weaken community ties (Ellis et al. Reference Ellis, Oldridge and Vasconcelos2004). The advent of social media, those highly commercialized community spaces with all their hype of self-expression, sharing and connection, simply amplified the debate (Haythornthwaite Reference Haythornthwaite, Joinson, McKenna, Postmes and Reips2007).

For my part, I am interested in what happens to the human experience when community increasingly organizes itself algorithmically. What do we know about the ways in which people manage the interaction between self and others in these communities? What kind of language can we use to come to a normative understanding of what it means to be human in these conditions? How do algorithms influence both our interactions and this normative understanding?

To date, platform owners have encouraged the use of the language of control to describe life in the online community, calling upon individuals to make their own choices about withholding or disclosing personal information to others so they can enjoy what in 1983 the German Federal Constitutional Court called informational self-determination (Eichenhofer and Gusy Reference Eichenhofer, Gusy, Brkan and Psychogiopoulou2017). From this perspective, the human being interacting with others in community online is conceptualized apart from any relationship with others, and their agency is exercised by a binary control: zero, they withhold and stay separate and apart from others; one, they disclose and enjoy the fruits of publicity. As I have argued elsewhere (Steeves Reference Steeves, Roessler and Mokrosinska2015, Reference Steeves and DiGiacomo2016), this perspective has consistently failed to capture the complicated and constrained interactions described by people living in these environments.

More socially grounded critiques of this understanding of being human online have underscored the anaemic protection that individual control provides, largely by displacing the autonomous individual with a more social understanding of subjectivity (see, e.g. Cohen Reference Cohen2012; Koskela Reference Koskela and Lyon2006; Liu Reference Liu2022; Mackenzie Reference Mackenzie2015). This approach is interesting precisely because it can account for moments of human agency exercised in the context of a variety of resistive behaviours. For example, 11- and 12-year olds often report that they enjoy asking Siri and Alexa nonsensical questions that the machine cannot answer, as a way of asserting their mastery over the technology. Like the Rickroll memeFootnote 1 and the Grown Women Ask Hello Barbie Questions About Feminism videoFootnote 2, this is a playful way for people to deconstruct the ways in which they are inserted into technical systems and collectively resist the social roles they are offered by the platforms they use.

However, as Los (Reference Los and Lyon2006) notes, resistance is a poor substitute for agential action, precisely because current platforms are “intrinsically bound to social, political, and economic interests” (Thatcher et al. Reference Thatcher, O’Sullivan and Mahmoudi2016, 993) that may overpower the resister by co-opting their resistance and repackaging it to fit within the features that serve those interests. In this context, observers too often interpret the networked human as overly determined through the internalized norms of the platform or as restricted to a form of apolitical transgression/resistance similar to the Rickroll meme and other examples. Either way, we are left with critique but no path forward.

The project of being human in the digital world accordingly requires a better set of metaphors (Graham Reference Graham2013), a richer conceptualization that can capture the human experience within performances, identities and interactions shaped by algorithmic nudges. I suggest that Benhabib’s insight that we come to know ourselves and others by living in community is a productive starting point for developing such a lexicon, not least because the Meadian assumptions upon which it is based set the stage to reunite the search for human agency and the embrace of the social (Koopman Reference Koopman and Fairfield2010). It is also a useful way to extend the insights of relational autonomy scholars (Mackenzie Reference Mackenzie, Armstrong, Green and Sangiacomo2019; Roessler Reference Roessler2021) to do what Pridmore and Wang (Reference Pridmore and Wang2018) call for in the context of digital life – to theorize human agency without severing it from our social bonds to others.

To help give this shape, I start my discussion by revisiting some data I collected from young Canadians in 2017Footnote 3 about their experiences on algorithmically driven platforms. These data were first collected to see how young people navigate their online privacy by making decisions about what photos of themselves to post; we reported that they described a complicated negotiation driven by the need to be seen but not to be seen too clearly, given the negative consequences of a failed performance (Johnson et al. Reference Johnson, Steeves, Shade and Foran2017). However, the data also provide an interesting window into how young people make sense of their self-presentation and interactions with others in networked community spaces that are shaped by algorithms.

Accordingly, I conducted a secondary analysis of the data to explore these elements. I start this chapter by reviewing the findings of that analysis, focusing on the ways in which my participants responded to the algorithms that shaped their online experiences by projecting a self made up of a collage of images designed to attract algorithmic approval as evidenced by their ability to trigger positive responses from a highly abstract non-personalized online community. I then use Meadian notions of sociality to offer a theoretical framing that can explain the meaning of self, other and community found in the data. I argue that my participants interacted with the algorithm as if it were another social actor and reflexively examined their own performances from the perspective of the algorithm as a specific form of generalized other. In doing so, they paid less attention to the other people they encountered in online spaces and instead oriented themselves to action by emulating the values and goals of this algorithmic other. Their performances can accordingly be read as a concretization of these values and goals, making the agenda of those who mobilize the algorithm for their own purposes visible and therefore open to critique. I then use Mead’s notion of the social me and the indeterminate I to theorize the limited and constrained moments of agency in the data when my participants attempted – sometimes successfully, sometimes not – to resist the algorithmic logics that shape networked spaces.

8.1 What Self? What Other? What Community?

As noted, in 2017 we conducted qualitative research to get a better sense of young people's experiences on social media. Our earlier work (Bailey and Steeves Reference Bailey and Steeves2015) suggested that young people rely on a set of social norms to collaboratively manage both their identities and their social relationships in networked spaces and that they are especially concerned about the treatment of the photos they post of themselves and their friends. We wanted to know more about this, so we asked 18 teenagers between 13 and 16 years of age from diverse backgrounds, 4 of whom identified as boys and 14 of whom identified as girls, to keep a one-week diary of the photos they took. They then divided the photos into three categories:

  • Those photos they were comfortable sharing with lots of people;

  • Those photos they were comfortable sharing with a few people; and

  • Those photos they were not comfortable sharing with anyone.Footnote 4, Footnote 5

The photos we collected through this process were largely what we expected to see – school events, group shots, food, lots of landscapes. But when we sat down and talked to our participants, the discussion was not at all what we expected. It quickly became clear that the decisions they were making about what photos to share with many people really had very little to do with their personal interests or their friendships. Although they described the networked world as a place where they could connect with their community of family and friends, the decision-making process itself did not focus on what their friends and family would like to see or what they would like to show them of themselves. Instead, it focused on “followers”, an abstract and anonymous audience they assumed was paying attention to a particular platform. Because of this, they positioned themselves less as people exploring their own sense of identity in community and more as apprentice content curators responsible for feeding the right kind of content to that abstract audience.

The right kind of content was determined by a careful read of the algorithmic prompts they received from the platform. Part of this involved them doing the work of the platform (Andrejevic Reference Andrejevic2009); for example, they universally reported that they maintained Snapchat streaks by posting a photo a day, even when it was inconvenient, because it was what the site required of them. Interestingly, they did this most easily by posting a photo of nothing. For example, one participant was frequently awakened by an alert just before midnight to warn her that her streaks were about to end, so she would cover her camera lens with her hand, take a photo of literally nothing and post the photo as required. It was very clear that the posts were not to communicate anything about themselves or to connect with other people, but to satisfy the demands of the algorithmic prompt.

However, the bulk of their choices rested on a careful analysis of what they thought the audience for a particular platform would be interested in seeing. To be clear, this audience was explicitly not made up of friends or other teens online; it was an abstraction that was imbued with their sense of what the algorithm that organized content on the site was looking for. For example, they all agreed that a careful read of the platform and the ways content was algorithmically fed back to them on Instagram indicated that it was an “artsy” space that required “artsy” content. Because of that, if you had an Instagram page, you needed to appear artsy, even if you were not. Moreover, given the availability of Insta themes, it was important to coordinate the posts to be consistently artsy in a “unique” way, even though that uniqueness did not align with your own personal tastes or predilections.

From this perspective, the self-presentations that they offered to the site revealed very little of themselves aside from their fluency in reading the appropriate algorithmic cues. The digital self was accordingly a fabricated collage of images designed to attract algorithmic approval; in their words, “post worthy” content was made up of photos that said something “interesting” not from their own point of view but from the point of view of the abstract audience that would judge how well they had read the algorithmic prompts. One of the girls explained it this way:

Because VSCO is more artsy, for me, like, I know I post my cooler pictures over there. I thought this was a really cool picture [of the cast of the Harry Potter movies], and I thought maybe a lot of people would like it and like to see it, because a lot of people are fans of Harry Potter, obviously.

She then explained that the point was not to share her interest in Harry Potter with people she knew on or offline (in fact, she was not a fan); it was to identify a theme that would appeal to the algorithm so her content would be pushed up the search results page and attract likes from unknown others. Moreover, she felt that her choice was vindicated when two followers added the photo to their collections. In this context, a pre-existing audience had already made its preferences known and the online marketplace then amplified those preferences by algorithmically prioritizing content that conformed to them; accordingly, the most “interesting” self to portray was one that mirrored the preferences of that marketplace independent of whether or not those preferences aligned with the preferences of the poster.

This boundary or gap between their own preferences and the preferences they associated with their online self was intentionally and aggressively enforced. This was best illustrated when one of the participants was explaining why a photo she took of an empty white bookcase against a white wall could not be shared with anyone. She told me that she had originally planned on posting it to her Instagram page (her theme was “white” because monochromatic themes were “artsy”) but then she noticed a small object on the top shelf. She expanded the photo to see that it was an anime figurine. It was there because she was an ardent anime fan. However, she was distraught that she had almost inadvertently posted something that could, if the viewer expanded the photo, reveal her interest online. She eschewed that type of self-exposure because it could be misconstrued by the algorithm and then made public in ways that could open her up to judgement and humiliation. As another participant explained, photos of your actual interests and close relationships aren’t

… something that you throw outside there for the whole world to see. It’s kind of something that stays personally to you … when I have family photos I feel scared of posting them because I care about my family and I don’t want them to feel envied by other people. So, yeah … Cuz I don’t want – cuz I kinda – I really like my family. I really like my brother. I don’t want anyone making fun of my brother.

To avoid these harms, all our participants reported that they collaborated with friends to collectively curate each other's online presences, taking special care with images. Specifically, no one posted photos of faces, unless they were part of a large group and taken from a distance. Even then, each photo would be perused to see if everyone "looked good" and, before posting, it would be vetted by everyone in the photo to make sure they were comfortable with it going online.

Interestingly, there were two prominent exceptions to these rules. The first occurred when they were publicly showing affection to friends in very specific contexts that were unambiguous to those who could see them. This included overtly breaching the rules on birthdays: publicly posting a “bad” photo of a friend’s face without permission, so long as it was tagged with birthday good wishes, was a way to demonstrate affection and friendship, akin to embarrassing them by decorating their lockers at school with balloons. The second exception involved interacting with branded commercial content. For example, one of the girls had taken a series of shots of herself at Starbucks showing her face with a Starbucks macchiato beside it. She was quite confident that this photo would be well received because Starbucks was a popular brand. Similarly, our participants were confident that photos and positive comments posted on fan sites would be well-received because they were part of the online marketplace.

All other purposes – actually communicating with friends or organizing their schedules, for example – occurred in online spaces, such as texting or instant messaging apps, that were perceived to be more private. But even there, they restricted the bulk of their communications to sharing jokes or memes and reserved their most personal or intimate conversations for face-to-face interactions where they couldn’t be captured and processed in ways that were outside their control.

8.2 Understanding Algorithmic Community

The snapshot in Section 8.1 paints a vivid picture of a digital self that seeks to master the algorithmic cues embedded in networked spaces to self-consciously fabricate a collage of images that will attract approval from an abstracted, highly de-personalized community. From my research participants’ perspective, networked interaction is therefore not simply about the self expressing itself to others, as throughout this process personal preferences are carefully and meticulously hidden. Rather, it is about the construction of an online self that is “unique” in the sense that it is able to replicate the preferences of the online marketplace in particularly successful ways. Success is determined through feedback from an abstracted and anonymous group of others who view and judge the construction but, to attract the gaze of those others, content must first be algorithmically selected to populate the top of search results. To do this well, the preferences, experiences and thoughts that are unique to the poster must be hidden and kept from the algorithmic gaze, and the poster must post content that will both be prioritized by the algorithm and conform to the content cues embedded by the algorithm in the platform. This is a collaborative task; individuals carefully parse what online self they choose to present but also rely on friends and family to co-curate the image of the self, by helping hide the offline self from the algorithmic gaze and by posting counter content to repair any reputational harm if the online self fails to resonate with the preferences of the online marketplace (see also Bailey and Steeves Reference Bailey and Steeves2015).

To better understand these emerging experiences of the online self, other and community, I suggest we revisit Mead’s understanding of self and community as co-constructed through inter-subjective dialogue. For Mead, an essential part of being human is the ability to anticipate how the other will respond to the self’s linguistic gestures, to see ourselves through the other’s eyes. This enables us to put ourselves in the position of the other and reflexively examine ourselves from the perspective of the community as a whole. He calls this community perspective the “generalized other” (Mead Reference Mead1934; see also Aboulafia Reference Aboulafia2016; Prus Reference Prus, Herman and Reynolds1994).

Martin (Reference Martin2005) argues that this ability to take the perspective of the other is a useful way to understand community because it calls upon us to pay attention to “our common existence as interpretive beings within intersubjective contexts” (232). Certainly, my participants can be understood as interpreters of the social cues they found embedded in networked spaces, exemplifying Martin’s understanding of perspective taking as “an orientation to an environment that is associated with acting within that environment” (234). What is new here is that my participants described a process in which they gave less attention to their interactions with other people in that environment and instead oriented themselves to action by carefully identifying and emulating the perspective of the algorithm that shaped the environment itself.

This was often an explicit process. When they explained how they were trying to figure out the algorithm’s preferences and needs, they were not merely seeking to reach through the algorithm to the social actors behind it to interpret the expectations of the human platform owners or even the human audience that would see their content. Rather, by carefully reading the technical cues to determine what kind of content was preferred by the platform and offering up a fabricated collage of images designed to attract its approval, they both talked about and interacted with the algorithm as if it were another subject.

This is a kind of reverse Turing Test. They were not fooled into thinking the algorithm was another human. Instead, they injected the algorithm with human characteristics, seeking to understand what was required of them by identifying the algorithm’s preferences and interacting with it as if it were another subject. They did this both directly (by feeding the platform information and watching to see what response was communicated back to them) and indirectly through the “followers” who acted as the algorithm’s proxies. Moreover, the importance they accorded to these algorithmic preferences was demonstrated by the kinds of identities my participants chose to perform in response to this interaction – such as Harry Potter Fan and Starbucks Consumer – even when these identities did not align with the selves and community they co-constructed offline and on private apps with friends and family.

Daniel (Reference Daniel2016) provides an entry point into exploring this gap between online and offline selves when he rejects the notion of the unitary generalized other and posits a multiplicity of generalized others that can better take into account experiences of social actors who are located in a multiplicity of communities. Certainly, his insight that “the self is constituted by its participation in multiple communities but responds to them creatively by enduring the moral perplexity of competing communal claims” (92) describes the difficulties my participants talked about as they sought to be responsive to the multiple perspectives of the various social actors in their lives, including family, friends, schoolmates and algorithms. But reconceiving these various audiences as a “plurality of generalized others” (Martin Reference Martin2005, 236), each of which reflects a self based on a specific set of expectations and aspirations shared by those inhabiting a particular community space (Daniel Reference Daniel2016, 99), makes it possible to conceptualize – and analyze – the algorithm as the algorithmic other, with its own commercially-driven values and goals, that shapes selves and interactions in networked spaces.

To date, the most comprehensive critique of the commercial values and goals that shape the online environment has been made by Zuboff (Reference Zuboff2019). She argues that algorithms act as a form of Big Brother or, in her words, “a Big Other that encodes the ‘otherized’ viewpoint of radical behaviorism as a pervasive presence” (20). From this perspective, the problem rests in the fact that the algorithmic other does not operate to reflect the self back to the human observer so the human can see its performances as an object, but instead quantifies the fruits of social interaction in order to (re)define the self as an object that can be nudged, manipulated and controlled (Lanzing Reference Lanzing2019; McQuillan Reference McQuillan2016; Steeves Reference Steeves2020). In this way, the algorithmic other serves to:

automate us … [and] finally strips away the illusion that the networked form has some kind of indigenous moral content – that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Instead, digital connection is now a brazen means to others’ commercial ends.

However, Zuboff’s critique is dissatisfying as it gives us no way to talk about agency: if we are fully automated, then we have been fully instrumentalized. It also fails to capture the rich social-interactive context in which my research participants sought to understand and respond to the algorithms that shape their public networked identities. Once again, I suggest that Mead can help us because he lets us unpack the instrumentalizing logic of the nudge without giving up on agency altogether.

Certainly, my participants’ experiences suggest that the kinds of identities that we can inhabit in networked spaces are constrained to those that conform to the commercial imperatives of the online ecology. However, the notion that a particular community constrains the kinds of identities we are able to experiment with is not new. As Daniel (Reference Daniel2016) notes:

It is crucial to appreciate that Mead’s generalized other is aggressive and intrusive, not passively composed by the self … This is clearer in the state of social participation, which requires the self to organize its actions so as to fit within a pattern of responsive relations whose expectations and aspirations precede this particular self’s participation … The generalized other should be understood as [this] pattern of responsive relations, which is oriented toward particular values and goals.

(100)

From this perspective, the types of identities that we see performed online in response to the algorithmic other concretize the values and goals embedded in online spaces by platform owners who mobilize algorithms for their own profit; and, by making those values visible, they open them up to debate. This makes the algorithm a key point of critique because it is a social agent that operates to shape and instrumentalize human interactions for the purposes of the people who mobilize it. From this perspective, to solve the kinds of polarization, harassment and misinformation we see in the networked community we must start by analyzing how algorithms create a fruitful environment for those kinds of outcomes. Unpacking how this works is the first step in holding those who use algorithms for their own profit to public account.

The sociality inherent in my participants’ interactions with the algorithmic other also lets us account for those small moments of agency reflected in the data. Mead posits that the self interacts with the generalized other in two capacities. The first is the social me that is performed for the generalized other and reflected back to the self so the self can gauge the success of its own performance. As noted in Section 8.1, the intrusiveness of the algorithmic other constrains the social me that can be performed in networked spaces. This is exemplified by my participants’ concern that their networked selves – artsy consumers of branded products – conform to the expectations of the algorithmic other even when they can’t draw a stick figure or don’t like coffee. However, the second capacity of the self is the I, the self that observes the social me as an object to itself and then decides what to project next.

Mead accordingly helps us break out of algorithmic determination by anchoring agency in the indeterminacy of the I as a future potentiality. This indeterminacy is constrained because it is concretized as the social me as soon as the I acts. But its emergent character reasserts the possibility of change and growth precisely because it can only act in the future. By situating action in a future tense of possibility, we retain the ability to resist, to choose something different, to be unpredictable, to know things about ourselves that have not yet come into being. In this sense, the algorithm can constrain us, but it cannot fully determine us because we continue to emerge.

Certainly, my research participants sought to exercise agency over their online interactions by revealing and hiding, making choices as part of an explicitly conscious process of seeing the objective self reflected back to them. They also wrested online space away from the algorithm on occasion. Birthday photos, for example, were consciously posted in order to break the algorithmic rules and to connect not with the abstract audience but with the humans in their lived social world, a social world which both interpolates with and extends beyond networked spaces. This demonstrates both a familiarity with and an ability to pull away from the algorithmic other in favour of the generalized other they experience in real world community.

8.3 Conclusion

I argue that my participants’ experiences demonstrate the paucity of identities available to networked humans who interact on sites that are shaped by the instrumental goals of profit and control. But those same experiences also underscore the rich sociality with which humans approach algorithmically driven ecologies, shaping their own interactions with the environment by injecting social meaning into the algorithm through their reading of the algorithmic other.

Certainly, the algorithmic positioning of human as object for its own instrumental purposes rather than for the social purposes of the self leaves us uneasy. Although we may feel reduced to an online self that is “compactified” into “a consumable package” and wonder if we can “know what it means to exist as something unsellable” (Fisher-Quann Reference Fisher-Quann2022), the point is we still wonder. Once again, agency exists as a potentiality in the moment of our own perusal of the self as object, in spite of – or perhaps because of – our interactions with the aggressive and intrusive nature of all generalized others (Daniel Reference Daniel2016).

Moreover, by conceiving of the algorithm as a social actor, we can extend the moment of human agency and bring the values and goals embedded in the algorithm out of the background and into the foreground of social interaction. From this perspective we can open up the algorithmic black box and read its instrumental intentions through the performances it reflects back to us because we recognize and interact with the algorithmic other as other. From this perspective, the algorithm only “masquerades as uncontested, consensus reasons, grounds, and warrants when they are anything but” (Martin Reference Martin2005, 251). Acknowledging the algorithm as an inherent part of online sociality helps us begin the hard task of collectively confronting the politics inherent in the algorithmic machine (McQuillan Reference McQuillan2016, 2).

Mead’s Carus Lecture in 1930 is prophetic in this regard. He said:

It seems to me that the extreme mathematization of recent science in which the reality of motion is reduced to equations in which change disappears in an identity, and in which space and time disappear in a four-dimensional continuum of indistinguishable events which is neither space nor time is a reflection of the treatment of time as passage without becoming.

Hildebrandt and Backhouse (Reference Hildebrandt and Backhouse2005) make the same point when they argue that the data that algorithms use to sort us are a representation, constructed from a particular point of view, of a messy, complicated, nuanced and undetermined person. They warn us that, if our discourse confuses the representation of an individual with the lived sense of self, we will fail to account for the importance of agency in the human experience. We will also be unable to unmask the values and goals of those humans who mobilize algorithms in the networked world for their own purposes.

9 The Birth of Code/Body

They ask me how did you get here? Can’t you see it on my body? The Libyan desert red with immigrant bodies, the Gulf of Aden bloated, the city of Rome with no jacket. … I spent days and nights in the stomach of the trucks; I did not come out the same. Sometimes it feels like someone else is wearing my body.Footnote 1

We are Black and the border guards hate us. Their computers hate us too.Footnote 2

This book contends with various ways of being human in the digital era, and this chapter sets out to describe what it means to have a human body in our time. Much has been written about the colonial, racializing and gendered continuities of perceiving, sorting and discriminating bodies in a digital world. However, nothing like the digital has transformed the materiality of the body in its very flesh and bone. It seems redundant to say that the body is the prerequisite to being human, yet this seemingly superfluous fact raises the question of how bodies function in in-between worlds: they flow in this world’s digital veins and yet rigidly represent decisive characteristics. They seem unreal, an amalgamation of data at times, while at other times fingerprints, iris scans and bone tests portray a cage, a trap, a body that betrays. This contrast is especially visible in uncertain spaces where identity becomes crucial and only certain categories of humans can pass, such as borders and refugee camps. These spaces do not only obscure the body while exposing it; they also exist in a complex mixture of national jurisdiction, international regulations and increasingly private “stakeholders” in immigration management. In addition to the severity with which the datafication of bodies is experienced in these spaces, their deliberate unruliness paves the way for them to become technological testing grounds (Molnar Reference Molnar2020); for example, technologies developed for fleeing populations were used for contact tracing during the COVID-19 pandemic.

The relationship between body, datafication and surveillance has been scrutinized since the early days of digital transformation. Today’s most debated issues, such as algorithmic bias, were already the subject of warnings, and the ramifications of their discriminatory assumptions for marginalized people were highlighted at the end of the 1980s (Gandy Reference Gandy1989). Similarly, the predictive character of aggregated data and the consequences of profiling were analysed (Marx Reference Marx1989). From these early engagements, a substantial body of work developed showing how routinely technologies are used to govern, datafy and surveil the body (see, e.g. Bennett et al. Reference Bennett, Haggerty, Lyon and Steeves2014). Additionally, surveillance scholars discussed how the “boundary between the body itself and information about that body” is increasingly transforming (Van der Ploeg Reference Van der Ploeg, Ball, Haggerty and Lyon2012, 179). Building on this rich body of literature and on personal experiences of immigration, exile and entrapment, this chapter revisits the body, being uncomfortable in/with/within it and yet being aware of its power to define whether one is considered human enough to bear rights, feelings and existence. Mirroring the chapter’s movement across the boundaries of the material and the virtual, the text also oscillates between academic thinking, autobiographical accounts, pictures and poesy, denoting the discomfort of being in a Code/Body.Footnote 3 In this chapter, poetic language remedies the absence of the performative, helping with the linguistic distress of finding the right words to describe embodied feelings.

9.1 From Data Doubles to Embodiment

The scholarship on datafication, surveillance and digital transformation in the 2000s is infatuated with what can be called the demise of the material body. The speed of datafication and digital change led to the idea that the surveillance society gives rise to disappearing bodies (Lyon Reference Lyon2001); the body is datafied and represented through data in a way that obscures its materiality. Although such conceptualizations had been discussed before, especially by feminist and queer scholars, the liberatory nature of these feminist interpretations of cyborg bodies (Haraway Reference Haraway1985) and body assemblages was not carried over into these new understandings of the datafied and surveilled body. In their influential essay on surveillant assemblages, Haggerty and Ericson compare the digital era with Rousseau’s proclamation, “man was born free, and he is everywhere in chains,” by claiming that nowadays “humans are born free, and are immediately electronically monitored” (Haggerty and Ericson Reference Haggerty and Ericson2000, 611). The subjectivating effect of surveillance, then, is instantly interlinked with basic rights and the meaning of being human. The body, they argue, is positioned within this surveillance assemblage: it is “broken down into a series of discrete signifying flows” (Haggerty and Ericson Reference Haggerty and Ericson2000, 612). Contrary to the Foucauldian mode of monitoring, the body needs to be fragmented to be observed. Fragments can be combined or recombined into “data doubles”: ones that “transcend human corporeality and reduce flesh to pure information” (Haggerty and Ericson Reference Haggerty and Ericson2000, 613).

The scholarly debates on bodies in the following two decades centred on the transformation of the body “via practices of socio-technical intermediation” (French and Smith Reference French and Smith2016, 9). The body and its datafication, visualization, mediation and multiplication have become increasingly important. Research on the sorting, profiling and reification of marginal identities (of race/gender/class/etc.), and on inclusion and exclusion, has proliferated and successfully demonstrates how bias, racism, oppression and discrimination are injected into digital lives. The data double revealed the concurrent processes of the body’s objectification – transforming its characteristics into data – and its subjectivation through the socio-technical processes of datafication. As Zuboff assertively writes in The Age of Surveillance Capitalism, “the body is simply a set of coordinates in time and space where sensation and action are translated as data” (Zuboff Reference Zuboff2019, 203). In this reading of the body, behavioural surplus is the engine of surveillance capitalism and the body is only another source of data. However, recent technological advancements, especially in using bodily features for identification, have started to expand and reconfigure such accounts. More recent studies underline the body’s centrality, for example, in big data surveillance and the manipulation of the “surveilled subject’s embodied practices” (Ball et al. Reference Ball, Di Domenico and Nunan2016), or critically examine how biometric technologies transform the relationships between the body and privacy (Epstein Reference Epstein2016). It is argued that the data body is not only a change in how bodies are represented; there is also an ontological change: the materiality of the body “and our subjective forms of embodiment that are caught in this historical process of change” are transforming (Van der Ploeg Reference Van der Ploeg, Ball, Haggerty and Lyon2012, 179). This chapter contributes to these later discussions, where the body is not only central as a source of data but has its own agency as an actant in data assemblages.

9.2 The Birth of Code/Body

Following the global digital transformation, discussions of privacy, data protection, algorithmic harm and similar issues have entered academic discourse and public debate. Recent years have seen an increase in reporting on Big Tech companies as emerging new actors in the realm of international governance. However, only those events that entail geopolitical or socio-economic relations to Western countries are deemed relevant. For example, news of Chinese payment methods based on facial recognition technology rapidly reached the Western media (Agence France-Presse 2019), but much less attention was paid to the internal politics of digitalization in the Global South or to the new e-governance measures of international governance institutions. This reluctance is intensified when digital technologies target communities that are marginalized, stateless or economically disadvantaged. UNHCR’s use of iris scanning for refugee cash assistance illustrates a case of extreme datafication of the body imposed on people in dire need of assistance, who have hardly any voice to consent to or refuse the imposed technologies. Ninety per cent of refugees in Jordan are registered through EyeCloud, “a secure and encrypted network connection that can be used to authenticate refugees against biometric data stored in the UNHCR database” (UNHCR 2019). Iris scanning is then used for payment in the camp’s supermarket and to calculate and pay wages for work inside the camp; it replaces any monetary transaction. EyeCloud demonstrates how current datafication practices do not stop at using the datafied body for identification and representation but actively integrate the body as part of the data machinery. This instrumentalized body simultaneously carries the gaze of surveillance and guards itself against itself. The consequences are painful: more than a decade ago, The Guardian reported that asylum seekers burn their fingertips on electric stoves or with acid to circumvent the Dublin regulations and avoid being returned to their point of arrival, usually in Greece or Italy (Grant and Domokos Reference Grant and Domokos2011). The betraying body, however, regenerates fingertips after two weeks. Similarly, in cases where the age assessment of a claimed minor proves inconclusive, the person could be referred for a bone density test of the wrist by x-ray in Malta (Asylum Information Database Reference Akbari2023) or a “dental x-ray of the third molar in the lower jaw and MRI of the lower growth plate of the femur bone” in Sweden (Rättsmedicinalverket 2022). In these cases, the immigration authorities trust the body’s truthfulness and the accuracy of the medical sciences over supposedly mendacious and deceitful asylum seekers. Table 9.1 shows the extent of the data categories gathered on visa, immigration or asylum applicants travelling to Europe. The body increasingly becomes a vehicle for knowing the real person behind the application.

Table 9.1 Data categories stored in European immigration data banks.Footnote 4

| Data category | SIS | VIS | EURODAC | EES | ETIAS | ECRIS-TCN |
| --- | --- | --- | --- | --- | --- | --- |
| Alphanumeric data |  |  |  |  |  |  |
| General information (name, age, gender, nationality) | x | x | (x) | x | x | x |
| Occupation |  | x |  |  | x |  |
| Education |  |  |  |  | x |  |
| Reason for travel |  | x |  |  |  |  |
| Funds for living expenses |  | x |  |  |  |  |
| Address, phone number, email address, IP address |  |  |  |  | x |  |
| Past or present felonies | x |  |  |  | x | x |
| Recent stay in a war or conflict region |  |  |  |  | x |  |
| Biometric data |  |  |  |  |  |  |
| Fingerprints | x | x | x | x |  | x |
| Facial image | x | x | (x) | x |  | x |
| Genetic data | x |  |  |  |  |  |

x = stored; (x) = stored conditionally. SIS = Schengen Information System; VIS = Visa Information System; EES = Entry/Exit System.

The body acts as a trap. It transcends the current arguments about profiling, sorting or bias based on personal data. What we witness is not just the datafication of the body but its functioning as ID card, debit card or labour-hours registration sheet. If cash machines, IDs and punched cards were the technologies of yesterday, today these features are transferred to the body. The body becomes the payment system, the surveillance machine, the border. It is integrated into the datafied society’s infrastructure. It is platformized humanity. It is an integral material part of the bordering. On the Eastern European borders, heartbeat detectors, thermal-vision cameras and drones are used to unlawfully return asylum seekers who manage to cross the border (Popoviciu Reference Popoviciu2021). The border is not a line on the map; it is everywhere (Balibar Reference Balibar2012, 78). The border is simultaneously a body on the move and a vehicle to keep out a body that does not belong. Consequently, the body/border can efficiently prevent flight since it entraps. When the Taliban got hold of the biometric data banks that Western governments, the UN and the World Bank left behind in 2021, many activists and experts who had collaborated with the coalition went into hiding because any border passage would put them in immediate danger of identification (Human Rights Watch 2022). They went into indefinite house arrest within the skeletons of their own bodies. This notion of corporeal entrapment or embodied surveillance resonates with new conceptualizations of how we understand space in the era of datafication. Coded space is defined as “spaces where software makes a difference to the transduction of spatiality, but the relationship between code and space is not mutually constituted” (Kitchin and Dodge Reference Kitchin and Dodge2011, 18). The digitalization of border security at airports or the use of digital technologies in the classroom are examples of coded space. In all these instances, when the technology fails, there are still ways to finish the intended task: if the machine at a fully automated high-tech airport does not recognize you, there is always an officer who can confirm the authenticity of your ID. In code/space, however, the existence of the space is dependent on the code and vice versa. If you are attending an online presentation and the technology fails, that ends your interaction. Code/space highlights the dyadic relationship between the two and their co-constructive nature (Kitchin and Dodge Reference Kitchin and Dodge2011). This dyadic relationship also explains the sense of corporeal entrapment. The datafied or coded body still exists, moves and functions. It has a mutual relationship with the data it produces but is not entirely constituted through it. We have our virtual profiles on social media platforms or wear smart watches but, as soon as we leave such spaces, we cease to be part of their universe. The Code/Body, however, is born in co-construction with the code and ceases to exist if the code fails.
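
Purely as an illustrative aside, and not drawn from Kitchin and Dodge’s text or from this chapter, the dependency structure of this distinction can be sketched in a few lines of code. The function names below are hypothetical; the point is only that a coded space degrades to a human fallback when the software fails, whereas a code/space has no existence without the code.

```python
# Minimal illustrative sketch (hypothetical names): coded space vs. code/space.

def coded_space_border_check(machine_recognizes_traveller: bool) -> str:
    """Coded space: software mediates the crossing, but the space survives its failure."""
    if machine_recognizes_traveller:
        return "crossed via the automated e-gate"
    # The task can still be completed without the code:
    return "crossed via the officer at the manual desk"


def code_space_presentation(connection_is_up: bool) -> str:
    """Code/space: the space exists only through the code; no code, no space."""
    if connection_is_up:
        return "the online presentation continues"
    # There is no non-digital fallback; the interaction ends here.
    raise RuntimeError("connection lost: the shared space ceases to exist")


if __name__ == "__main__":
    print(coded_space_border_check(machine_recognizes_traveller=False))
    try:
        code_space_presentation(connection_is_up=False)
    except RuntimeError as err:
        print(err)
```

On this reading, the Code/Body sits on the code/space side of the contrast: when the code fails, there is no manual desk to fall back on.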

The Code/Body is an extension of the code/space argument to the Foucauldian corporeal space, constantly subject to governmentality. Consequently, the surveillant assemblage introduced by Haggerty and Ericson (Reference Haggerty and Ericson2000) is not only about the production of the data double. Their use of the Deleuzian conceptualization of the “body without organs” as an abstraction of the material body does not reflect the co-construction of the virtual and the material. Code/Body, on the other hand, offers a way to understand how the materiality of the body remains integral to our understanding of the body’s datafication while transcending the virtual–material dichotomy. The Code/Body carries manifold wounds: the bodily pain of being wounded – burnt fingertips, lungs full of water, starved behind border walls – and the hidden wounds of datafied exclusion. Although algorithms try to hide their bias, Code/Body reveals that race has a colour, ethnicity has an accent and gender can be “scientifically” examined. Underlining such material features of the body and their role in defining the Code/Body emphasizes “the co-constitution of humans and non-humans” (Müller Reference Müller2015, 27) and brings our attention to how things are held together and how datafied societies function. Extending the assemblage point of view, Actor Network Theory (ANT) provides a better empirical ground for understanding the politics of these networks. It shifts the focus towards outward associations and away from the intrinsic characteristics of a thing or its abstraction. The Code/Body highlights the co-constitution of these outward–inward associations and the body’s agency in changing the flows and associations within the assemblage. From this perspective, things have an open and contested character (Mol Reference Mol, Law and Hassard1999, 75), and the body is performative, meaning that its position within an assemblage can redefine its reality. Consequently, if one thing can be shaped by a variety of practices and networked connections, it can be configured in multiple and ambivalent ways. Lawful immigrants from internationally undesirable countries experience this multiple configuration throughout their border experiences. Visas to countries that have been visited before are rejected; border officers ask irrelevant questions to make the entry unpleasant or, surprisingly, act extra friendly. Automated passport check stations flicker a red light for double control but, on the next visit, go green. The assemblage changes and the body integrated in it changes accordingly. The Code/Body is, then, the ultimate device to realize and fulfil this fluidity. As a result, it is highly political how assemblages take shape, which actants dominate the flows, and which of the multiple realities of a thing are given preference. The ontological politics (Mol Reference Mol, Law and Hassard1999, 74) of Code/Body define the conditions of the possibility of being a human. Depending on their position in an assemblage, a person’s body could be reconfigured very differently. Heartbeats mean one thing on a smartwatch at a spinning class and another when sitting behind a lie detector machine at a border detention centre. Such politics of being are not only about positionality and “where we are” but also include temporality and “when we are.” I was held twice at a UK border detention centre despite having a valid visa.
On both occasions, a sympathetic border officer took upon himself the time-consuming task of removing me from the “bad list.” It feels like a wonder that, within 40 minutes, a detained suspicious person, banished to a corner of the airport under the watchful eyes of a guard, turns into a legal traveller. Like a thing, the body can be understood as “a temporary moment in an endless process of assembling materials, a partial stabilisation and a fragile accomplishment that is always inexorably becoming something else, somewhere else” (Gregson et al. Reference Gregson, Crang, Ahamed, Akhter and Ferdous2010, 853). Code/Body, again, facilitates this temporary and mutable process of re-configuration. The more the body is datafied, the more physical it becomes.

9.3 A Moebius Body

The Code/Body not only blurs the virtual–material binaries but also nuances the politics of sorting by questioning the discourse of inclusion–exclusion through digital technologies, platforms and algorithms. Code/Body extends concepts such as the mediated body or the quantified self to propose an existential situation in which the self ceases to exist outside the code. This newly contended notion of the self is the prerequisite of citizenship in the smart city – the utopian dream of urban life towards which all societies are ambitiously moving. In future smart cities, we will not only witness behaviour nudging or the gamification of obedience (Botsman Reference Botsman2017); cities will be transformed into experimental labs, where the urban citizen is produced through measuring (Mattern Reference Mattern2016). On top of gathering data through sensors, following movements and urban flows, and closely watching bodies, the smart city turns the body itself into an instrument of belonging. To be in, it needs to be outed. The body needs to be thoroughly datafied to become integrated into the smart infrastructure of the city. Living in the Code/Body is a constant ride on a Moebius ring: the inside and the outside depend on how one defines their situation or how their situation is defined for them. The Code/Body could belong to an urban assemblage at a specific time and lose all its associations through a slight change in the code the next second. In Figure 9.1, I have drawn a Moebius ring on the verdict of my complaint against UK immigration’s refusal of my tourist visa.

Figure 9.1 Moebius strip drawn on the Immigration Court’s verdict: a decision of the United Kingdom First-tier Tribunal (Immigration and Asylum Chamber), dated 21 April 2015, on the author’s appeal against the refusal of a long-term visitor visa by the entry clearance officer in Istanbul. The appeal was handled as a paper case, without legal representation, at Columbus House in Newport; personal details are redacted.

© Azadeh Akbari, Reference Akbari2023

I had lived in London for four years and, after I gave up my residency and returned to Iran, my tourist visa application was rejected. I was confused: I used to belong, to work, to live and to participate actively in British society. Why was I suddenly out? Curiously, the judge had suggested that, since I could use technologies such as Skype to contact my friends in the UK, my human rights were not deemed to be violated. The code kept my body outside through the very affordances meant to bring us closer. The movement between inside and outside makes bodily functions fuzzy, as if one can die while breathing and live forever even after the heart stops. The following quote from a Somali refugee (now residing in Europe) initially shocks the reader: Did they drown?

Immediately after this thought, it seems as if his body has been revived from a mass of drowned refugees.

I was caught by the Libyan coastguard three times – first time from Qarabully; second time, Zawyia; third time, Zuwarna. And my fourth time, we drowned. And the fifth time, I made it to safety.

A female Kurdish Iranian protestor in the Woman, Life, Freedom movement – a movement of Iranian women against the compulsory Islamic dress code and discriminatory laws – reflects on how her body experiences the images she had previously seen on (social) media. She writes about how the physical and the digital blend into each other and how, despite the fear of pain instilled by watching social media videos, the real batons or pellets do not cause the expected physical pain.

I once received loud cheers when I escaped a scene of confrontation with security forces and ran into the crowd. … The next morning when I was looking over my bruises in the mirror, the details of the confrontation suddenly passed before my eyes. … I had not simply been beaten; I had also resisted and threw a few punches and kicks. My body had unconsciously performed those things I had seen other protestors do. I remembered the astonished faces of the guards trying to subdue me. My memory had just now, after a time interval, reached my body.

(L 2022)

The body’s agency leaks into consciousness only after it has performed a task. In moments of upheaval, when the oppressed body stands up to its oppressors, it tries to distort its entrapment. Despite being surveilled, controlled and censored, the body lives the unpermitted imaginary: it kicks the security forces, it runs and hides, it shows skin. It revolts against the sensory limitations imposed on it. In Figure 9.2, Woman, Life, Freedom protestors have covered a subway CCTV camera with female menstrual pads. Their female bodies withstand the gaze that controls, hides, oppresses and objectifies them. Next to the camera is a hashtag with an activist’s name: this time, virtual campaigns fuse into the material reality of the city. The Code/Body, which is meant to be part of the surveillant machinery through CCTV cameras and facial recognition technologies, blinds the omnipresent eye with its most female bodily function: menstruation.

Figure 9.2 Blocking CCTV cameras in public transportation with menstrual pads: a grayscale photograph of a sanitary pad stuck over a perforated ceiling panel, with handwritten text nearby.

Just as some bodily features are silenced, some bodies are marked as intangible, unrepresentable and unfathomable. Although they are embedded within different streams of data and code, our collective imagination still does not register the precarity of some bodies. At a time when artificial intelligence claims to push the limits of our creative powers by creating historical scenes or impossible fantasies, I entered the poem by Warsan Shire quoted at the beginning of this chapter into three popular AI-based text-to-image generation platforms. The results in Figure 9.3 show irrelevant pictures, mostly of men, depicting some keywords of the poem. The messiness of the poetry – and of the poet’s feelings – does not translate into clear-cut images. The machine fails to grasp even the theme of the poem. The wounded Code/Body remains hidden. The skin bears the pain of these wounds without bleeding and without any algorithm capturing its suffering. The person is caught in a body that can be datafied, but whose emotions cannot be perceived.

Figure 9.3 AI-generated pictures created by Azadeh Akbari based on the poem “Conversations about Home (at the Deportation Centre)” by Warsan Shire, using three popular AI-based platforms. The twelve grayscale panels, arranged in a three-by-four grid, show trucks in sparse desert-like landscapes with solitary male figures, close-up portraits of men, a person lying on a hillside, a densely packed urban scene with a comic-style speech bubble reading “My dream went this way,” and three figures in light clothing viewed from behind.

This chapter does not aim to investigate the political, economic or social reasons or structures that construct the Code/Body. The biopolitical and the necropolitical, the Foucauldian corporeal space and its governmentality, have been the subject of many scholarly debates. How surveillance and datafication affect these spaces is also not a new matter of discussion. However, it seems persistently new how uncomfortable the body feels for some people. The more some lives are exposed to the precarity of intense datafication, the more some bodies are forced to give away their supposedly unscrupulous owners. The surveilling and constant measuring of the Code/Body ensures that these lives remain precarious. Some bodies, it seems, can be easily deleted, like a line of dead code.

Tell the sea after the news of my death
that I wasn’t that thirsty to fill my lungs with his water,
that I am only an extremely exhausted man
who suffered all his life long from poverty
who worked all day long
to pursue a dignified life for his children
I wanted to flee like all poor people
I went to you, sea
to pull me out of the darkness
to take me to a brighter trajectory
You misunderstood me, sea
I told you that I wasn’t thirsty
Mahmoud Bakir, a young father from Gaza, wrote this poem in February 2021 before drowning on his way to reach Europe.

Footnotes

6 Machine Readable Humanity What’s the Problem?

The work benefitted from comments on the early draft presented at the Being Human workshop and from outstanding editorial guidance from Val Steeves and Beate Roessler. We gratefully acknowledge the MacArthur Foundation for supporting this work through a grant to the Digital Life Initiative, Cornell Tech.

1 We owe a debt of gratitude to Diyi Yang who patiently walked Nissenbaum through this setup. She should not, however, be blamed for mix-ups and errors.

2 We note but are unable to give proper credit to the significant body of published work on proxies.

3 We use these terms in quotation marks to avoid a presumption that machines are grasping or interpreting in the ways humans do.

4 In the words of Henry Ford, “You can have any color car you want so long as it’s black” (Alizon et al. Reference Alizon, Shooter and Simpson2008).

5 If users a and b both rate movies x and y similarly, and user a also likes movie z, then Cinematch would recommend movie z to user b.

6 We have explained elsewhere why Privacy Policies are not satisfactory solutions to these issues (Barocas and Nissenbaum Reference Barocas and Nissenbaum2014).

7 We have no insider view to the company’s internal practices.

8 Taught in the so-called Western tradition.

9 Including, for example, Web search.

7 Carebots Gender, Empire, and the Capacity to Dissent

1 According to Bhabha: “… colonial mimicry is the desire for a reformed recognizable other, as a subject of a difference that is almost the same, but not quite. Which is to say, that the discourse of mimicry is constructed around an ambivalence; in order to be effective, mimicry must continually produce its slippage, its excess, its difference. The authority of that mode of colonial discourse that I have called mimicry is therefore stricken by an indeterminacy: mimicry emerges as the representation of a difference that is itself a process of disavowal. Mimicry is, thus the sign of a double articulation; a complex strategy of reform, regulation and discipline, which appropriates the other as it visualizes power. Mimicry is also the sign of the inappropriate, however, a difference or recalcitrance which coheres the dominant strategic function of colonial power, intensifies surveillance, and poses an immanent threat to both ‘normalized’ knowledges and disciplinary powers” (Bhabha Reference Bhabha1994, 86).

2 The concept of subjectivity is related to the broader one of culture. Culture in current debates is considered a historically contingent repertoire that encompasses symbols, codes, values, systems of classification, and forms of perception as well as their related practices (Crane Reference Crane1994; Alexander and Seidman Reference Alexander and Seidman1990). Culture constitutes subjectivities and articulates the practices of social subjects and collectivities. The fundamental implication of a cultural analysis is that meanings are produced or constructed and not merely discovered “out there” in an essentialist or empirical sense (Hall Reference Hall1997). Both what was previously considered universal or natural are no longer viewed as essential facts of nature or positivist truths, but rather reveal themselves as social constructions and as part of specifically situated historical subjectivities.

3 My interest in this article is in humanoid adult-like robots both in appearance and emergent forms of consciousness/sentience in contrast to non-humanoid sociable robots that lack consciousness/sentience such as Paro (pet seal), Furby (hamster or owl), and AIBOs.

4 For a more extensive discussion of the relationship between visual culture, gender, race and technology, see Georas (Reference Georas and Marron2021).

5 Thus, Petersen concludes that ET may not be human, but he is a person. And the same applies to robots.

6 This phrase is from a glass coaster that pokes fun at 1950s ideals of feminine domesticity.

8 Networked Communities and the Algorithmic Other

3 The data was originally collected as part of the eQuality Project, a multi-year partnership of researchers, educators, policymakers, youth workers and youth funded by the Social Sciences and Humanities Research Council of Canada. For more information, see equalityproject.ca. The moment in time is also instructive, as it marks the shift away from early reports of enthusiasm for online self-exploration and connection (Environics 2000; Steeves Reference Steeves2005) to a more cautious view of online community as fraught with reputational risks (Bailey and Steeves Reference Bailey and Steeves2015; Steeves Reference Steeves2012) and therefore something that is safer to watch than to participate in (Steeves et al. Reference Steeves, McAleese and Brisson-Boivin2020).

4 We also suggested an alternative in case they were uncomfortable sharing a particular photo with us. In that case, they could submit a description of the photo instead. None of the participants opted for this alternative.

5 After collecting the photos, we conducted individual interviews between 60 and 90 minutes in length, using a semi-structured interview guide to explore their photo choices. Interviews were transcribed and subjected to a thematic qualitative analysis. The research protocols were approved by the research ethics boards at the University of Ottawa, the University of Toronto, Western University and George Mason University. For the original report, see Johnson et al. (Reference Johnson, Steeves, Shade and Foran2017).

9 The Birth of Code/Body

1 From the poem “Conversations about Home (at the Deportation Centre)” by Warsan Shire.

2 Excerpt from group discussion at later-evacuated L’Autre Caserne community in Brussels.

3 The combination Code/Body was first used by Suneel Jethani (Reference Jethani2020) in their paper on self-tracking and mediating the body. That paper uses a similar notion of Code/Body or the coded body to represent the hybrid or networked body. However, my chapter’s theoretical perspective differentiates between Code/Body and the coded body and takes the concept of Code/Body beyond self-quantification. This text is inspired by my lecture-performance at PACT Zollverein Performing Arts Theatre in Essen, Germany in 2023.

4 Table 9.1 was produced in 2022 in collaboration with Christopher Husemann, PhD student in political geography, University of Münster, and was later updated by the author.

References


Adler, Daniel A., Wang, Fei, Mohr, David C., Estrin, Deborah, Livesey, Cecilia, and Choudhury, Tanzeem. “A Call for Open Data to Develop Mental Health Digital Biomarkers.” BJPsych Open 8, no. 2 (2022): e58. https://doi.org/10.1192/bjo.2022.28.
AdNauseam. 2024. Adnauseam.io.
Agre, Philip E. “Surveillance and Capture: Two Models of Privacy.” The Information Society 10, no. 2 (1994): 101–127.
Alizon, Fabrice, Shooter, Steven B., and Simpson, Timothy W. “Henry Ford and the Model T: Lessons for Product Platforming and Mass Customization.” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference 43291 (2008): 59–66.
Apple. “About Face ID Advanced Technology,” 2024. https://support.apple.com/en-us/HT208108.
Arnall, Timo. “Exploring ‘Immaterials’: Mediating Design’s Invisible Materials.” International Journal of Design 8, no. 2 (2013): 101–117.
Auerbach, David. “The Stupidity of Computers.” N+1, Machine Politics no. 13 (2012). www.nplusonemag.com/issue-13/essays/stupidity-of-computers/.
Barocas, Solon, and Nissenbaum, Helen. “Big Data’s End Run around Anonymity and Consent.” Privacy, Big Data, and the Public Good: Frameworks for Engagement 1 (2014): 44–75.
Biddle, Gibson. “A Brief History of Netflix Personalization.” Medium, June 1, 2021. https://gibsonbiddle.medium.com/a-brief-history-of-netflix-personalization-1f2debf010a1.
Bowker, Geoffrey, and Star, Susan Leigh. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press, 2000.
Clarke, Roger. “Biometrics and Privacy.” 2001. www.rogerclarke.com/DV/biometrics.html.
Coravos, Andrea, Khozin, Sean, and Mandl, Kenneth D. “Author Correction: Developing and Adopting Safe and Effective Digital Biomarkers to Improve Patient Outcomes.” npj Digital Medicine 2 (2019): 15. https://doi.org/10.1038/s41746-019-0090-4.
Daniore, Paola, Nittas, Vasileios, Haag, Christina, Bernard, Jürgen, Gonzenbach, Roman, and von Wyl, Viktor. “From Wearable Sensor Data to Digital Biomarker Development: Ten Lessons Learned and a Framework Proposal.” npj Digital Medicine 7, no. 1 (2024): 161.
Duan, Yiqun, Zhou, Jinzhao, Wang, Zhen, Wang, Yu-Kai, and Lin, Chin-Teng. “Dewave: Discrete EEG Waves Encoding for Brain Dynamics to Text Translation.” arXiv preprint arXiv:2309.14030 (2023).
Esteva, Andre, Kuprel, Brett, Novoa, Roberto A., Ko, Justin, Swetter, Susan M., Blau, Helen M., and Thrun, Sebastian. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542, no. 7639 (February 2, 2017): 115–118. https://doi.org/10.1038/nature21056.
Farahany, Nita A. The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. New York: St. Martin’s Press, 2023.
Flaten, Hania K., St Claire, Chelsea, Schlager, Emma, Dunnick, Cory A., and Dellavalle, Robert P. “Growth of Mobile Applications in Dermatology: 2017 Update.” Dermatology Online Journal 24, no. 2 (2018). https://doi.org/10.5070/D3242038180.
Fleckenstein, W. O. “Development of the Touch-Tone® Telephone.” Research Management 13, no. 1 (1970): 13–25.
Fortune Business Insights. “Facial Recognition Market Rising at a CAGR of 14.8% to Reach USD 12.92 Billion by 2027.” 2022. www.globenewswire.com/news-release/2022/02/08/2380458/0/en/Facial-Recognition-Market-Rising-at-a-CAGR-of-14-8-to-Reach-USD-12-92-Billion-by-2027.html.
Harari, Gabriella M., and Gosling, Samuel D. “Understanding Behaviours in Context Using Mobile Sensing.” Nature Reviews Psychology 2, no. 12 (2023): 767–779. https://doi.org/10.1038/s44159-023-00235-3.
Harwell, Drew. “Facial Recognition Firm Clearview AI Tells Investors It’s Seeking Massive Expansion beyond Law Enforcement.” The Washington Post, February 16, 2022. www.washingtonpost.com/technology/2022/02/16/clearview-expansion-facial-recognition/.
Heidegger, Martin. The Question Concerning Technology, translated by William Lovitt. New York: Harper & Row, 1977. Originally published in German as Die Frage nach der Technik, 1954.
Holt, Jennifer, and Palm, Michael. “More Than a Number: The Telephone and the History of Digital Identification.” European Journal of Cultural Studies 24, no. 4 (2021): 916–934.
Howe, Daniel, and Nissenbaum, Helen. “Engineering Privacy and Protest: A Case Study of AdNauseam.” International Workshop on Privacy Engineering, San Jose, CA, May 25, 2017.
IBM. “IBM Product Announcement 7770.” (1964). https://ed-thelen.org/comp-hist/IBM-ProdAnn/7770.pdf.
Introna, Lucas D., and Nissenbaum, Helen. “Shaping the Web: Why the Politics of Search Engines Matters.” The Information Society 16, no. 3 (2000): 169–185.
Ishii, Hiroshi, and Ullmer, Brygg. “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms.” Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, March 22–27, 1997.
Kant, Immanuel. Grounding for the Metaphysics of Morals, translated by James W. Ellington. Cambridge, MA: Hackett Publishing, 1993.
Lau, Pin Lean. “Facial Recognition in Schools: Here Are the Risks to Children.” The Conversation, October 27, 2021. https://theconversation.com/facial-recognition-in-schools-here-are-the-risks-to-children-170341.
Mackenzie, Catriona, and Stoljar, Natalie, eds. Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press, 2000.
“Netflix Recommendations: Beyond the 5 Stars (Part 1).” Medium, April 6, 2012. https://netflixtechblog.com/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429.
Nielsen, Jakob, and Loranger, Hoa. Prioritizing Web Usability. London: Pearson Education, 2006.
Nissenbaum, Helen. “Contextual Integrity Up and Down the Data Food Chain.” Theoretical Inquiries in Law 20, no. 1 (2019): 221–256.
Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Redwood City, CA: Stanford University Press, 2009.
Nissenbaum, Helen, and Howe, Daniel. “TrackMeNot: Resisting Surveillance in Web Search.” In Lessons from the Identity Trail: Anonymity, Privacy, and Identity in a Networked Society, edited by Kerr, I., Lucock, C., and Steeves, V., 417–440. Oxford: Oxford University Press, 2009.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.
Norman, Donald. The Design of Everyday Things: Revised and Expanded Edition. New York: Basic Books, 2013.
Plummer, Libby. “This Is How Netflix’s Top Secret Recommendation System Works.” Wired, August 22, 2017. www.wired.co.uk/article/how-do-netflixs-algorithms-work-machine-learning-helps-to-predict-what-viewers-will-like.
Rahman, Was. “The Netflix Prize: How Even AI Leaders Can Trip Up.” Medium, January 11, 2020. https://towardsdatascience.com/the-netflix-prize-how-even-ai-leaders-can-trip-up-5c1f38e95c9f.
Roessler, Beate. Autonomy: An Essay on the Life Well-Lived. Hoboken, NJ: John Wiley & Sons, 2021.
Scott, James C. Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press, 2020.
Shneiderman, Ben. “Creativity Support Tools: A Grand Challenge for HCI Researchers.” In Engineering the User Interface, edited by M. Redondo et al., 1–9. London: Springer, 2009.
Singh, Rita. Profiling Humans from Their Voice. London: Springer, 2019.
Singletary, Michelle. “Despite Privacy Concerns, ID.me Nearly Doubled the Number of People Able to Create an IRS Account.” The Washington Post, February 25, 2022. www.washingtonpost.com/business/2022/02/25/irs-idme-account-success-rate/.
Smith, Brad, and Browne, Carol Ann. Tools and Weapons: The Promise and the Peril of the Digital Age. New York: Penguin, 2021.
Spool, Jared M. “What Makes a Design Seem ‘Intuitive’?” UX Articles by Center Centre (blog), January 10, 2005. https://articles.centercentre.com/design_intuitive/.
Stark, Luke. “Facial Recognition Is the Plutonium of AI.” XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (2019): 50–55.
Tae, Ki Hyun, and Whang, Steven Euijong. “Slice Tuner: A Selective Data Acquisition Framework for Accurate and Fair Machine Learning Models.” Proceedings of the 2021 International Conference on Management of Data, Virtual Event, China, June 20–25, 2021, 1771–1783.
Tang, Jerry, LeBel, Amanda, Jain, Shailee, and Huth, Alexander G. “Semantic Reconstruction of Continuous Language from Non-invasive Brain Recordings.” Nature Neuroscience 26, no. 5 (2023): 858–866.
Towey, Hannah. “The Retail Stores You Probably Shop at that Use Facial-Recognition Technology.” Business Insider, July 19, 2021. www.businessinsider.com/retail-stores-that-use-facial-recognition-technology-macys-2021-7.
TrackMeNot. 2024. TrackMeNot.com.
Turow, Joseph. “Shhhh, They’re Listening – Inside the Coming Voice-Profiling Revolution.” The Conversation, April 28, 2021a. http://theconversation.com/shhhh-theyre-listening-inside-the-coming-voice-profiling-revolution-158921.
Turow, Joseph. The Voice Catchers: How Marketers Listen in to Exploit Your Feelings, Your Privacy, and Your Wallet. New Haven, CT: Yale University Press, 2021b.
Yu, Harlan, and Robinson, David. “The New Ambiguity of ‘Open Government’.” UCLA Law Review Discourse 59 (2012): 180–208.
Zanger-Tishler, Michael, Nyarko, Julian, and Goel, Sharad. “Risk Scores, Label Bias, and Everything but the Kitchen Sink.” Science Advances 10, no. 13 (2024): eadi8411. https://doi.org/10.1126/sciadv.adi8411.CrossRefGoogle Scholar

References

Alexander, Jeffrey C., and Seidman, Steven, eds. Culture and Society: Contemporary Debates. Cambridge: Cambridge University Press, 1990.
Balsamo, Anne. Technologies of the Gendered Body: Reading Cyborg Women. Durham, NC: Duke University Press, 1997.
Barnet, Belinda. “Pack-Rat or Amnesiac? Memory, the Archive and the Internet.” Continuum: Journal of Media & Cultural Studies 15, no. 2 (2001): 217–231.
Bhabha, Homi K. The Location of Culture. New York: Routledge, 1994.
Bourdieu, Pierre, and Wacquant, Loïc. “Symbolic Violence.” In Violence in War and Peace: An Anthology, edited by Scheper-Hughes, Nancy and Bourgois, Philippe, 272–275. Oxford: Blackwell, 2004.
Butler, Judith. Bodies That Matter: On the Discursive Limits of “Sex”. New York: Routledge, 1993.
Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge, 1990.
Crane, Diana, ed. The Sociology of Culture: Emerging Theoretical Perspectives. Oxford: Blackwell, 1994.
Georas, Chloé S. “From Sexual Explicitness to Invisibility in Resistance Art: Coloniality, Rape Culture and Technology.” In Misogyny across Global Media, edited by Marron, Maria B., 23–41. Lanham, MD: Lexington Books, 2021.
Hall, Stuart, ed. Representation: Cultural Representations and Signifying Practices. Glasgow: Sage, 1997.
IFR. “World Robotics 2021: Service Robots Report Released.” IFR International Federation of Robotics, accessed February 15, 2022b. https://ifr.org/ifr-press-releases/news/service-robots-hit-double-digit-growth-worldwide.
Jones, Amelia, ed. The Feminism and Visual Culture Reader, 2nd ed. New York: Routledge, 2010.
Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” In Shaping Technology/Building Society: Studies in Sociotechnical Change, edited by Bijker, Wiebe E. and Law, John, 225–259. Cambridge, MA: MIT Press, 1992.
Levy, David. “The Ethical Treatment of Artificially Conscious Robots.” International Journal of Social Robotics 1 (2009): 209–216. https://doi.org/10.1007/s12369-009-0022-6.
Mirzoeff, Nicholas. “The Subject of Visual Culture.” In Visual Culture Reader, 2nd ed., edited by Mirzoeff, Nicholas, 3–23. New York: Routledge, 2002.
Mordor Intelligence. “Service Robotics Market | 2024–29 | Industry Share, Size, Growth: Mordor Intelligence.” Mordor Intelligence, accessed April 3, 2024. www.mordorintelligence.com/industry-reports/service-robotics-market.
Mori, Masahiro. “The Uncanny Valley,” translated by Karl F. MacDorman and Norri Kageki. IEEE Robotics & Automation Magazine 19, no. 2 (2012 [1970]): 98–100. www.researchgate.net/publication/254060168_The_Uncanny_Valley_From_the_Field.
Petersen, Stephen. “Designing People to Serve.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Lin, Patrick, Abney, Keith, and Bekey, George A., Kindle ed., 283–298. Cambridge, MA: MIT Press, 2011.
Petersen, Stephen. “The Ethics of Robot Servitude.” Journal of Experimental & Theoretical Artificial Intelligence 19, no. 1 (March 2007): 43–54. https://philarchive.org/archive/PETTEO.
Robertson, Jennifer. “Gendering Humanoid Robots: Robo-Sexism in Japan.” Body & Society 16, no. 2 (2010): 1–36. https://doi.org/10.1177/1357034X1036476.
Rogoff, Irit. “Studying Visual Culture.” In Visual Culture Reader, 2nd ed., edited by Mirzoeff, Nicholas, 24–36. New York: Routledge, 2002.
Torrance, Steve. “Ethics and Consciousness in Artificial Agents.” Artificial Intelligence & Society 22, no. 4 (2008): 495–521. https://philpapers.org/rec/TOREAC-2.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.
Turkle, Sherry. “Interview: MIT’s Dr. Sherry Turkle Discusses Robotic Companionship.” National Public Radio, May 11, 2001. www.proquest.com/docview/190010111?&sourcetype=Other%20Sources.
Turkle, Sherry. “Relational Artifacts with Children and Elders: The Complexities of Cybercompanionship.” Connection Science 18, no. 4 (2006): 347–361. https://sherryturkle.mit.edu/sites/default/files/images/Relational%20Artifacts.pdf.
Vallor, Shannon. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century.” Philosophy & Technology 24 (2011): 251–268. https://link.springer.com/article/10.1007/s13347-011-0015-x.
Walker, Mark. “Mary Poppins 3000s of the World Unite: A Moral Paradox in the Creation of Artificial Intelligence.” Institute for Ethics & Emerging Technologies, 2007. www.researchgate.net/publication/281477782_A_moral_paradox_in_the_creation_of_artificial_intelligence_Mary_popping_3000s_of_the_world_unite.
Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of Technology. Chicago: The University of Chicago Press, 1986.

References

Aboulafia, Mitchell. “George Herbert Mead and the Unity of the Self.” European Journal of Pragmatism and American Philosophy VIII, no. 1 (2016). https://journals.openedition.org/ejpap/465.
Andrejevic, Mark. iSpy: Surveillance and Power in the Interactive Era. Lawrence: University Press of Kansas, 2009.
Bailey, Jane, and Steeves, Valerie, eds. eGirls, eCitizens. Ottawa: University of Ottawa Press, 2015.
Benhabib, Seyla. Situating the Self: Gender, Community, and Postmodernism in Contemporary Ethics. New York: Routledge, 1992.
Cohen, Julie E. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. New Haven, CT: Yale University Press, 2012.
Daniel, Joshua. “Richard Niebuhr’s Reading of George Herbert Mead: Correcting, Completing, and Looking Ahead.” Journal of Religious Ethics 44, no. 1 (2016): 92–115.
Eichenhofer, Johannes, and Gusy, Christoph. “Courts, Privacy and Data Protection in Germany: Informational Self-determination in the Digital Environment.” In Courts, Privacy and Data Protection in the Digital Environment, edited by Brkan, Maja and Psychogiopoulou, Evangelia, 101–119. Cheltenham: Edward Elgar Publishing, 2017.
Ellis, David, Oldridge, Rachel, and Vasconcelos, Ana. “Community and Virtual Community.” Annual Review of Information Science and Technology 38, no. 1 (2004): 145–186.
Environics. Young Canadians in a Wired World, Phase 1: Focus Groups with Parents and Children. Ottawa: MediaSmarts, 2000.
Fisher-Quann, Rayne. “Standing on the Shoulders of Complex Female Characters: Am I in my Fleabag Era or Is my Fleabag Era in Me?” Internet Princess, February 6, 2022. https://internetprincess.substack.com/p/standing-on-the-shoulders-of-complex.
Graham, Mark. “Geography/Internet: Ethereal Alternate Dimensions of Cyberspace or Grounded Augmented Realities?” The Geographical Journal 179, no. 2 (2013): 177–182.
Haythornthwaite, Caroline. “Social Networks and Online Community.” In Oxford Handbook of Internet Psychology, edited by Joinson, Adam, McKenna, Katelyn, Postmes, Tom, and Reips, Ulf-Dietrich, 121–134. New York: Oxford University Press, 2007.
Hildebrandt, Mireille, and Backhouse, James, eds. D7.2: Descriptive Analysis and Inventory of Profiling Practices. European Union: FIDIS Network of Excellence, 2005.
Johnson, Matthew, Steeves, Valerie, Shade, Leslie, and Foran, Grace. To Share or Not to Share: How Teens Make Privacy Decisions about Photos on Social Media. Ottawa: MediaSmarts, 2017.
Koopman, Colin. “The History and Critique of Modernity: Dewey with Foucault against Weber.” In John Dewey and Continental Philosophy, edited by Fairfield, Paul, 194–218. Carbondale: Southern Illinois University Press, 2010.
Koskela, Hille. “The Other Side of Surveillance: Webcams, Power and Agency.” In Theorizing Surveillance: The Panopticon and Beyond, edited by Lyon, David, 163–181. London: Routledge, 2006.
Lanzing, Marjolein. “‘Strongly Recommended’: Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies.” Philosophy & Technology 32 (2019): 549–568.
Liu, Chen. “Imag(in)ing Place: Reframing Photography Practices and Affective Social Media Platforms.” Geoforum 129 (2022): 172–180.
Los, Maria. “Looking into the Future: Surveillance, Globalization and the Totalitarian Potential.” In Theorizing Surveillance: The Panopticon and Beyond, edited by Lyon, David, 69–94. London: Routledge, 2006.
Mackenzie, Adrian. “The Production of Prediction: What Does Machine Learning Want?” European Journal of Cultural Studies 18, no. 4–5 (2015): 429–445.
Mackenzie, Catriona. “Relational Autonomy: State of the Art Debate.” In Spinoza and Relational Autonomy: Being with Others, edited by Armstrong, Aurelia, Green, Keith, and Sangiacomo, Andrea, 10–32. Edinburgh: Edinburgh University Press, 2019.
Martin, Jack. “Perspectival Selves in Interaction with Others: Re-reading G.H. Mead’s Social Psychology.” Journal for the Theory of Social Behaviour 35, no. 3 (2005): 231–253.
McQuillan, Dan. “Algorithmic Paranoia and the Convivial Alternative.” Big Data & Society 3, no. 2 (2016): 1–12.
Mead, George Herbert. Mind, Self, and Society from the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press, 1934.
Mead, George Herbert. The Philosophy of the Present, edited by Murphy, Arthur E. LaSalle, IL: Open Court, 1932.
Nafisi, Azar. Reading Lolita in Tehran. New York: Random House, 2003.
Pridmore, Jason, and Wang, Yijing. “Prompting Spiritual Practices through Christian Faith Applications: Self-Paternalism and the Surveillance of the Soul.” Surveillance & Society 16, no. 4 (2018): 502–516.
Prus, Robert. “Generic Social Processes and the Study of Human Experiences.” In Symbolic Interaction: An Introduction to Social Psychology, edited by Herman, Nancy J. and Reynolds, Larry T., 436–458. Maryland: Rowman & Littlefield, 1994.
Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000.
Roessler, Beate. Autonomy: An Essay on the Life Well-Lived. Cambridge: Polity Press, 2021.
Steeves, Valerie. “A Dialogic Analysis of Hello Barbie’s Conversations with Children.” Big Data & Society 7, no. 1 (2020): 1–12.
Steeves, Valerie. “Now You See Me: Privacy, Technology and Autonomy in the Digital Age.” In Current Issues and Controversies in Human Rights, edited by DiGiacomo, Gordon, 461–482. Toronto: University of Toronto Press, 2016.
Steeves, Valerie. “Privacy, Sociality and the Failure of Regulation: Lessons Learned from Young Canadians’ Online Experiences.” In Social Dimensions of Privacy: Interdisciplinary Perspectives, edited by Roessler, Beate and Mokrosinska, Dorota, 244–260. Cambridge: Cambridge University Press, 2015.
Steeves, Valerie. Young Canadians in a Wired World, Phase II: Trends and Recommendations. Ottawa: MediaSmarts, 2005.
Steeves, Valerie. Young Canadians in a Wired World, Phase III: Talking to Youth and Parents about Life Online. Ottawa: MediaSmarts, 2012.
Steeves, Valerie, McAleese, Samantha, and Brisson-Boivin, Kara. Young Canadians in a Wireless World, Phase IV: Talking to Youth and Parents about Online Resiliency. Ottawa: MediaSmarts, 2020.
Thatcher, Jim, O’Sullivan, David, and Mahmoudi, Dillon. “Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data.” Environment and Planning D: Society and Space 34, no. 6 (2016): 990–1006.
Zuboff, Shoshana. “Surveillance Capitalism and the Challenge of Collective Action.” New Labor Forum 28, no. 1 (2019): 10–29.

References

Agence France-Presse. “Smile-to-Pay: Chinese Shoppers Turn to Facial Payment Technology.” The Guardian, September 4, 2019. www.theguardian.com/world/2019/sep/04/smile-to-pay-chinese-shoppers-turn-to-facial-payment-technology.
Akbari, Azadeh. “Iran: Digital Spaces of Protest and Control.” European Center for Not-for-Profit Law, 2023. https://ecnl.org/publications/iran-digital-spaces-protest-and-control.
Asylum Information Database. “Identification: Malta.” Asylum Information Database | European Council on Refugees and Exiles (blog), April 27, 2023. https://asylumineurope.org/reports/country/malta/asylum-procedure/guarantees-vulnerable-groups-asylum-seekers/identification/.
Balibar, Étienne. Politics and the Other Scene, translated by Christine Jones, James Swenson, and Chris Turner. New York: Verso, 2012.
Ball, Kirstie, Di Domenico, Maria Laura, and Nunan, Daniel. “Big Data Surveillance and the Body-Subject.” Body & Society 22, no. 2 (2016): 58–81. https://doi.org/10.1177/1357034X15624973.
Bennett, Colin J., Haggerty, Kevin D., Lyon, David, and Steeves, Valerie, eds. Transparent Lives: Surveillance in Canada. Athabasca, Alberta: Athabasca University Press, 2014.
Botsman, Rachel. “Big Data Meets Big Brother as China Moves to Rate Its Citizens.” Wired UK, October 21, 2017. www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion.
Epstein, Charlotte. “Surveillance, Privacy and the Making of the Modern Subject: Habeas What Kind of Corpus?” Body & Society 22, no. 2 (2016): 28–57. https://doi.org/10.1177/1357034X15625339.
French, Martin, and Smith, Gavin J. D. “Surveillance and Embodiment: Dispositifs of Capture.” Body & Society 22, no. 2 (2016): 3–27. https://doi.org/10.1177/1357034X16643169.
Gandy, Oscar H., Jr. “The Surveillance Society: Information Technology and Bureaucratic Social Control.” Journal of Communication 39, no. 3 (1989): 61–76. https://doi.org/10.1111/j.1460-2466.1989.tb01040.x.
Grant, Harriet, and Domokos, John. “Dublin Regulation Leaves Asylum Seekers with Their Fingers Burnt.” The Guardian, October 7, 2011. www.theguardian.com/world/2011/oct/07/dublin-regulation-european-asylum-seekers.
Gregson, N., Crang, M., Ahamed, F., Akhter, N., and Ferdous, R. “Following Things of Rubbish Value: End-of-Life Ships, ‘Chock-Chocky’ Furniture and the Bangladeshi Middle Class Consumer.” Geoforum 41, no. 6 (2010): 846–854. https://doi.org/10.1016/j.geoforum.2010.05.007.
Haggerty, Kevin D., and Ericson, Richard V. “The Surveillant Assemblage.” The British Journal of Sociology 51, no. 4 (2000): 605–622. https://doi.org/10.1080/00071310020015280.
Haraway, Donna J. “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism for the 1980s.” Socialist Review 15, no. 2 (1985): 65–107.
Hayden, Sally. My Fourth Time, We Drowned. New York: Melville House, 2022.
Human Rights Watch. “New Evidence That Biometric Data Systems Imperil Afghans.” Human Rights Watch, March 30, 2022. www.hrw.org/news/2022/03/30/new-evidence-biometric-data-systems-imperil-afghans.
Jethani, Suneel. “Mediating the Body: Technology, Politics and Epistemologies of Self.” Communication, Politics & Culture 47, no. 3 (2020): 34–43. https://doi.org/10.3316/informit.113702521033267.
Kitchin, Rob, and Dodge, Martin. Code/Space: Software and Everyday Life. Cambridge, MA: The MIT Press, 2011. https://doi.org/10.7551/mitpress/9780262042482.001.0001.
L. “Figuring a Women’s Revolution: Bodies Interacting with Their Images.” Jadaliyya, October 5, 2022. www.jadaliyya.com/Details/44479.
Lyon, David. Surveillance Society: Monitoring in Everyday Life. Buckingham: Open University Press, 2001.
Marx, Gary T. Undercover: Police Surveillance in America. Oakland: University of California Press, 1989.
Mattern, Shannon. “Instrumental City: The View from Hudson Yards, circa 2019.” Places Journal, April 2016. https://doi.org/10.22269/160426.
Mol, Annemarie. “Ontological Politics. A Word and Some Questions.” In Actor Network Theory and After, edited by Law, John and Hassard, John, 74–89. Oxford: Blackwell Publishing, 1999.
Molnar, Petra. “Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up.” EDRI, 2020. https://edri.org/wp-content/uploads/2020/11/Technological-Testing-Grounds.pdf.
Müller, Martin. “Assemblages and Actor-Networks: Rethinking Socio-Material Power, Politics and Space.” Geography Compass 9, no. 1 (2015): 27–41. https://doi.org/10.1111/gec3.12192.
Popoviciu, Andrei. “‘They Can See Us in the Dark’: Migrants Grapple with Hi-Tech Fortress EU.” The Guardian, March 26, 2021. www.theguardian.com/global-development/2021/mar/26/eu-borders-migrants-hitech-surveillance-asylum-seekers.
Rättsmedicinalverket. “Medical Age Assessment.” Rättsmedicinalverket, October 18, 2022. www.rmv.se/medical-age-assessment/.
Shire, Warsan. Teaching My Mother How to Give Birth. UK: Mouthmark series, 2011.
UNHCR. “EYECLOUD© Enhancing the Delivery of Refugee Assistance.” UNHCR Operational Data Portal, 2019. https://data2.unhcr.org/en/documents/details/68208.
Van der Ploeg, Irma. “The Body as Data in the Age of Information.” In Routledge Handbook of Surveillance Studies, edited by Ball, Kirstie, Haggerty, Kevin, and Lyon, David, 176–184. London: Routledge, 2012.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs, 2019.
Table 9.1 Data categories stored in European immigration data banks.

Figure 9.1 Möbius strip on Immigration Courts’ verdict. © Azadeh Akbari, 2023.

Figure 9.2 Blocking CCTV cameras in public transportation with menstruation pads (Akbari 2023, 24).

Figure 9.3 AI-generated pictures created by Azadeh Akbari based on the poem “Conversations about Home (at the Deportation Centre)” by Warsan Shire, using three popular AI-based platforms.

