
8 - Networked Communities and the Algorithmic Other

from Part II - Living the Digital Life

Published online by Cambridge University Press:  11 November 2025

Beate Roessler (University of Amsterdam)
Valerie Steeves (University of Ottawa)

Summary

Steeves revisits empirical data about young people’s experiences on social media to provide a snapshot of what happens to the interaction between self and others when community is organized algorithmically. She then uses Meadian notions of sociality to offer a theoretical framing that can explain the meaning of self, other, and community found in the data. She argues that young people interact with algorithms as if they were another social actor, and reflexively examine their own performances from the perspective of the algorithm as a specific form of generalized other. In doing so, they pay less attention to the other people they encounter in online spaces and instead orient themselves to action by emulating the values and goals of this algorithmic other. Their performances can accordingly be read as a concretization of these values and goals, making visible the agenda of those who mobilize the algorithm for their own purposes.

Information

Type: Chapter
In: Being Human in the Digital World: Interdisciplinary Perspectives, pp. 116–128
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/

8 Networked Communities and the Algorithmic Other

To have a whole life, one must have the possibility of publicly shaping and expressing private worlds, dreams, thoughts, desires, of constantly having access to a dialogue between the public and private worlds. How else do we know that we have existed, felt, desired, hated, feared?

(Nafisi)

In this chapter, I use qualitative research findings to explore how algorithmically driven platforms impact the experience of being human. I take as my starting point Meadian scholar Benhabib’s reminder that, “the subject of reason is a human infant whose body can only be kept alive, whose needs can only be satisfied, and whose self can only develop within the human community into which it is born. The human infant becomes a ‘self,’ a being capable of speech and action, only by learning to interact in a human community” (Benhabib 1992, 5). From this perspective, living in community and participating in communication with others is a central part of the human experience; it gives shape to Nafisi’s dance between public and private worlds and enables an agential path by which we come to know ourselves and forge deep bonds in community with others.

Certainly, early commentators celebrated the emancipatory potential of new online communities as they first emerged in the 1990s as spaces for both self-expression and community building (Ellis et al. 2004). Often designated communities of shared interest rather than communities of shared geography, they were expected to strengthen social cohesion by enabling people to explore their own interests and deepen their connection with others in new and exciting ways (see, e.g., Putnam 2000). Critics, on the other hand, worried that networked technology would further isolate people from each other and weaken community ties (Ellis et al. 2004). The advent of social media, those highly commercialized community spaces with all their hype of self-expression, sharing and connection, simply amplified the debate (Haythornthwaite 2007).

For my part, I am interested in what happens to the human experience when community increasingly organizes itself algorithmically. What do we know about the ways in which people manage the interaction between self and others in these communities? What kind of language can we use to come to a normative understanding of what it means to be human in these conditions? How do algorithms influence both our interactions and this normative understanding?

To date, platform owners have encouraged the use of the language of control to describe life in the online community, calling upon individuals to make their own choices about withholding or disclosing personal information to others so they can enjoy what the German Federal Constitutional Court, in 1983, called informational self-determination (Eichenhofer and Gusy 2017). From this perspective, the human being interacting with others in community online is conceptualized apart from any relationship with others, and their agency is exercised by a binary control: zero, they withhold and stay separate and apart from others; one, they disclose and enjoy the fruits of publicity. As I have argued elsewhere (Steeves 2015, 2016), this perspective has consistently failed to capture the complicated and constrained interactions described by people living in these environments.

More socially grounded critiques of this understanding of being human online have underscored the anaemic protection that individual control provides, largely by displacing the autonomous individual with a more social understanding of subjectivity (see, e.g., Cohen 2012; Koskela 2006; Liu 2022; Mackenzie 2015). This approach is interesting precisely because it can account for moments of human agency exercised in the context of a variety of resistive behaviours. For example, 11- and 12-year-olds often report that they enjoy asking Siri and Alexa nonsensical questions that the machine cannot answer, as a way of asserting their mastery over the technology. Like the Rickroll meme (note 1) and the Grown Women Ask Hello Barbie Questions About Feminism video (note 2), this is a playful way for people to deconstruct the ways in which they are inserted into technical systems and collectively resist the social roles they are offered by the platforms they use.

However, as Los (2006) notes, resistance is a poor substitute for agential action, precisely because current platforms are “intrinsically bound to social, political, and economic interests” (Thatcher et al. 2016, 993) that may overpower the resister by co-opting their resistance and repackaging it to fit within the features that serve those interests. In this context, observers too often interpret the networked human either as over-determined by the internalized norms of the platform or as restricted to a form of apolitical transgression/resistance similar to the Rickroll meme and other examples. Either way, we are left with critique but no path forward.

The project of being human in the digital world accordingly requires a better set of metaphors (Graham 2013), a richer conceptualization that can capture the human experience within performances, identities and interactions shaped by algorithmic nudges. I suggest that Benhabib’s insight that we come to know ourselves and others by living in community is a productive starting point for developing such a lexicon, not least because the Meadian assumptions upon which it is based set the stage to reunite the search for human agency and the embrace of the social (Koopman 2010). It is also a useful way to extend the insights of relational autonomy scholars (Mackenzie 2019; Roessler 2021) to do what Pridmore and Wang (2018) call for in the context of digital life – to theorize human agency without severing it from our social bonds to others.

To help give this shape, I start my discussion by revisiting some data I collected from young Canadians in 2017 (note 3) about their experiences on algorithmically driven platforms. These data were first collected to see how young people navigate their online privacy by making decisions about what photos of themselves to post; we reported that they described a complicated negotiation driven by the need to be seen but not to be seen too clearly, given the negative consequences of a failed performance (Johnson et al. 2017). However, the data also provide an interesting window into how young people make sense of their self-presentation and interactions with others in networked community spaces that are shaped by algorithms.

Accordingly, I conducted a secondary analysis of the data to explore these elements. I start this chapter by reviewing the findings of that analysis, focusing on the ways in which my participants responded to the algorithms that shaped their online experiences by projecting a self made up of a collage of images designed to attract algorithmic approval, as evidenced by their ability to trigger positive responses from a highly abstract, non-personalized online community. I then use Meadian notions of sociality to offer a theoretical framing that can explain the meaning of self, other and community found in the data. I argue that my participants interacted with the algorithm as if it were another social actor and reflexively examined their own performances from the perspective of the algorithm as a specific form of generalized other. In doing so, they paid less attention to the other people they encountered in online spaces and instead oriented themselves to action by emulating the values and goals of this algorithmic other. Their performances can accordingly be read as a concretization of these values and goals, making the agenda of those who mobilize the algorithm for their own purposes visible and therefore open to critique. I then use Mead’s notion of the social me and the indeterminate I to theorize the limited and constrained moments of agency in the data when my participants attempted – sometimes successfully, sometimes not – to resist the algorithmic logics that shape networked spaces.

8.1 What Self? What Other? What Community?

As noted, in 2017 we conducted qualitative research to get a better sense of young people’s experiences on social media. Our earlier work (Bailey and Steeves 2015) suggested that young people rely on a set of social norms to collaboratively manage both their identities and their social relationships in networked spaces and that they are especially concerned about the treatment of the photos they post of themselves and their friends. We wanted to know more about this, so we asked 18 teenagers between 13 and 16 years of age from diverse backgrounds, 4 of whom identified as boys and 14 of whom identified as girls, to keep a week-long diary of the photos they took. They then divided the photos into three categories:

  • Those photos they were comfortable sharing with lots of people;

  • Those photos they were comfortable sharing with a few people; and

  • Those photos they were not comfortable sharing with anyone (notes 4, 5).

The photos we collected through this process were largely what we expected to see – school events, group shots, food, lots of landscapes. But when we sat down and talked to our participants, the discussion was not at all what we expected. It quickly became clear that the decisions they were making about what photos to share with many people had very little to do with their personal interests or their friendships. Although they described the networked world as a place where they could connect with their community of family and friends, the decision-making process itself did not focus on what their friends and family would like to see or what they would like to show them of themselves. Instead, it focused on “followers”, an abstract and anonymous audience they assumed was paying attention to a particular platform. Because of this, they positioned themselves less as people exploring their own sense of identity in community and more as apprentice content curators responsible for feeding the right kind of content to that abstract audience.

The right kind of content was determined by a careful read of the algorithmic prompts they received from the platform. Part of this involved them doing the work of the platform (Andrejevic 2009); for example, they universally reported that they maintained Snapchat streaks by posting a photo a day, even when it was inconvenient, because it was what the site required of them. Interestingly, they did this most easily by posting a photo of nothing. For example, one participant was frequently awakened by an alert just before midnight to warn her that her streaks were about to end, so she would cover her camera lens with her hand, take a photo of literally nothing and post the photo as required. It was very clear that the posts were not to communicate anything about themselves or to connect with other people, but to satisfy the demands of the algorithmic prompt.

However, the bulk of their choices rested on a careful analysis of what they thought the audience for a particular platform would be interested in seeing. To be clear, this audience was explicitly not made up of friends or other teens online; it was an abstraction that was imbued with their sense of what the algorithm that organized content on the site was looking for. For example, they all agreed that a careful read of the platform and the ways content was algorithmically fed back to them on Instagram indicated that it was an “artsy” space that required “artsy” content. Because of that, if you had an Instagram page, you needed to appear artsy, even if you were not. Moreover, given the availability of Insta themes, it was important to coordinate the posts to be consistently artsy in a “unique” way, even though that uniqueness did not align with your own personal tastes or predilections.

From this perspective, the self-presentations that they offered to the site revealed very little of themselves aside from their fluency in reading the appropriate algorithmic cues. The digital self was accordingly a fabricated collage of images designed to attract algorithmic approval; in their words, “post worthy” content was made up of photos that said something “interesting” not from their own point of view but from the point of view of the abstract audience that would judge how well they had read the algorithmic prompts. One of the girls explained it this way:

Because VSCO is more artsy, for me, like, I know I post my cooler pictures over there. I thought this was a really cool picture [of the cast of the Harry Potter movies], and I thought maybe a lot of people would like it and like to see it, because a lot of people are fans of Harry Potter, obviously.

She then explained that the point was not to share her interest in Harry Potter with people she knew on or offline (in fact, she was not a fan); it was to identify a theme that would appeal to the algorithm so her content would be pushed up the search results page and attract likes from unknown others. Moreover, she felt that her choice was vindicated when two followers added the photo to their collections. In this context, a pre-existing audience had already made its preferences known and the online marketplace then amplified those preferences by algorithmically prioritizing content that conformed to them; accordingly, the most “interesting” self to portray was one that mirrored the preferences of that marketplace independent of whether or not those preferences aligned with the preferences of the poster.

This boundary or gap between their own preferences and the preferences they associated with their online self was intentionally and aggressively enforced. This was best illustrated when one of the participants was explaining why a photo she took of an empty white bookcase against a white wall could not be shared with anyone. She told me that she had originally planned on posting it to her Instagram page (her theme was “white” because monochromatic themes were “artsy”) but then she noticed a small object on the top shelf. She expanded the photo to see that it was an anime figurine. It was there because she was an ardent anime fan. However, she was distraught that she had almost inadvertently posted something that could, if the viewer expanded the photo, reveal her interest online. She eschewed that type of self-exposure because it could be misconstrued by the algorithm and then made public in ways that could open her up to judgement and humiliation. As another participant explained, photos of your actual interests and close relationships aren’t

… something that you throw outside there for the whole world to see. It’s kind of something that stays personally to you … when I have family photos I feel scared of posting them because I care about my family and I don’t want them to feel envied by other people. So, yeah … Cuz I don’t want – cuz I kinda – I really like my family. I really like my brother. I don’t want anyone making fun of my brother.

To avoid these harms, all our participants reported that they collaborated with friends to collectively curate each other’s online presences, paying special care to images. Specifically, no one posted photos of faces, unless they were part of a large group and taken from a distance. Even then, each photo would be perused to see if everyone “looked good” and, before posting, it would be vetted by everyone in the photo to make sure they were comfortable with it going online.

Interestingly, there were two prominent exceptions to these rules. The first occurred when they were publicly showing affection to friends in very specific contexts that were unambiguous to those who could see them. This included overtly breaching the rules on birthdays: publicly posting a “bad” photo of a friend’s face without permission, so long as it was tagged with birthday good wishes, was a way to demonstrate affection and friendship, akin to embarrassing them by decorating their lockers at school with balloons. The second exception involved interacting with branded commercial content. For example, one of the girls had taken a series of shots of herself at Starbucks showing her face with a Starbucks macchiato beside it. She was quite confident that this photo would be well received because Starbucks was a popular brand. Similarly, our participants were confident that photos and positive comments posted on fan sites would be well received because they were part of the online marketplace.

All other purposes – actually communicating with friends or organizing their schedules, for example – occurred in online spaces, such as texting or instant messaging apps, that were perceived to be more private. But even there, they restricted the bulk of their communications to sharing jokes or memes and reserved their most personal or intimate conversations for face-to-face interactions where they couldn’t be captured and processed in ways that were outside their control.

8.2 Understanding Algorithmic Community

The snapshot in Section 8.1 paints a vivid picture of a digital self that seeks to master the algorithmic cues embedded in networked spaces to self-consciously fabricate a collage of images that will attract approval from an abstracted, highly de-personalized community. From my research participants’ perspective, networked interaction is therefore not simply about the self expressing itself to others, as throughout this process personal preferences are carefully and meticulously hidden. Rather, it is about the construction of an online self that is “unique” in the sense that it is able to replicate the preferences of the online marketplace in particularly successful ways. Success is determined through feedback from an abstracted and anonymous group of others who view and judge the construction but, to attract the gaze of those others, content must first be algorithmically selected to populate the top of search results. To do this well, the preferences, experiences and thoughts that are unique to the poster must be hidden and kept from the algorithmic gaze, and the poster must post content that will both be prioritized by the algorithm and conform to the content cues embedded by the algorithm in the platform. This is a collaborative task; individuals carefully parse what online self they choose to present but also rely on friends and family to co-curate the image of the self, by helping hide the offline self from the algorithmic gaze and by posting counter content to repair any reputational harm if the online self fails to resonate with the preferences of the online marketplace (see also Bailey and Steeves 2015).

To better understand these emerging experiences of the online self, other and community, I suggest we revisit Mead’s understanding of self and community as co-constructed through inter-subjective dialogue. For Mead, an essential part of being human is the ability to anticipate how the other will respond to the self’s linguistic gestures, to see ourselves through the other’s eyes. This enables us to put ourselves in the position of the other and reflexively examine ourselves from the perspective of the community as a whole. He calls this community perspective the “generalized other” (Mead 1934; see also Aboulafia 2016; Prus 1994).

Martin (2005) argues that this ability to take the perspective of the other is a useful way to understand community because it calls upon us to pay attention to “our common existence as interpretive beings within intersubjective contexts” (232). Certainly, my participants can be understood as interpreters of the social cues they found embedded in networked spaces, exemplifying Martin’s understanding of perspective taking as “an orientation to an environment that is associated with acting within that environment” (234). What is new here is that my participants described a process in which they gave less attention to their interactions with other people in that environment and instead oriented themselves to action by carefully identifying and emulating the perspective of the algorithm that shaped the environment itself.

This was often an explicit process. When they explained how they were trying to figure out the algorithm’s preferences and needs, they were not merely seeking to reach through the algorithm to the social actors behind it to interpret the expectations of the human platform owners or even the human audience that would see their content. Rather, by carefully reading the technical cues to determine what kind of content was preferred by the platform and offering up a fabricated collage of images designed to attract its approval, they both talked about and interacted with the algorithm as if it were another subject.

This is a kind of reverse Turing Test. They were not fooled into thinking the algorithm was another human. Instead, they injected the algorithm with human characteristics, seeking to understand what was required of them by identifying the algorithm’s preferences and interacting with it as if it were another subject. They did this both directly (by feeding the platform information and watching to see what response was communicated back to them) and indirectly through the “followers” who acted as the algorithm’s proxies. Moreover, the importance they accorded to these algorithmic preferences was demonstrated by the kinds of identities my participants chose to perform in response to this interaction – such as Harry Potter Fan and Starbucks Consumer – even when these identities did not align with the selves and community they co-constructed offline and on private apps with friends and family.

Daniel (2016) provides an entry point into exploring this gap between online and offline selves when he rejects the notion of the unitary generalized other and posits a multiplicity of generalized others that can better take into account experiences of social actors who are located in a multiplicity of communities. Certainly, his insight that “the self is constituted by its participation in multiple communities but responds to them creatively by enduring the moral perplexity of competing communal claims” (92) describes the difficulties my participants talked about as they sought to be responsive to the multiple perspectives of the various social actors in their lives, including family, friends, schoolmates and algorithms. But reconceiving these various audiences as a “plurality of generalized others” (Martin 2005, 236), each of which reflects a self based on a specific set of expectations and aspirations shared by those inhabiting a particular community space (Daniel 2016, 99), makes it possible to conceptualize – and analyze – the algorithm as the algorithmic other, with its own commercially driven values and goals, that shapes selves and interactions in networked spaces.

To date, the most comprehensive critique of the commercial values and goals that shape the online environment has been made by Zuboff (2019). She argues that algorithms act as a form of Big Brother or, in her words, “a Big Other that encodes the ‘otherized’ viewpoint of radical behaviorism as a pervasive presence” (20). From this perspective, the problem rests in the fact that the algorithmic other does not operate to reflect the self back to the human observer so the human can see its performances as an object, but instead quantifies the fruits of social interaction in order to (re)define the self as an object that can be nudged, manipulated and controlled (Lanzing 2019; McQuillan 2016; Steeves 2020). In this way, the algorithmic other serves to:

automate us … [and] finally strips away the illusion that the networked form has some kind of indigenous moral content – that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Instead, digital connection is now a brazen means to others’ commercial ends.

However, Zuboff’s critique is dissatisfying as it gives us no way to talk about agency: if we are fully automated, then we have been fully instrumentalized. It also fails to capture the rich social-interactive context in which my research participants sought to understand and respond to the algorithms that shape their public networked identities. Once again, I suggest that Mead can help us because he lets us unpack the instrumentalizing logic of the nudge without giving up on agency altogether.

Certainly, my participants’ experiences suggest that the kinds of identities that we can inhabit in networked spaces are constrained to those that conform to the commercial imperatives of the online ecology. However, the notion that a particular community constrains the kinds of identities we are able to experiment with is not new. As Daniel (2016) notes:

It is crucial to appreciate that Mead’s generalized other is aggressive and intrusive, not passively composed by the self … This is clearer in the state of social participation, which requires the self to organize its actions so as to fit within a pattern of responsive relations whose expectations and aspirations precede this particular self’s participation … The generalized other should be understood as [this] pattern of responsive relations, which is oriented toward particular values and goals.

(100)

From this perspective, the types of identities that we see performed online in response to the algorithmic other concretize the values and goals embedded in online spaces by platform owners who mobilize algorithms for their own profit; and, by making those values visible, they open them up to debate. This makes the algorithm a key point of critique because it is a social agent that operates to shape and instrumentalize human interactions for the purposes of the people who mobilize it. From this perspective, to solve the kinds of polarization, harassment and misinformation we see in the networked community we must start by analyzing how algorithms create a fruitful environment for those kinds of outcomes. Unpacking how this works is the first step in holding those who use algorithms for their own profit to public account.

The sociality inherent in my participants’ interactions with the algorithmic other also lets us account for those small moments of agency reflected in the data. Mead posits that the self interacts with the generalized other in two capacities. The first is the social me that is performed for the generalized other and reflected back to the self so the self can gauge the success of its own performance. As noted in Section 8.1, the intrusiveness of the algorithmic other constrains the social me that can be performed in networked spaces. This is exemplified by my participants’ concern that their networked selves – artsy consumers of branded products – conform to the expectations of the algorithmic other even when they can’t draw a stick figure or don’t like coffee. However, the second capacity of the self is the I, the self that observes the social me as an object to itself and then decides what to project next.

Mead accordingly helps us break out of algorithmic determination by anchoring agency in the indeterminacy of the I as a future potentiality. This indeterminacy is constrained because it is concretized as the social me as soon as the I acts. But its emergent character reasserts the possibility of change and growth precisely because it can only act in the future. By situating action in a future tense of possibility, we retain the ability to resist, to choose something different, to be unpredictable, to know things about ourselves that have not yet come into being. In this sense, the algorithm can constrain us, but it cannot fully determine us because we continue to emerge.

Certainly, my research participants sought to exercise agency over their online interactions by revealing and hiding, making choices as part of an explicitly conscious process of seeing the objective self reflected back to them. They also wrested online space away from the algorithm on occasion. Birthday photos, for example, were consciously posted in order to break the algorithmic rules and to connect not with the abstract audience but with the humans in their lived social world, a social world which both overlaps with and extends beyond networked spaces. This demonstrates both a familiarity with and an ability to pull away from the algorithmic other in favour of the generalized other they experience in real world community.

8.3 Conclusion

I argue that my participants’ experiences demonstrate the paucity of identities available to networked humans who interact on sites that are shaped by the instrumental goals of profit and control. But those same experiences also underscore the rich sociality with which humans approach algorithmically driven ecologies, shaping their own interactions with the environment by injecting social meaning into the algorithm through their reading of the algorithmic other.

Certainly, the algorithmic positioning of human as object for its own instrumental purposes rather than for the social purposes of the self leaves us uneasy. Although we may feel reduced to an online self that is “compactified” into “a consumable package” and wonder if we can “know what it means to exist as something unsellable” (Fisher-Quann 2022), the point is we still wonder. Once again, agency exists as a potentiality in the moment of our own perusal of the self as object, in spite of – or perhaps because of – our interactions with the aggressive and intrusive nature of all generalized others (Daniel 2016).

Moreover, by conceiving of the algorithm as a social actor, we can extend the moment of human agency and bring the values and goals embedded in the algorithm out of the background and into the foreground of social interaction. From this perspective we can open up the algorithmic black box and read its instrumental intentions through the performances it reflects back to us because we recognize and interact with the algorithmic other as other. From this perspective, the algorithm only “masquerades as uncontested, consensus reasons, grounds, and warrants when they are anything but” (Martin 2005, 251). Acknowledging the algorithm as an inherent part of online sociality helps us begin the hard task of collectively confronting the politics inherent in the algorithmic machine (McQuillan 2016, 2).

Mead’s Carus Lecture in 1930 is prophetic in this regard. He said:

It seems to me that the extreme mathematization of recent science in which the reality of motion is reduced to equations in which change disappears in an identity, and in which space and time disappear in a four-dimensional continuum of indistinguishable events which is neither space nor time is a reflection of the treatment of time as passage without becoming.

Hildebrandt and Backhouse (2005) make the same point when they argue that the data that algorithms use to sort us are a representation, constructed from a particular point of view, of a messy, complicated, nuanced and undetermined person. They warn us that, if our discourse confuses the representation of an individual with the lived sense of self, we will fail to account for the importance of agency in the human experience. We will also be unable to unmask the values and goals of those humans who mobilize algorithms in the networked world for their own purposes.

Footnotes

3 The data was originally collected as part of the eQuality Project, a multi-year partnership of researchers, educators, policymakers, youth workers and youth funded by the Social Sciences and Humanities Research Council of Canada. For more information, see equalityproject.ca. The moment in time is also instructive, as it marks the shift away from early reports of enthusiasm for online self-exploration and connection (Environics 2000; Steeves 2005) to a more cautious view of online community as fraught with reputational risks (Bailey and Steeves 2015; Steeves 2012) and therefore something that is safer to watch than to participate in (Steeves et al. 2020).

4 We also suggested an alternative in case they were uncomfortable sharing a particular photo with us. In that case, they could submit a description of the photo instead. None of the participants opted for this alternative.

5 After collecting the photos, we conducted individual interviews between 60 and 90 minutes in length, using a semi-structured interview guide to explore their photo choices. Interviews were transcribed and subjected to a thematic qualitative analysis. The research protocols were approved by the research ethics boards at the University of Ottawa, the University of Toronto, Western University and George Mason University. For the original report, see Johnson et al. (2017).

References

Aboulafia, Mitchell. “George Herbert Mead and the Unity of the Self.” European Journal of Pragmatism and American Philosophy VIII, no. 1 (2016). https://journals.openedition.org/ejpap/465.
Andrejevic, Mark. iSpy: Surveillance and Power in the Interactive Era. Lawrence: University Press of Kansas, 2009.
Bailey, Jane, and Steeves, Valerie, eds. eGirls, eCitizens. Ottawa: University of Ottawa Press, 2015.
Benhabib, Seyla. Situating the Self: Gender, Community, and Postmodernism in Contemporary Ethics. New York: Routledge, 1992.
Cohen, Julie E. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. New Haven, CT: Yale University Press, 2012.
Daniel, Joshua. “Richard Niebuhr’s Reading of George Herbert Mead: Correcting, Completing, and Looking Ahead.” Journal of Religious Ethics 44, no. 1 (2016): 92–115.
Eichenhofer, Johannes, and Gusy, Christoph. “Courts, Privacy and Data Protection in Germany: Informational Self-determination in the Digital Environment.” In Courts, Privacy and Data Protection in the Digital Environment, edited by Brkan, Maja and Psychogiopoulou, Evangelia, 101–119. Cheltenham: Edward Elgar Publishing, 2017.
Ellis, David, Oldridge, Rachel, and Vasconcelos, Ana. “Community and Virtual Community.” Annual Review of Information Science and Technology 38, no. 1 (2004): 145–186.
Environics. Young Canadians in a Wired World, Phase 1: Focus Groups with Parents and Children. Ottawa: MediaSmarts, 2000.
Fisher-Quann, Rayne. “Standing on the Shoulders of Complex Female Characters: Am I in my Fleabag Era or Is my Fleabag Era in Me?” Internet Princess, February 6, 2022. https://internetprincess.substack.com/p/standing-on-the-shoulders-of-complex.
Graham, Mark. “Geography/Internet: Ethereal Alternate Dimensions of Cyberspace or Grounded Augmented Realities?” The Geographical Journal 179, no. 2 (2013): 177–182.
Haythornthwaite, Caroline. “Social Networks and Online Community.” In Oxford Handbook of Internet Psychology, edited by Joinson, Adam, McKenna, Katelyn, Postmes, Tom, and Reips, Ulf-Dietrich, 121–134. New York: Oxford University Press, 2007.
Hildebrandt, Mireille, and Backhouse, James, eds. D7.2: Descriptive Analysis and Inventory of Profiling Practices. European Union: FIDIS Network of Excellence, 2005.
Johnson, Matthew, Steeves, Valerie, Shade, Leslie, and Foran, Grace. To Share or Not to Share: How Teens Make Privacy Decisions about Photos on Social Media. Ottawa: MediaSmarts, 2017.
Koopman, Colin. “The History and Critique of Modernity: Dewey with Foucault against Weber.” In John Dewey and Continental Philosophy, edited by Fairfield, Paul, 194–218. Carbondale: Southern Illinois University Press, 2010.
Koskela, Hille. “The Other Side of Surveillance: Webcams, Power and Agency.” In Theorizing Surveillance: The Panopticon and Beyond, edited by Lyon, David, 163–181. London: Routledge, 2006.
Lanzing, Marjolein. “‘Strongly Recommended’: Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies.” Philosophy & Technology 32 (2019): 549–568.
Liu, Chen. “Imag(in)ing Place: Reframing Photography Practices and Affective Social Media Platforms.” Geoforum 129 (2022): 172–180.
Los, Maria. “Looking into the Future: Surveillance, Globalization and the Totalitarian Potential.” In Theorizing Surveillance: The Panopticon and Beyond, edited by Lyon, David, 69–94. London: Routledge, 2006.
Mackenzie, Adrian. “The Production of Prediction: What Does Machine Learning Want?” European Journal of Cultural Studies 18, no. 4–5 (2015): 429–445.
Mackenzie, Catriona. “Relational Autonomy: State of the Art Debate.” In Spinoza and Relational Autonomy: Being with Others, edited by Armstrong, Aurelia, Green, Keith, and Sangiacomo, Andrea, 10–32. Edinburgh: Edinburgh University Press, 2019.
Martin, Jack. “Perspectival Selves in Interaction with Others: Re-reading G.H. Mead’s Social Psychology.” Journal for the Theory of Social Behaviour 35, no. 3 (2005): 231–253.
McQuillan, Dan. “Algorithmic Paranoia and the Convivial Alternative.” Big Data & Society 3, no. 2 (2016): 1–12.
Mead, George Herbert. Mind, Self, and Society from the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press, 1934.
Mead, George Herbert. The Philosophy of the Present, edited by Murphy, Arthur E. LaSalle, IL: Open Court, 1932.
Nafisi, Azar. Reading Lolita in Tehran. New York: Random House, 2003.
Pridmore, Jason, and Wang, Yijing. “Prompting Spiritual Practices through Christian Faith Applications: Self-Paternalism and the Surveillance of the Soul.” Surveillance & Society 16, no. 4 (2018): 502–516.
Prus, Robert. “Generic Social Processes and the Study of Human Experiences.” In Symbolic Interaction: An Introduction to Social Psychology, edited by Herman, Nancy J. and Reynolds, Larry T., 436–458. Maryland: Rowman & Littlefield, 1994.
Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000.
Roessler, Beate. Autonomy: An Essay on the Life Well-Lived. Cambridge: Polity Press, 2021.
Steeves, Valerie. “A Dialogic Analysis of Hello Barbie’s Conversations with Children.” Big Data & Society 7, no. 1 (2020): 1–12.
Steeves, Valerie. “Now You See Me: Privacy, Technology and Autonomy in the Digital Age.” In Current Issues and Controversies in Human Rights, edited by DiGiacomo, Gordon, 461–482. Toronto: University of Toronto Press, 2016.
Steeves, Valerie. “Privacy, Sociality and the Failure of Regulation: Lessons Learned from Young Canadians’ Online Experiences.” In Social Dimensions of Privacy: Interdisciplinary Perspectives, edited by Roessler, Beate and Mokrosinska, Dorota, 244–260. Cambridge: Cambridge University Press, 2015.
Steeves, Valerie. Young Canadians in a Wired World, Phase II: Trends and Recommendations. Ottawa: MediaSmarts, 2005.
Steeves, Valerie. Young Canadians in a Wired World, Phase III: Talking to Youth and Parents about Life Online. Ottawa: MediaSmarts, 2012.
Steeves, Valerie, McAleese, Samantha, and Brisson-Boivin, Kara. Young Canadians in a Wireless World, Phase IV: Talking to Youth and Parents about Online Resiliency. Ottawa: MediaSmarts, 2020.
Thatcher, Jim, O’Sullivan, David, and Mahmoudi, Dillon. “Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data.” Environment and Planning D: Society and Space 34, no. 6 (2016): 990–1006.
Zuboff, Shoshana. “Surveillance Capitalism and the Challenge of Collective Action.” New Labor Forum 28, no. 1 (2019): 10–29.