
Doing things otherwise: Liveability as an epistemic reorientation to interdisciplinary work

Published online by Cambridge University Press:  23 October 2025

Nishant Shah*
Affiliation:
School of Journalism & Communication, The Chinese University of Hong Kong, Hong Kong SAR, China

Abstract

In this paper, I critically rethink how interdisciplinary approaches to artificial intelligence (AI) are framed, particularly in relation to technology-facilitated gender- and sexuality-based violence (TFGSBV). While collaboration across disciplines is often positioned as a solution to AI-related harms, I argue that these approaches can reproduce the very exclusions they aim to address. Drawing on my work with the Feminist Internet Research Network and dialogues with researchers and activists across the Majority Worlds, I propose liveability – not only as a principle, but as a practice – for interdisciplinary work.

Liveability refers to the conditions that allow individuals and communities to thrive – emotionally, politically and socially – even amid structural violence and technological harm. I treat interdisciplinarity not as a goal in itself, but as an ethical and relational process grounded in care, accountability and situated knowledge. Through feminist and queer perspectives, I examine how AI harms are often abstracted into technical problems, and I advocate for slower, more grounded practices that centre lived experience. Through lessons from Feminist Internet Research Network’s multistakeholder collaborations, I show how introducing liveability into interdisciplinary and multistakeholder work – particularly around TFGSBV – enables alternative, more inclusive forms of collaboration.

Liveability becomes a compass for working across difference, shifting interdisciplinary AI research towards justice, community and collective transformation.

Information

Type
Reflection
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-ShareAlike licence (http://creativecommons.org/licenses/by/4.0), which permits re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

We were assembled in Taipei, at Rightscon 2025, trying to build an expanded scope of technology-facilitated gender- and sexuality-based violence (TFGSBV) with researchers and activists from across the Majority Worlds. In the day-long co-creation workshop, conversations invariably veered towards artificial intelligence (AI) and the ways in which it is shaping and impacting the work of those who seek to minimise harms and protect those disproportionately affected by the rise of AI technologies. While the spectrum of harms was expansive and varied, discussions kept returning to an oft-repeated trope: we do not know enough about AI. The most common solution offered – arrived at through different geopolitical and discursive journeys – was interdisciplinarity and multistakeholderism. Collaboration across technical, social, ethical and legal domains to mitigate harms and ensure accountability seemed to be the preferred approach (Venkatasubramanian et al., 2020). Yet each time interdisciplinarity and multistakeholderism were proposed, an uneasy silence followed, recalling the histories of exclusion, hierarchies of legitimacy and disciplinary boundaries (Zeller and Dwyer, 2022) that often obscure the very injustices they seek to address.

It reminded me of another incident, almost a year before Rightscon, when I was in Chiangmai to host the convening of the third cohort of the Feminist Internet Research Network (FIRN). We were working with new fellows examining TFGSBV in different parts of the world, again focusing on AI technologies. In one conversation about producing more academic knowledge and interdisciplinary research around AI, there was an explosive outburst from a cohort member from Latin America – a sex worker turned organiser for sex rights and protection against exploitation, automation and replacement. “My body is not disciplined. I do not want you to study me anymore,” she proclaimed. What followed was a scathing critique of how interdisciplinary knowledge producers in the region had, while advocating for diverse discourses, repeatedly trapped sex workers as unknowing and unknowable subjects in research on AI, human trafficking and sex work.

Both these disruptions acknowledged the value and need for polyvocal narratives and epistemes, while remaining hesitant and cautious about the institutionalised mechanisms that operationalise them. I take them as a beginning point to think about interdisciplinarity in AI research and practice through the lens of liveability, particularly as it emerges from experiences of TFGSBV discourse and practice. Drawing on feminist and queer critiques of disciplinarity, I ask: What if we stopped centring interdisciplinarity as the solution and instead focused on liveability as the starting point? What if interdisciplinary practices emerged from a commitment to liveable futures rather than an obligation to bridge predefined disciplinary divides?

Building on the immediacy and critical urgency of addressing AI and TFGSBV, how might we reorient our conversations around interdisciplinarity from methodological anxiety to epistemological care? I do not propose liveability as a supplement to interdisciplinarity, but as a method for transforming how knowledge is co-created across difference. This framing recognises that some interdisciplinary approaches – particularly those informed by feminist, decolonial and critical traditions – already share these commitments, yet insists that liveability offers a distinctive organising principle. It shifts the stakes of interdisciplinarity by centring lived experience, political urgency and situated knowledges, and by asking how we make our interdisciplinary work liveable.

1. AI, interdisciplinarity and the need to know more

Calls for interdisciplinarity in AI emerge from both a recognition of complexity and an anxiety about the limitations of disciplinary silos. No single discipline can capture the intricacies of AI-facilitated gender- and sexuality-based violence, given the distributed nature of attacks, the increasing availability of weaponisable technological agents and bots, and the global reach of cascading memetic practices that consolidate in the digital manosphere, manifesting as incel and Men’s Rights Activist movements. The nature of these violences is simultaneously individual, collective and institutionalised.

However, as Andrew Barry, Georgina Born and Gina Weszkalnys (Barry et al., 2008) argue, interdisciplinary initiatives often mask power differentials that shape knowledge production. They write, “Interdisciplinarity can act as a mode of governance,” noting how funding structures and institutional mandates may privilege certain disciplines – especially STEM (Science, Technology, Engineering and Mathematics) fields – over others. This leads to what they call the “reproduction of hierarchy under the guise of dialogue.” This insight sits alongside critiques from Helga Nowotny (2003), Elizabeth Losh and Jacqueline Wernimont (2018) and Geoffrey Bowker (2005), who together highlight how well-intentioned interdisciplinarity can reify disciplinary distinctions, perform inclusion without structural change and embed the values and exclusions of its designers. These patterns resonate with conversations at Rightscon, where we recognised that interdisciplinary work around “ethics,” “responsibility” or “governance” of AI often becomes a way of managing dissent – offering moral legitimacy without necessarily changing how decisions are made.

As Kusters et al. (2020) note, frameworks for interdisciplinary AI research often reify disciplinary distinctions even as they attempt to traverse them. Similarly, Abbonato et al. (2023) show that interdisciplinarity in pandemic-related AI research often becomes a rhetorical gesture rather than a methodological commitment.

The refusal in that FIRN meeting called out the language of integration and collaboration that masks a deeper unwillingness to confront how certain knowledges are systematically devalued, marginalised or rendered unintelligible. Researchers from the Majority Worlds, disabled scholars, queer organisers and grassroots communities are frequently invited into “collaboration” in ways that tokenise their expertise or extract their insights without structural reciprocity.

Geoffrey Bowker’s (2018) critique of sustainable knowledge infrastructures reminds us that infrastructures are never neutral; they reflect the values and exclusions of their designers. In his earlier work, Bowker (2005) warned that “every database is a frozen ideology.” In the context of AI, we might say that every dataset, every training protocol and every benchmark test encodes a worldview – one that too often excludes gendered, sexual, racial or Indigenous perspectives.

We ended the workshop at Rightscon wondering who would do the work of interdisciplinarity – a process requiring resources and instruments of mobilisation ranging from funding applications and strategy documents to academic publishing and capacity development (Barry and Born, 2013). The labour of making interdisciplinarity possible – translation, mediation, relationship-building – is often feminised, invisibilised or deemed secondary to the production of legible outcomes. The infrastructural demands of interdisciplinary work – shared vocabularies, ethical frameworks and funding mechanisms – require time, trust and critical reflection on personal and collective privilege and on the processes of othering.

Helga Nowotny (2003) argues for reflexivity in interdisciplinary work, introducing the idea of “socially robust knowledge” – knowledge that is accountable to public concerns, open to critique and aware of its own limitations. Without attention to these relational dynamics, interdisciplinarity risks becoming a performance of inclusion rather than a practice of transformation – an extinction impulse. These critiques are echoed in the work of feminist epistemologists such as Sandra Harding (1991) and Lorraine Code (1995), whose concepts of “strong objectivity” and “epistemic responsibility” insist that knowing well is as much about relationality, care and responsiveness as it is about accuracy. From this perspective, interdisciplinarity cannot be an end in itself but must remain a question of how, and with whom, we come to know.

2. AI, harms and the narratives we choose

Technologies are part imaginaries (Jasanoff and Kim, 2015), part infrastructure (Chun, 2016) and part narratives (Shah, 2018). Within queer and feminist interventions in digital technologies, there is a clear understanding that technologies and our relationships with them are narrative realities. We become the stories we choose to tell, and these stories shape how we approach and know technologies. Approaching AI technologies as stories opens up space for polyvocal narratives that attend to different aspects of AI.

In the context of TFGSBV, the narratives of harms are strong, though not the default. Given the pervasive harms perpetuated by AI systems – including biased algorithmic targeting, predictive policing tools and racially discriminatory facial recognition technologies (Benjamin, 2019; Buolamwini and Gebru, 2018; Eubanks, 2018) – it is unsurprising that harms dominate the discourse. Centring harms offers a critical opening for the rapid proliferation of interdisciplinary research agendas in policy hubs, think tanks and ethics labs across academia, corporations, governments and civil society. Yet focusing on harms also demands that we examine who defines harm, whose voices shape the solutions and whose knowledge is made actionable.

Harms are both materially evident and experientially inchoate, making them a persistent lens for examining interdisciplinarity. Too often, interdisciplinary AI solutions abstract harms into categories such as “bias” or “risk,” detaching them from the histories of colonialism, racial capitalism and patriarchy in which they are embedded. As Sasha Costanza-Chock (2020) notes in their design justice framework, collaboration outside technical domains is frequently invited to “inform” rather than “transform,” with those impacted by harm rarely granted epistemic authority equal to that of researchers. Gayatri Chakravorty Spivak (2012) calls this “sanctioned ignorance” – the structural incapacity of dominant institutions to recognise the epistemic contributions of the subaltern.

AI harms, particularly TFGSBV, resist the managerial turn that focuses on technical optimisation and perpetuates the violence of abstraction. They also resist the disembodied universalism that Donna Haraway (1988) critiqued as the “god trick” of objectivity – the state of seeing without being seen. In the case of black-boxed AI technologies, the invisibility of processes demands that we attend to the specific, embodied and affective dimensions of how harm is produced and lived. This shift moves us, as Haraway (2016) urges, “from solutionism to care, from modelling to listening.” It also aligns with Sheila Jasanoff’s (2007) call for “technologies of humility” as a counter to the hubris of contemporary AI governance. For Jasanoff, humility means recognising the limits of knowledge, acknowledging the contested nature of expertise and creating space for public reasoning and dissent.

For research on TFGSBV and AI, this approach involves deferring to those whose lives are most affected, rather than defaulting to institutional authority. It is, in Sara Ahmed’s (2023) terms, an attempt to make knowledge liveable by making institutions uncomfortable.

3. Liveability as an epistemological reorientation

Unsettling the interdisciplinarity-in-AI narrative by bringing in the question of harms also makes space to anchor it through the concept of liveability, as proposed by Audre Lorde (1984). Lorde’s formulation shifts the terrain from survival to thriving in our ongoing struggles for dignity, joy and recognition in the face of systemic violence. In this context, liveability refers to the conditions that allow marginalised communities to flourish by reorienting towards justice, care and sustainability – an orientation that can guide interdisciplinary work without making interdisciplinarity an end unto itself. By bringing liveability to the centre, we move from seeing interdisciplinarity as a procedural fix to understanding it as a relational, ethical commitment rooted in situated demands.

Liveability challenges the extractive tendencies of AI technologies and interdisciplinary AI projects that treat social justice as a management or public relations category. Instead, it demands accountability from interdisciplinary AI work itself. Parera (2022) expands on this in the context of feminist AI, where the goal is not simply to make AI less harmful but to reimagine its very foundation. Similarly, Sharma (2020), in her “Manifesto for the Broken Machine,” shows how interdisciplinary frameworks that study conditions of breakdown often assign responsibility to individuals – “human error” comes to mind – rather than interrogating structural conditions. Both draw from a long history of feminist care work that centres maintenance, repair and collective flourishing (Puig de la Bellacasa, 2017; Tronto, 1993).

Liveability offers an epistemic reorientation for doing interdisciplinary work on AI technologies. It decentres abstract problem-solving or decision-making and instead asks what it takes to sustain life – not only biologically, but politically, emotionally and spiritually. It shifts emphasis from impact to relation, from evidence to endurance. Rather than treating AI-related harms as discrete problems to be solved, it begins with the lived experiences of communities who must be heard. It privileges processes that are slow, relational and grounded in trust, resisting the pace and scale of AI technologies that often dictate the rush of interdisciplinary research. This shift is echoed in Lauren Berlant’s (2011) concept of “slow death” – the incremental, exhausting effects of systemic harm – and in the political imperative to interrupt these conditions with practices that sustain life otherwise.

Jack Halberstam’s (2011) critique of the disciplinary drive to produce legibility and coherence, articulated in The Queer Art of Failure, positions traditional disciplines as invested in “positivity, reform and accommodation,” against which he champions negation, refusal and failure. Halberstam calls for a disidentification with the conventions of academic labour, and liveability is a framework that facilitates this move. It is an intersectional framework, drawing from queer, trans and crip theory, that opposes a survivalism demanding resilience without structural change. Jasbir Puar’s (2017) critique of “the right to maim” argues that the state tolerates certain life forms only to the extent that they remain compliant. In contrast, liveability is about thriving on one’s own terms.

It centres situated demands and enables a resistant reading of emerging technologies and their inscription on our bodies. Alison Kafer’s (2013) intervention in disability justice, as a politics of coalition and futurity, similarly rejects normative timelines of productivity and efficiency, calling for new frameworks to examine our engagements with technological violence. The operational condition of liveability is community and coalition – both requiring multiple forms of resources and labour. As Chandra Talpade Mohanty (2003) reminds us, the work of building solidarity across difference is messy, unfinished and deeply necessary. It cannot be reduced to a methodology or blueprint upon which interdisciplinarity can be mounted. It asks us to attend to what Alexis Pauline Gumbs (2018), in a poetics of care, calls the “fugitive science” of survival – informal, ancestral and improvisatory practices that refuse to be codified into extractive datasets curated by uncaring algorithms.

I offer liveability as a method for doing interdisciplinarity in AI that redefines what counts as knowledge and what is valued. It means valuing dialogue that does not resolve, community partnerships that outlast immediate grant cycles, and critiques that may not be legible within existing metrics. It builds on Maya Indira Ganesh’s (2025) provocation that if we evaluate the ethics of automated machines through the rubrics of “machine ethics,” we only work with categories legible to those machines (and the systems that support them), leaving no space for external critique. Liveability, when applied to interdisciplinary AI research, is less about bridging disciplines and more about creating spaces where other forms of knowing can take root. It reframes knowledge production as relationship-building across time and asymmetry – an ongoing, ethical commitment rather than a deliverable with a deadline.

4. Doing things otherwise: lessons learned from the Feminist Internet Research Network

The Feminist Internet Research Network (FIRN), housed at the Association for Progressive Communications, began in 2018 to develop feminist research methods for studying the Internet and related technologies. Aligning itself with the questions of TFGSBV, it invites partners from Majority World countries to engage in research that centres the experiences of women, LGBTQ+ people, and other intersectionally marginalised communities. This work responds to harms with an orientation towards liveability, recognising that feminist approaches are rarely integrated into mainstream interdisciplinary frameworks, and that feminist principles of the Internet have largely been operationalised by feminist and queer groups outside formal institutional settings.

From the outset, FIRN sought to partner with diverse organisations and movements to develop a framework of liveability that moved beyond the disciplinary blind spots of research that has historically sidelined or ignored feminist interventions in technology and rights-based policymaking. This meant identifying epistemic exclusions and resisting classification systems that position the knowledge of marginalised communities as marginal. Through its partnerships, FIRN has prototyped and piloted approaches that interrupt these systems, creating space for ways of knowing that might otherwise be dismissed as anecdotal, emotional, or unscientific.

In each collaboration, liveability and an attention to harm have been the guiding forces for conducting interdisciplinary research on digital technologies. These projects have not simply applied a gender lens to technology; they have redefined what counts as violence, evidence and response. Rather than seeking a universal framework, the research has foregrounded contextual specificity and community-led solutions (McLean, 2022). The network’s coalitional structure has made it possible to hold complexity, accommodate divergent methods and value forms of knowledge production that do not fit neatly into dominant disciplinary categories.

I have worked as a Peer Advisor and a Knowledge Partner with FIRN since its inception, following the feminist ethics of not speaking on behalf of partners who have all produced provocative, expansive and critical work documented in academic and creative forms. In place of narrating individual projects or outcomes, I share three touchstones that have shaped my understanding of how interdisciplinary research on AI can be done otherwise. These lessons show how liveability is not a precondition for coalition work, but its by-product – emerging through sustained, situated and relational practices across difference.

4.1 Reimagining datafication

One of the most pervasive components in research on AI-driven gender and sexual violence is the question of data. Current interdisciplinary practices take datafication as a given, investing energy in examining the integrity, diversity, veracity and interpretation of datasets. Questions of gendered and sexual harms are often translated into a demand for more, and more diverse, data – continuing and naturalising the extractive mode of datafication that underpins AI systems. Even participatory or community-driven approaches often centre and accept datafication as inevitable, focusing on usage and integrity rather than interrogating the premise itself.

FIRN partners have shown that data may be the default, but it is not the only form of informational unit in AI systems. A partner working on the mental health and well-being of queer communities under persecution introduced the idea of maps of aspiration: for those unable to come out, walk out or leave their circumstances, these maps traced part-digital, part-physical pathways towards small pockets of kinship. They marked moments where one could step into spaces of safety, joy and desire. These pathways were not tangible data points, could not be verified and did not build coordinates on conventional maps – yet they became powerful modes of aspiring, imagining and finding comfort and belonging. Another partner providing helpline services to women in domestic abuse situations developed a small language model trained only on a tightly curated set of materials, offering first-response support to women in crisis. This agent-based chatbot did not aim for scale, memory or universal applicability; instead, it created a specific informational pathway for those in distress.

A liveability framework begins by looking at those most affected by automated decision-making and inviting them to define what their information looks like and how it answers to their logics of interpretation and verification. The capacity of community-driven datafication projects to produce such alternative frameworks – which resist and remain unintelligible to dominant AI logics – opens possibilities for recognising stories, emotions and embodiment as legitimate epistemic practices. The liveability question for interdisciplinarity is to interrogate the very process of datafication, dismantling structures that cannot accommodate responsive, grounded and flexible categories of information beyond conventional data.

4.2 Making meaningful access

Driven by ICT4D (information and communication technologies for development) narratives that cast the Majority Worlds as “disconnected” and thus fetishise universal access as a single focus point, large-scale efforts to create access infrastructures and ecosystems often ignore the risks and harms of digital access. There is little attention to the transformations demanded of individuals to gain access, the potential risks of surveillance and targeting that come with access, and the lack of legal and regulatory systems to offer protections and minimise harms. The idea that access will resolve inequity often neglects that access to AI is mirrored by access by AI, making precarious populations even more vulnerable as their lives are inflected by automated systems.

Across multiple conversations, a recurring recognition was that access is not merely a point of empowerment but also a site of vulnerability – especially in contexts without robust data protection and privacy regimes. In many spaces, having access becomes the same as being accessed. A collective thought experiment reframed this problem by asking what collective access might look like: not simply giving access to collectives, but de-individuating access so that digital identities and risks are distributed. Partners cited everyday practices in parts of sub-Saharan Africa where people without personal devices use cybercafés and shared access points, often going in small groups and operating social media accounts or apps from a common terminal. Being together – having real-life advisors present as online decisions are made – functioned as a form of safety-in-practice that challenged the presumption of solitary, individualised access.

Queer practitioners also foregrounded the risks of facial recognition tools used for “outing,” proposing that access should be opt-in rather than assumed: presence in a space or platform would not constitute consent for data collection or scanning; explicit consent would be required, paired with a meaningful right to be forgotten. These discussions did not reject access; they redefined it through liveability – prioritising safety, agency and collective protection over sheer connectivity.

More importantly, partners in FIRN helped us understand that “disconnection” is a misnomer. Many who lack devices or high-speed internet are still embedded in the broader AI supply chain: from extractive labour and resource mining to content moderation and gig work. In our approach to interdisciplinarity around access, then, mapping the invisible bodies and erased communities implicated in AI systems – but hidden beneath the gentrified idea of access-as-usage – becomes essential.

4.3 Interrogating infrastructures

AI infrastructures are expansive. They include a combination of hardware, software and wetware – physical computation infrastructure, algorithmic and data-driven infrastructure, and human inputs and labour. In conversations on interdisciplinarity, the focus on material devices and visible deployments often erases the bodies and communities deeply impacted by, and working within, these systems.

Partners in FIRN have brought forward different understandings of what counts as “infrastructure” when seen through the lens of liveability. From India, a research process revealed how non-literate women, though owning mobile phones, were unable to engage meaningfully because their keyboards and interfaces were in English. Their access depended on school-going children who acted as mediators – becoming the living interface between device and user, enabling access to information, pleasure and connection that would otherwise be denied. Here, language, literacy and familial relationships emerged as infrastructural resources, invisible to conventional AI frameworks yet critical for equitable participation.

Another partner working with queer communities living under conditions of genocide and erasure described how their vulnerability was compounded by external contexts in which their land and communities were being displaced. These conditions shaped their engagement with AI technologies in ways that could not be fully captured by standard interdisciplinarity models, which tend to frame infrastructure in technical or economic terms. For them, land, safety and communal continuity were infrastructural foundations for any technological interaction.

Such insights shift the conversation on governance and ownership. They reframe control over AI not only as a question of who owns servers or platforms, but also as a question of who safeguards the conditions – linguistic, cultural, ecological – that make technological interaction possible in the first place. Balancing local stewardship with participation in broader AI systems requires recognising these invisible infrastructures as central, not peripheral, to design and governance. This is where liveability offers a distinctive orientation: insisting that infrastructures are not merely technical backbones but the intertwined material, social and cultural conditions that sustain life in the face of technological change.

5. Just getting things done – a pause

I have argued that the dominant framing of interdisciplinarity in AI research risks recentralising disciplinary authority and displacing the lived realities of those most affected by technological harms. It is a throwback to Michel Foucault’s (1995) analysis of disciplinarity as a “technique of modern power” that structures how knowledge is produced, who counts as an expert, and what forms of inquiry are deemed valid. When interdisciplinary collaborations emerge within university settings tethered to metrics, funding priorities and institutional hierarchies, they risk reinforcing rather than dismantling these structures. Disciplinary formations are both productive and repressive, and the work of interdisciplinarity rarely exceeds these limitations. Even when it appears transgressive, it disciplines the subject of scholarship through multiple hierarchies and exclusions.

In offering liveability as a framework for doing interdisciplinary work without making interdisciplinarity the object of inquiry, I do not dismiss all forms of interdisciplinarity. Indeed, some feminist, decolonial and critical approaches already resist the logics I critique, working from commitments to situated knowledge, equity and justice. My intervention is aimed at the managerial, tokenistic and extractive forms that too often dominate AI ethics and governance spaces.

Drawing from Melissa Gregg’s (2018) Counterproductive, which examines how life, labour and longing are reorganised around the mantra of “getting things done,” I argue that current interdisciplinary and multistakeholder research is similarly organised through logics of extraction, optimisation and efficiency embedded in AI technologies. The gravitational pull of these logics is oppressive, keeping us bound to managerial frameworks that invest in bridging disciplines over cultivating relationships, in methodological purity over political urgency, in preparing for survival over building for liveable futures.

Beginning with harms and establishing that AI technologies create and naturalise conditions of harm – especially for women and queer communities – allows liveability to emerge as a challenge to these logics. To centre liveability is to embrace difficulty, interruption and divergence. Feminist commitments to justice insist that we ask not only what works, but for whom, and at what cost.

I do not offer liveability as a universal theory or uniform framework. It is a situated, speculative method for reorienting interdisciplinary approaches in AI scholarship. It invites researchers and knowledge stakeholders to ask what makes knowledge practices liveable. It demands a pause – to consider how we show up with care in spaces of interdisciplinary tension. It resists the compulsion to resolve difference too quickly, something algorithmic structures of AI do all too easily, and instead dwells in slow, collective transformation that replaces “getting things done” with doing things otherwise.

Interdisciplinarity remains part of my own practice; collaboration across difference is unavoidable in navigating the uneven terrain of AI development. Yet if collaboration is pursued only for quick solutions to urgent questions, it risks reinforcing the very epistemic and material practices of AI that we seek to question. Liveability offers another compass: not towards consensus but towards solidarity, not towards efficiency but towards equity, not towards purity but towards possibility. In refusing the rush towards resolution, we might find ourselves in the company of those still collectivising, organising, imagining and enacting futures worth living in.

Acknowledgements

This paper is deeply indebted to many conversations with the key members of the Feminist Internet Research Network and the research partners who form this growing community. I owe gratitude for political solidarity and intellectual inspiration, particularly to Dr. Tigist Shewarega Hussen, whose framing and holding of the network has allowed the conversations around liveability to emerge, and to Namita Avriti Malhotra, whose leadership and generous critical inputs anchor theory in praxis. Networks are infrastructures, and the support and championing of this work by Ruhiya Seward at IDRC make this research and inquiry possible. This paper has also benefited from all the participants at Rightscon 2025 who joined the day-long workshop on the expanded scope of TFGSBV organised by the Association for Progressive Communications, and from engagements with the narrative change experts at Dakila, The Philippines.

Funding statement

I have received no funding for this article. However, as the academic advisor to the Feminist Internet Research Network, I have received an honorarium to supervise and work with the different projects, and this paper draws from that experience.

Competing interests

None

Nishant Shah is Professor of Global Media & Culture at The Chinese University of Hong Kong, where he directs the Digital Narratives Studio and the Master’s in Global Communication. He is a Faculty Associate at the Berkman Klein Center for Internet & Society, Harvard University, and a knowledge partner to the Feminist Internet Research Network and the Take a Pause Fund. His work sits at the intersections of digital technologies, feminist/queer theory and social justice, focusing on south–south connections through projects such as the Generative AI Network (GAIN), which builds anticipatory skills for collective futures, and the Possible AIs Speculative Technology Institute, which equips movement builders to imagine and build alternative AIs.

References

Abbonato, D., Bianchini, S., Gargiulo, F., & Venturini, T. (2023). Questioning the impact of AI and interdisciplinarity in science: Lessons from COVID-19. https://arxiv.org/abs/2304.08923
Ahmed, S. (2023). The feminist killjoy handbook. Penguin Random House.
Barry, A., & Born, G. (Eds.). (2013). Interdisciplinarity: Reconfigurations of the social and natural sciences. Routledge.
Barry, A., Born, G., & Weszkalnys, G. (2008). Logics of interdisciplinarity. Economy and Society, 37(1), 20–49. https://doi.org/10.1080/03085140701760841
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Berlant, L. (2011). Cruel optimism. Duke University Press.
Bowker, G. C. (2005). Memory practices in the sciences. MIT Press.
Bowker, G. C. (2018). Sustainable knowledge infrastructures. In N. Anand, A. Gupta, & H. Appel (Eds.), The promise of infrastructure (pp. 203–228). Duke University Press. https://doi.org/10.2307/j.ctv12101q3.12
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
Chakravorty Spivak, G. (2012). An aesthetic education in the era of globalization. Harvard University Press.
Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. MIT Press.
Code, L. (1995). Rhetorical spaces: Essays on gendered locations. Routledge.
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Foucault, M. (1995). Governmentality. In G. Burchell, C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies in governmentality (pp. 87–104). University of Chicago Press.
Ganesh, M. I. (2025). Auto-correct: The fantasies and failures of AI, ethics, and the driverless car. ArtEZ Press.
Gregg, M. (2018). Counterproductive: Time management in the knowledge economy. Duke University Press.
Gumbs, A. P. (2018). M Archive: After the end of the world. Duke University Press.
Halberstam, J. (2011). The queer art of failure. Duke University Press.
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066
Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.
Harding, S. (1991). Whose science? Whose knowledge? Thinking from women’s lives. Cornell University Press.
Jasanoff, S. (2007). Technologies of humility. Nature, 450(7166), 33. https://doi.org/10.1038/450033a
Jasanoff, S., & Kim, S.-H. (Eds.). (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.
Kafer, A. (2013). Feminist, queer, crip. Indiana University Press.
Kusters, R., Misevic, D., Berry, H., Cully, A., Le Cunff, Y., Dandoy, L., Díaz-Rodríguez, N., Ficher, M., Grizou, J., Othmani, A., Palpanas, T., & Faure, G. (2020). Interdisciplinary research in artificial intelligence: Challenges and opportunities. Frontiers in Big Data, 3, 577974. https://doi.org/10.3389/fdata.2020.577974
Lorde, A. (1984). The transformation of silence into language and action. In Sister outsider: Essays and speeches (pp. 40–44). Crossing Press.
Losh, E., & Wernimont, J. (Eds.). (2018). Bodies of information: Intersectional feminism and digital humanities. University of Minnesota Press.
McLean, N. (2022). Feminist Internet Research Network: Meta-research project report. Association for Progressive Communications. https://www.apc.org/en/pubs/feminist-internet-research-network-meta-research-project-report
Mohanty, C. T. (2003). Feminism without borders: Decolonizing theory, practicing solidarity. Duke University Press.
Nowotny, H. (2003). Democratising expertise and socially robust knowledge. Science and Public Policy, 30(3), 151–156. https://doi.org/10.3152/147154303781780461
Parera, M. (2022). Feminist artificial intelligence: Towards a research agenda for Latin America and the Caribbean. Feminist AI. https://feministai.pubpub.org/lachub-en (accessed October 11, 2025).
Puar, J. K. (2017). The right to maim: Debility, capacity, disability. Duke University Press.
Puig de la Bellacasa, M. (2017). Matters of care: Speculative ethics in more than human worlds. University of Minnesota Press.
Shah, N. (2018). Or the re-making of digital narratives in times of ChatGPT. European Journal of Cultural Studies. https://journals.sagepub.com/doi/10.1177/13675494231223572
Sharma, S. (2020). A manifesto for the broken machine. Camera Obscura, 35(2), 171–179. https://doi.org/10.1215/02705346-8359652
Tronto, J. C. (1993). Moral boundaries: A political argument for an ethic of care. Routledge.
Venkatasubramanian, S., Bliss, N., Nissenbaum, H., & Moses, M. (2020). Interdisciplinary approaches to understanding artificial intelligence’s impact on society. arXiv. https://arxiv.org/abs/2012.06057
Zeller, F., & Dwyer, L. (2022). Systems of collaboration: Challenges and solutions for interdisciplinary research in AI and social robotics. Discover Artificial Intelligence, 2, Article 12. https://doi.org/10.1007/s44163-022-00027-3