
The normative body and ‘stupid AI’: challenging compulsory able-bodiedness in human-AI interaction

Published online by Cambridge University Press:  27 August 2025

Jinxiu Rebecca Han*
The University of Texas at Dallas, USA

Anne Balsamo
The University of Texas at Dallas, USA

Abstract:

This research examines how, during human-AI interaction, generative AI’s depiction of human bodies reflects and perpetuates able-bodied norms, positioning disabled or grotesque bodies as “errors.” Through a feminist and disability studies lens and employing archival research and visual analysis, this research challenges traditional notions of bodily normativity, advocating for inclusivity in AI-generated imagery. It underscores how labeling nonconformity as an error perpetuates able-bodied standards while erasing the visibility and autonomy of disabled bodies. By critiquing generative AI’s role in reinforcing societal norms, this study calls for reimagining human-AI interactions with a shift in perception and advocates for an approach that neither devalues nor excludes disabled bodies.

Information

Type
Article
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Generative artificial intelligence (generative AI) has become deeply integrated into our daily lives. It is a system that creates content—such as text, images, music, or videos—based on input data and learned patterns. This evolution in human-AI interaction marks a transformative moment in digital culture (Crawford, 2022), enabling unprecedented creative possibilities while simultaneously raising ethical questions about bias embedded in code. In this paper, I focus on how decision-making in human-AI interactions involving the generation of human or human-like body images is influenced by collectively held social norms of compulsory able-bodiedness. My research challenges the able-bodied assumptions that structure this decision-making process, seeking to develop a critical framework for understanding how collective bodily normativity is conceived and perpetuated. I argue that current human-AI interactions that assemble images of human or human-like bodies provoke judgments about how the human body should look. These judgments are widely discussed on the internet, often labeled as flaws of “stupid AI,” and lead to various efforts to modify or circumvent them, ultimately reinforcing the notion of an ideal, able-bodied standard (Siebers, 2011). My research methodology combines archival research and visual analysis of generative AI images on the internet. I then analyse efforts on social media to enforce the application of specific prompts or to use other operational tools to eliminate the “error” body images that generative AI produces. Afterward, I position my research within a feminist and disability studies framework and aim to provoke reflection on what is considered “normal” and what is not, highlighting how the normalization of bodies leads to the stigmatization of disabled bodies (Hall, 2011). My research then critically interrogates the consequences of this human-AI interaction, which has further marginalized minority groups and rendered them invisible in digital visual culture on the internet. By fostering interdisciplinary dialogue among technologists, scholars, and users, this research underscores the urgency of reimagining AI development in ways that actively resist bodily normativity and promote diverse, equitable representations in digital visual culture.

2. Collective standard for AI-generated body images

In looking at generative AI bodily images, people quickly distinguish between the normal and the grotesque. Generative AI may produce overly elongated limbs, misplaced facial features, or excessive numbers of fingers. It attempts to create human-like body shapes based on data about body appearances, but it lacks an understanding of “biological logic” (Chayka, 2023). As a result, it produces body parts that resemble human forms but cannot replicate their functional realities. In this context, the “silly flaws” of AI-generated body images are often labelled as flaws of AI (Dixit, 2023). BBC Science Focus has stated directly that creating a “normal-looking hand” is an “impossible task” for AI. The YouTube video Why AI art struggles with hands illustrates why AI cannot deal with complex human body parts such as hands, teeth, and abs when generating images (Edwards, 2023). Producer Phil Edwards points to three reasons: first, the datasets for complex human body parts like hands are neither large nor detailed enough to train AI to “learn” them; second, the training data do not specify each posture of the human body; and finally, and most importantly, there is “the low margin of error” when AI generates human body images (Edwards, 2023). If AI is asked to generate an apple (Figure 1), it searches its database for apple-like images and then produces its own image of an apple with apple-like shape, color, and dimensions. Even when the apple is irregular in shape or color, the tolerance for error remains high. In Figure 1, generated by DALL·E, some of the apples on the right side have textures too regular to read as normal apples, yet they remain acceptable. Human bodies, however, present a much greater challenge because they allow a much lower margin of error: the collective view is that even small inaccuracies or deviations from the expected appearance of human body parts are highly noticeable and often perceived as significant flaws. On X (formerly Twitter), the well-known graphic designer @TopLobsta (2023) observed: “Every iteration of AI art programs …with one significant error: the hands. Specifically, the fingers.” This remark highlights the recurring idea of “the low margin of error” in the generative imaging process of every human-AI interaction and shows the stereotyped expectation of what a human should look like. Discussions on the internet show that artificial intelligence influences people’s everyday decision-making in invisible ways (Diefenbach et al., 2022). Scholars have raised concerns about bias in artificial intelligence (Fang et al., 2024) and have illustrated how it deepens the invisibility of gender, race, and underrepresented groups in digital spaces (Ndaka et al., 2024; Hall & Ellis, 2023). Research shows that artificial intelligence disadvantages underrepresented minority groups (Gwagwa et al., 2021) and may exclude minority group perspectives from algorithmic sampling (Zou & Schiebinger, 2018).
This circumstance leads to “the feedback loop mechanism that the gender-biased results are fed back to the system, thereby deepening the biases” (Zou & Schiebinger, 2018, p. 324). Although AI bias has been explored in the history, philosophy, and sociology of computer science and artificial intelligence, there is still a need for broader exploration of AI-generated body images within disability studies. This is necessary to prevent the generative AI industry from using art and creative expression to normalize the hierarchical values and cultural norms embedded in it (Siebers, 2001).

Figure 1. DALL·E generates apples

3. AI’s decision-making process and humans’ post-image manipulation

Take DALL·E as an example. In the text-to-image process of generating a human body image, it associates the prompt text with images, learning through a diffusion model that starts from random noise and gradually refines it into an orderly image by applying a series of small changes step by step, then decodes the result to produce a low-resolution image (Silverman, 2024). It then uses optimization algorithms to enhance clarity and resolution, ultimately producing a high-quality image. From the perspective of generative AI, when a person provides a prompt to DALL·E (Figure 2), they are essentially providing the beginning of a sequence and asking the model to continue it. The AI program then executes a series of instructions, passing information through a sequence of decision trees. Each decision tree guides and redirects the information, adding new instructions until it reaches the endpoint requested by the user (Pennefather, 2023). In this process, the methods employed by AI enable it to perform powerful data-matching tasks over vast amounts of aggregated data with just a simple click. In the process of AI generating images, both conscious and unconscious decisions are incorporated (Ndaka et al., 2024). The decision-making process by which AI selects where each pixel should be placed is constrained by the data it has received, which is in turn decided by mainstream social agencies; these agencies define impressions such as access, values, and socio-economic classifications, and influence the technological structures that implement those impressions within society (Balsamo, 2011). AI inherently lacks free will and operates within the constraints of its design (Hristov, 2016). The images it generates are still shaped by biases embedded in the algorithms and by the collective societal influences that individuals experience.

Figure 2. How DALL·E works
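The control flow just described (start from random noise, apply small prompt-guided refinements step by step, then decode and upscale) can be made concrete with a minimal sketch. The following Python fragment uses entirely hypothetical stand-in functions; it mirrors the shape of a diffusion loop under those assumptions, not DALL·E’s actual architecture or API.

```python
import numpy as np

def encode_text(prompt: str) -> np.ndarray:
    # Hypothetical stand-in: derive a deterministic "conditioning vector"
    # from the prompt. A real system would use a learned text encoder.
    seed = sum(ord(c) for c in prompt)
    return np.random.default_rng(seed).standard_normal(16)

def denoise_step(latent: np.ndarray, cond: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: one small correction toward a prompt-derived
    # target. A real model would predict and subtract noise at each step.
    target = np.full(latent.shape, cond.mean())
    return 0.9 * latent + 0.1 * target

def generate_image(prompt: str, steps: int = 50, shape=(64, 64)) -> np.ndarray:
    cond = encode_text(prompt)                                 # prompt -> condition
    latent = np.random.default_rng(0).standard_normal(shape)  # start from pure noise
    for _ in range(steps):                                     # small changes, step by step
        latent = denoise_step(latent, cond)
    return latent  # a real pipeline would now decode and upscale this latent

image = generate_image("a red apple on a table")
```

The point of the sketch is only that every pixel value emerges from iterated, data-conditioned corrections: the output is wholly determined by the training-derived condition and the refinement rule, with no “understanding” anywhere in the loop.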

Consequently, if a generated image lacks an “ideal able-bodied” appearance, the generative technology that created it is dismissed as “stupid” AI. This critique of AI can be seen in the statistical data of the article Detecting Human Artifacts from Text-to-Image Models (Wang et al., 2025), which highlights how text-to-image generation models often produce images containing grotesque human bodies. The researchers annotated a dataset of 37,000 images to train Human Artifact Detection Models (HADM), in order to improve the diffusion model and correct errors in generated images. This implies that the technology lacks the “common sense” needed to produce a normative body image. However, on the internet, the negative comments toward the generative imaging process highlight a deeper collective expectation: technology is not only machines and devices but also the social, economic, and institutional force that should conform to and replicate societal norms of body appearance (Balsamo, 1997). These norms, often shaped by ableism and aesthetic ideals, are rarely questioned when critiquing and diagnosing AI’s flaws. Here lies a critical point: as a digital application without a “ground truth,” AI lacks lived experience, cultural context, and innate understanding. Instead, it relies on training data (images, texts, and patterns input into the system) to generate outputs. In its early development, AI operated without constraints, resulting in the generation of numerous body images considered “incorrect” by conventional standards. This allowed it to produce non-traditional, asymmetrical, or entirely unconventional body images that fall outside what society deems “normal.” These outputs were labelled as “errors,” as in the example from the previous section: the infamous struggle of early AI models to generate “correct” human hands. While this issue was frequently mocked on social media, in reality it reflects society’s deep discomfort with deviations from “normal” expectations, underscoring how we equate “difference” with “failure” (Hall, 2011). AI failures cannot be understood as accidents that we can simply fix or learn from; they need to be understood as complex social realities defined by the economic, social, and political relationships embedded within and around our technologies (Barassi, 2024). We should therefore understand the generated body image not as a technical failure but as a divergence from human expectations of normalization (Hall, 2011), one that reveals societal biases toward able-bodied standards.

What is the ideal body image that AI is supposed to generate? I believe such normative responses are strongly conditioned by the flood of body images in internet visual culture and by discussions of digital bodies. According to Guy Debord, in a society where representations hold significant power and aesthetic influence, people conform to the spectacle (Debord, 1977). The public needs to conform to the dominant spectacle of the normal body image in order to gain acceptance or validation, despite the illusion of having the freedom to generate any body image they desire. When this expectation cannot be met algorithmically by generative AI, the “stupid AI” lacking the “common sense” needed to produce a normative body image, the “smart” human can use other strategies in the generative process to obtain the “correct and ideal” body image based on the collective acknowledgment of normativity, through subsequent adjustments and the post-image manipulation of human-AI interaction. In the subsequent image processing, the decision-making regarding generative AI images involves human judgment, which is influenced by the naturalized notion of an able-bodied standard. Even when AI generates images of disabled or “non-ideal” bodies, individuals influenced by collective social values tend to treat these representations as abnormalities that need to be corrected when deciding how to use or refine the image. The intentionally designed bodily image representations have, in turn, intensified the enforcement of socially normative images. This perpetuates an ideal AI model that excludes diverse, dynamic, and non-normative bodies. Therefore, in human-AI interactions that use generative AI applications to create images of human-like bodies, the AI’s decision-making in selecting the pixels to generate the image, together with the subsequent post-image design, is a discursive process (Balsamo, 1997). It is influenced by the expectation that digital body representations should be enhanced to convey at least a conventional and “normal” appearance.

4. Discussion

Regarding normativity, my research is grounded in a feminist and disability studies framework to explore the implicit regulations that govern body-imaging processes toward a standard representation of the human body. Feminist scholars illustrate that feminist theory, in its “intersections with queer, postcolonial, critical race, and disability studies, offers technology studies a critique of knowledge production even when women using technology is not the object of study” (Shaw, 2014, p. 273). Feminism deconstructs social inequalities from the fundamental perspective of how biases are generated (de Beauvoir, 1949). This perspective provides a vital foundation for analysing how bodies are constructed, normalized, and performed within sociopolitical contexts. It emphasizes that generative bodily representation is not a neutral or natural process but a deeply constructed one. Feminist scholarship has explored how patriarchy perpetuates ideals of the body, often aligned with hegemonic norms such as whiteness, slenderness, youth, and, most importantly here, able-bodiedness (Wolf, 2015). N. Katherine Hayles (1999) points out that the enacted and the represented body are contingent productions mediated by technology, already entwined with the production of identity and inseparable from mainstream standards. I use feminism as a key framework in analysing generative AI images to deconstruct how social standards influence the logic behind generative body imaging. Using this framework, I interrogate which aspect is prioritized, what issue is raised, and what implications are ultimately produced.

4.1. Which aspect is prioritized

According to feminist technoculture scholar Anne Balsamo, the virtual body is neither simply a surface upon which the dominant narratives of Western culture are written, nor a representation of cultural ideals of beauty or sexual desire. It has been transformed into the very medium of cultural expression itself: manipulated, digitalized, and technologically constructed in virtual environments (Balsamo, 1997). The collective cultural expression of bodily representation often manifests by assigning value to bodies based on their adherence to, or deviation from, societal standards. In other words, during the process of generating body images, bodies that do not conform to mainstream ideals are suppressed by an unwritten yet explicit set of rules that dictate whose bodies are “worthy” of positive representation (Griffin et al., 2022). What is prioritized is the body as it is understood through scientific, philosophical, and aesthetic representations, a body that embodies cultural conceptions including norms of beauty, models of health, and ideals of physicality (Bordo, 2013). The default body representation operates as a set of practical rules that shape and regulate living bodies to conform, turning them into socially adapted entities and reinforcing the alignment of physicality with broader cultural expectations. Lisa Nakamura (2013) pointed out that the transformation of complex human self-identities into algorithmic formats inevitably simplifies and flattens the diversity of human experiences in cyberspace; the same holds when generative AI creates representations of bodies. This simplification reveals a default universal representation (de la Bellacasa, 2010) that is prioritized over other forms of bodily expression in order to portray the standard ideal body.

4.2. What issue is raised

In the text-to-image process of AI-generated human bodies, four distinct stages highlight the influence of societal values on decision-making for both AI and humans (Figure 3):

  • Thought transition from ideas to textual outputs (meaning-making from human thought to prompt text).

  • AI converts text into images (generating an image from the dataset into an actual image).

  • Evaluating the generated image (judging whether the image conforms to the social standard or not).

  • Looping feedback that prompts AI to correct the perceived “error” body in the image (further adjustment of the generated bodily image).

Figure 3. Cognitive processes in design decision-making of Generative AI bodily image

Each of these stages involves the circulation and transformation of meaning. Whether transitioning from text to image or refining one image into another, technology now transforms the body into nothing more than discourse (Balsamo, 1997). That is to say, the generative body process is a discursive practice. The creation of body images is imbued with specific value biases at every stage: in the initial conceptualization, thought is transformed into prompt text the AI can “understand”; in the AI’s traversal of its image databases to decide where each pixel should be positioned, the algorithmic framework plays its role; in the individual’s judgment of whether the image conforms to societal norms, social norms shape the decision; and in the uncritical modification of areas deemed “errors” or even “failures,” the expectation of the “normal” emerges. Here the issue is raised: the bias of able-bodiedness. The generative body process can be studied as a discursive construction, embedded with semantic, value, cultural, and ideological biases in every meaning-making step (Balsamo, 1997). As Wendy Chun (2005) aptly puts it, digital software lacks transparency, paradoxically concealing its operations and computational processes beneath a facade of simplicity and usability while in reality creating complex and often invisible systems of control. With just a simple click, people can generate the images they desire. However, what often goes unexamined is that the true force controlling the generative process is the societal standard of the able-bodied ideal.
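Read together, the four stages form a feedback loop in which a normative judgment, not the model alone, steers the output. The toy sketch below (in Python, with entirely hypothetical placeholder functions and no real model behind it) is one way to make that loop explicit: the conformity check in stage three is what drives the “correction” in stage four.

```python
import random

def to_prompt(idea: str) -> str:
    # Stage 1: meaning-making -- the idea becomes machine-readable prompt text.
    return idea

def model_generate(prompt: str) -> str:
    # Stage 2: stand-in for the text-to-image model; sometimes it produces
    # a body that deviates from the expected form (here, a tagged string).
    deviation = " [six-fingered hand]" if random.random() < 0.5 else ""
    return f"image of: {prompt}{deviation}"

def conforms_to_norms(image: str) -> bool:
    # Stage 3: the judgment step. Note that the "norm" is encoded by the
    # human/social side of the loop, not discovered by the model.
    return "[six-fingered hand]" not in image

def revise_prompt(prompt: str) -> str:
    # Stage 4: the feedback that treats difference as an error to be corrected.
    return prompt + ", anatomically correct hands"

def generative_loop(idea: str, max_rounds: int = 3) -> str:
    prompt = to_prompt(idea)
    image = model_generate(prompt)
    for _ in range(max_rounds):
        if conforms_to_norms(image):
            break
        prompt = revise_prompt(prompt)
        image = model_generate(prompt)
    return image

print(generative_loop("a person waving"))
```

Under these assumptions, the sketch shows where the able-bodied standard enters: it is written into the acceptance test and the revision rule, the two points where human judgment closes the loop.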

Disability studies is dedicated to creating a space to challenge compulsory able-bodiedness (McRuer & Bérubé, 2006). The field brings out the “ideology of ability,” critiquing how societies construct ability as the norm while viewing disability as a problem to be fixed; physical disability should not be seen as a “defect” but rather as a “difference” (Siebers, 2011). This research accordingly challenges the marginalization of disabled bodies in generative representations and questions society’s obsession with normalcy. When society labels some images as failures, it not only criticizes the technology that generated them as a failure but also repeatedly marks the visual body representation of the disabled group as “failure” or “fault.” This perspective is rooted in human cognition, where the “normal” body is socially defaulted. In addition, attempts to erase “errors” in generative body representation have become normalized because news outlets, social media, and even scholars have framed the production of “error” bodies as a failure of AI, without questioning how and why the able body is constructed and naturalized, or the implications of erasing disabled bodies from digital visual representation. In this context, the erasure of diverse bodily representations in AI-generated images reinforces societal expectations of normalization, denying visual engagement with the possibility of disabled bodies.

4.3. What implications are ultimately produced

According to disability studies scholar Tobin Siebers (2011, p. 4), “Disability is not a physical or mental defect but a cultural and minority identity.” The disabled community is a labelled group owing to physiological differences from the non-disabled group; it should instead be understood as part of the diversity of human bodily function. Siebers illustrates that disability triggers a societal fear of bodily fragility (Siebers, 2011), leading to efforts to erase potential chaos and change, with the expectation that disabled bodies should return to an idealized form of perfection. Disabled bodies are unstable and ever-changing (Hall, 2011); in AI-generated imagery, they are often perceived as unstable errors in the generative process. However, as living organisms, bodies are inherently dynamic, chaotic, and fragile. The fear of physical disability leads society to avoid representing disabled bodies (Siebers, 2011). Yet in the physical world, disability represents the embodied experiences and suffering of a marginalized group. Such experiences should neither be erased at the visual level nor repeatedly emphasized as defects in online body representations. Framing grotesque bodies as negative deviations from normalization and able-bodiedness contributes to the “continued domination and marginalization of people with disabilities” (Schalk, 2013). Even though technology offers new possibilities, we must remain aware that power is always embedded within digital technologies and the discourses surrounding them (Chun, 2006). Moreover, technology, especially the way algorithms generate outcomes, is often opaque, unexplainable, and proprietary (Burrell, 2016). Generative AI is a “black box” that is seldom questioned or deconstructed even though it is widely used (Latour, 1987). It often introduces forms of oppression, driven not by individual, unconscious syndromes but by social ideologies that are embodied. By privileging certain types of bodies, generative AI contributes to the erasure of underrepresented disability groups from digital visual narratives, further exacerbating their invisibility in cultural discourse. In Practices of Looking, Sturken and Cartwright use visuality to explain the invisibility of the disabled group in digital visual culture, illustrating that vision is shaped through social context and interaction: visuality calls our attention to how the visual is caught up in power relations that involve the structure of the visual field as well as the politics of the image (Sturken & Cartwright, 2023). The visual absence of disabled bodies further intensifies their marginalization, depriving them of autonomy and presence in the public sphere. Meanwhile, this exclusion intersects with other forms of oppression, such as racism and sexism, amplifying the invisibility of marginalized groups (Siebers, 2011). Visualization technologies therefore no longer simply mimic or represent reality; they virtually recreate it (Balsamo, 1997). The simulations created by generative artificial intelligence in turn replace the reality they are meant to represent, becoming more significant than the reality itself (Baudrillard & Glaser, 1994), which entrenches the able-bodied standard and renders the disabled group invisible. This leads to the state Baudrillard termed “hyperreality,” where the boundaries between the real and the artificial become increasingly blurred. The artificial here refers not only to the artificial intelligence that creates the generative body image but also to the artificial human choice to erase disability from digital visual culture.

5. Conclusion

This research aims to provoke reflective thinking about what is supposed to be “normal” and what is not, thereby challenging traditional social norms that privilege the able-bodied. While generative AI offers unprecedented creative potential, human-AI interaction’s inherent reliance on collective cultural biases leads to the normalization of body standards while marginalizing disabled bodies. The labelling of AI-generated body images as “errors” reflects a deeper societal discomfort with deviations from normative ideals and exposes the biases embedded in the collective human feedback loops that drive AI development. By examining the construction of generative body imaging from the perspective of feminist disability studies, we see how emerging technology shapes collective perceptions of the decision-making process through discursive practices steeped in cultural, ideological, and able-bodied values. Ultimately, the normalization of able-bodied standards and the erasure of disabled bodies within generative AI visual culture represent a continuation of the stigmatization and marginalization of disabled groups. Amid the rapid development of AI, many scholars have already raised concerns and urged caution regarding algorithms. In A Cyborg Manifesto, Donna J. Haraway calls for the reconstruction of identity, governed no longer by naturalism and taxonomy but by affinity; her theory challenges the neutrality of technology (Haraway, 2016) and brings attention to how human decisions and cultural contexts shape social realities. As further recommendations for AI, from an engineering design perspective, scholars have called for increasing algorithmic transparency and developing fair decision rules (Weyerer & Langer, 2019), and for diversifying the underrepresented groups involved in AI algorithm development (Collett & Dillon, 2019). Moreover, from the users’ perspective, the stigmatization of disabled bodies is rooted in societal expectations of the able-bodied ideal; society needs to reevaluate human-AI interactions, encourage a shift in perception, and advocate for an approach that neither devalues nor excludes disabled bodies in the imaging process. Users could make disabled people visible in more AI-generated digital visual patterns, rather than simply erasing them. People could embrace and accept the uncertainty of bodily representation, rather than using prompts as a feedback loop to reinforce the notion that “disability equals error” in algorithmic inputs.

References

Balsamo, A. (1997). Technologies of the gendered body: Reading cyborg women. Duke University Press.
Balsamo, A. M. (2011). Designing culture: The technological imagination at work. Duke University Press.
Barassi, V. (2024). Toward a theory of AI errors: Making sense of hallucinations, catastrophic failures, and the fallacy of generative AI. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.ad8ebbd4
Baudrillard, J., & Glaser, S. F. (1994). Simulacra and simulation. University of Michigan Press.
Beauvoir, S. de. (2015). The second sex. Vintage Classic. (Original work published 1949)
Bordo, S. (2013). The body and the reproduction of femininity. In Unbearable weight: Feminism, Western culture, and the body. University of California Press.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Chayka, K. (2023, March 10). The uncanny failures of A.I.-generated hands. The New Yorker. https://www.newyorker.com/culture/rabbit-holes/the-uncanny-failures-of-ai-generated-hands
Chun, W. H. K. (2005). On software, or the persistence of visual knowledge. Grey Room, 18, 26-51. https://doi.org/10.1162/1526381043320741
Chun, W. H. K. (2006). Control and freedom: Power and paranoia in the age of fiber optics. MIT Press.
Collett, C., & Dillon, S. (2019). AI and gender: Four proposals for future research.
Crawford, K. (2022). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Debord, G. (1977). Society of the spectacle. Black and Red.
de la Bellacasa, M. P. (2010). Matters of care in technoscience: Assembling neglected things. Social Studies of Science, 41(1), 85-106. https://doi.org/10.1177/0306312710380301
Diefenbach, S., Christoforakos, L., Ullrich, D., & Butz, A. (2022). Invisible but understandable: In search of the sweet spot between technology invisibility and transparency in smart spaces and beyond. Multimodal Technologies and Interaction, 6(10), 95. https://doi.org/10.3390/mti6100095
Dixit, P. (2023, January 31). AI image generators keep messing up hands. Here’s why. BuzzFeed News. https://www.buzzfeednews.com/article/pranavdixit/ai-generated-art-hands-fingers-messed-up
Edwards, P. (2023). Why AI art struggles with hands [Video]. YouTube. https://www.youtube.com/watch?v=24yjRbBah3w
Fang, X., Che, S., Mao, M., Zhang, H., Zhao, M., & Zhao, X. (2024). Bias of AI-generated content: An examination of news produced by large language models. Scientific Reports, 14, 5224. https://doi.org/10.1038/s41598-024-55686-2
Griffin, M., Bailey, K. A., & Lopez, K. J. (2022). #BodyPositive? A critical exploration of the body positive movement within physical cultures taking an intersectionality approach. Frontiers in Sports and Active Living, 4. https://doi.org/10.3389/fspor.2022.908580
Gwagwa, A., Kazim, E., Kachidza, P., Hilliard, A., Siminyu, K., Smith, M., & Shawe-Taylor, J. (2021). Road map for research on responsible artificial intelligence for development (AI4D) in African countries: The case study of agriculture. Patterns, 2(12). https://doi.org/10.1016/j.patter.2021.100381
Hall, K. Q. (Ed.). (2011). Feminist disability studies. Indiana University Press.
Hall, P., & Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review, 47(7), 1264-1279. https://doi.org/10.1108/oir-08-2021-0452
Haraway, D. J. (2016). A cyborg manifesto. In Manifestly Haraway (pp. 3-90). University of Minnesota Press. https://doi.org/10.5749/minnesota/9780816650477.003.0001
Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.
Hristov, K. (2016). Artificial intelligence and the copyright dilemma.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.
McRuer, R., & Bérubé, M. (2006). Crip theory: Cultural signs of queerness and disability. New York University Press.
Nakamura, L. (2013). Race in the construct and the construction of race: The “consensual hallucination” of multiculturalism in the fictions of cyberspace. In Cybertypes: Race, ethnicity, and identity on the internet (pp. 61-85). https://doi.org/10.4324/9780203953365-3
Ndaka, A. K., Ratemo, H. A., Oppong, A., & Majiwa, E. B. (2024). Artificial intelligence (AI) onto-norms and gender equality: Unveiling the invisible gender norms in AI ecosystems in the context of Africa. In Trustworthy AI (pp. 207-232). https://doi.org/10.1007/978-3-031-75674-0_10
Pennefather, P. P. (2023). The art of the prompt. In Creative prototyping with generative AI: Augmenting creative workflows with generative AI. Apress.
Schalk, S. (2013). Metaphorically speaking: Ableist metaphors in feminist writing. Disability Studies Quarterly, 33(4). https://doi.org/10.18061/dsq.v33i4.3874
Shaw, A. (2014). The internet is full of jerks, because the world is full of jerks: What feminist theory teaches us about the internet. Communication and Critical/Cultural Studies, 11(3), 273-277. https://doi.org/10.1080/14791420.2014.926245
Siebers, T. (2001). Disability in theory: From social constructionism to the new realism of the body. American Literary History, 13(4), 737-754. https://doi.org/10.1093/alh/13.4.737
Siebers, T. (2011). Disability theory. University of Michigan Press.
Silverman, D. (2024). Burying the black box: AI image generation platforms as artists’ tools in the age of Google v. Oracle. Federal Communications Law Journal, 76(1), 115+. https://link-gale-com.libproxy.utdallas.edu/apps/doc/A786227521/GBIB?u=txshracd2602&sid=bookmark-GBIB&xid=dc05db15
Sturken, M., & Cartwright, L. (2023). Practices of looking: An introduction to visual culture. Oxford University Press.
Thelwall, M. (2018). Gender bias in machine learning for sentiment analysis. Online Information Review, 42(3), 343-354. https://doi.org/10.1108/oir-05-2017-0153
TopLobsta [@TopLobsta]. (2023, March 31). I find it odd that every iteration of AI art programs always seem to accidentally generate an accurate portrayal of the user prompt with one significant error. The hands. Specifically the fingers [Post]. X. https://twitter.com/TopLobsta/status/1641843201878179841
Wang, K., Zhang, L., & Zhang, J. (2025). Detecting human artifacts from text-to-image models. arXiv. https://arxiv.org/abs/2411.13842
Weyerer, J., & Langer, P. F. (2019). Garbage in, garbage out: The vicious cycle of AI-based discrimination in the public sector. Proceedings of the 20th Annual International Conference on Digital Government Research, 509-511. https://doi.org/10.1145/3325112.3328220
Why AI-generated hands are the stuff of nightmares, explained by a scientist. (n.d.). BBC Science Focus Magazine. https://www.sciencefocus.com/future-technology/why-ai-generated-hands-are-the-stuff-of-nightmares-explained-by-a-scientist
Wolf, N. (2015). The beauty myth: How images of beauty are used against women. Vintage Books.
Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it’s time to make it fair. Nature, 559(7714), 324-326. https://doi.org/10.1038/d41586-018-05707-8