1. Introduction
Artificial intelligence (AI) technology often reinforces societal biases, resulting in discriminatory outcomes across various domains (Halberstam, Reference Halberstam1991). For example, Amazon’s AI recruitment tool, designed to automate hiring processes, was found to disadvantage women by favouring male-dominated language due to biased training data, ultimately leading to its discontinuation (Andrews & Bucher, Reference Andrews and Bucher2022). In healthcare, AI applications, such as gynaecologic cancer detection, operate within binary gender frameworks, often misgendering transgender and non-binary patients (Taylor & Bryson, Reference Taylor and Bryson2016). Additionally, generative AI raises legal concerns, particularly in the proliferation of non-consensual sexualised deepfakes, which constitute a form of gender-based violence (Holliday, Reference Holliday2021). Given these challenges, the regulation of AI through a gender-sensitive approach is imperative.
This paper critically examines the AI Act (European Commission, 2024) through a feminist lens, employing feminist legal methods to analyse its interaction with EU law and assess its implications for gender inclusivity, non-discrimination and accountability. The AI Act, intended to mitigate risks associated with high-risk AI systems, includes provisions allowing for the processing of “special categories of personal data” under specific conditions to prevent algorithmic discrimination (Scientific Research Committee of the European Parliament, 2025). However, gender is not classified as a “special category” under Article 9 (1) General Data Protection Regulation (GDPR) (European Commission, 2016), creating legal ambiguities that may impede efforts to address systemic gender inequalities.
The absence of explicit references to, inter alia, gender equality in early drafts of the AI Act – largely due to opposition from Member States such as Poland – underscores broader gaps in AI governance (Stolton, Reference Stolton2020). A gender-responsive text analysis reveals that while the AI Act references “non-discrimination” in multiple provisions, explicit references to “gender equality” appear only twice in the Recitals (Recitals 27 and 48 AI Act) and once in Article 95(2)(e) AI Act, which concerns codes of conduct for voluntary application. Moreover, while the AI Act acknowledges gender-based discrimination, it fails to account for inclusive gender identities such as transgender, non-binary, intersex and gender non-conforming people, highlighting the need for more inclusive regulatory frameworks.
A de lege lata interpretation of the AI Act recognises that the AI Act incorporates human rights protections, yet its approach remains largely formalistic, treating algorithmic bias as a technical rather than structural issue (Frischhut, Reference Frischhut2022; Tzimas, Reference Tzimas2021). A de lege ferenda analysis, however, reveals deeper limitations in addressing gendered power imbalances within AI governance (Stierle, Reference Stierle2021). Drawing from Miranda Fricker’s theory of hermeneutical injustice (Fricker, Reference Fricker2007), MacKinnon’s dominance theory (MacKinnon, Reference MacKinnon2013) and intersectionality as developed by Anna Julia Cooper (Cooper, Reference Cooper1988) and Kimberlé Crenshaw (Crenshaw, Reference Crenshaw1991), this paper argues that AI regulation assumes legal neutrality while overlooking systemic gendered and racialised biases. Hermeneutical injustice is particularly relevant in AI governance, as marginalised groups often lack the epistemic resources to contest discriminatory AI systems (Rafanelli, Reference Rafanelli2022). MacKinnon’s critique of formal equality illustrates how algorithmic decision-making reflects male-dominated norms embedded in legal and data structures (Bird‐Pollan, Reference Bird‐Pollan2020); her dominance theory is equally pertinent, as such decision-making frequently reflects male-dominated norms rooted in biased training data and the legal system’s implicit androcentrism (Doh, Canali & Karagianni, Reference Doh, Canali and Karagianni2024). Rather than treating gender bias as an incidental flaw, a substantive equality approach necessitates actively restructuring AI policies to dismantle male dominance in data collection, model training and regulatory oversight. Without such structural interventions, the AI Act risks reinforcing, rather than mitigating, existing inequalities by treating AI bias as a technical issue rather than a deeply entrenched social and legal challenge.
Yet, hermeneutical approaches from decolonial scholars argue that law should be interpreted with historical consciousness, acknowledging colonial violence, racial capitalism and epistemic injustice. Decolonial theorists like Aníbal Quijano (Quijano, Reference Quijano2000) and Walter Mignolo (Mignolo, Reference Mignolo2012) argue that modern law is deeply rooted in coloniality – the ongoing dominance of Western epistemologies, legal structures and institutions (de Sousa Santos, Reference de Sousa Santos2024). Legal hermeneutics within colonial and postcolonial contexts often interprets laws through Eurocentric frameworks, marginalising Indigenous, African and non-Western jurisprudence. AI systems have been shown to disproportionately misclassify racialised and gender-diverse individuals, reinforcing structural inequalities. A decolonial feminist approach to AI law demands that regulatory frameworks centre marginalised identities rather than treating them as afterthoughts (Ricaurte & Zasso, Reference Ricaurte, Zasso and Cebral-Loureda2023).
The European Parliament’s LIBE and FEMM Committees played a pivotal role in shaping amendments to the AI Act, particularly advocating for transparency, privacy protection and anti-discrimination measures (Scientific Research Committee-European Parliament, 2024). Their contributions emphasised the necessity for AI systems to be interpretable, especially in high-risk domains such as law enforcement and social services, to ensure compliance with the GDPR and safeguard fundamental rights. A primary focus of these amendments was mitigating biases in AI algorithms, particularly those that could lead to discrimination based on gender, race or other protected characteristics in hiring, credit scoring and surveillance technologies. The European Parliament introduced additional requirements for high-risk AI systems, including provisions for human oversight, transparency, non-discrimination and social responsibility (Scientific Research Committee-European Parliament, 2024). However, these amendments fell short of addressing the deeper social construction of gender, raising concerns about the effectiveness of gender equality measures in AI governance.
This paper critically examines the AI Act across three key stages: (1) pre-market regulation during the design phase for product manufacturers and providers, (2) the responsibilities of AI providers and (3) the post-release obligations of deployers, aligning with the categorisation of responsibilities between product manufacturers,Footnote 1 deployersFootnote 2 and providers.Footnote 3 While the AI Act requires bias testing (Article 10 (2) (g) AI Act) and human oversight (Article 14 (2) AI Act), it does not mandate intersectional gender audits (Article 17 AI Act) or diverse AI development teams (Recital 165 AI Act). In the post-release phase, providers must monitor AI performance (Article 72 AI Act), but the AI Act relies heavily on self-regulation, lacking strong redress mechanisms for algorithmic discrimination. Without robust structural interventions, the AI Act risks reinforcing – rather than mitigating – existing gendered inequalities in AI governance. This paper argues for feminist legal interventions that emphasise intersectionality, accountability and the dismantling of structural biases in AI regulation.
2. A feminist reading of the AI Act as a pre-market regulation
In the pre-market design phase, under Article 6 AI Act, high-risk obligations apply to AI systems classified as a “safety component” under Annex I, Section A, or as a “high-risk AI system” under Annex III. Developers of such systems must adhere to a series of regulatory requirements to ensure compliance. These obligations include establishing and implementing risk management processes (Article 9 AI Act) and using high-quality training, validation and testing data (Article 10 AI Act). Additionally, systems must maintain transparency and provide user information (Article 13 AI Act) and integrate human oversight measures (Article 14 AI Act). Further, developers must establish a quality management system (Article 17 AI Act).
General Purpose AI (GPAI) models are subject to specific obligations under Article 53 AI Act. Developers must create and maintain technical documentation and provide it to the AI Office upon request. Additionally, they must ensure that providers integrating AI models have access to necessary documentation while balancing transparency with intellectual property protection. Furthermore, developers must publish a publicly available summary of the AI model’s training data using a standardised template provided by the AI Office. If a GPAI model functions as, or is integrated into, a high-risk AI system, additional obligations under Recital 85 may apply, either directly or indirectly. However, a key responsibility is to avoid exploiting vulnerabilities, which constitutes a prohibited practice (Article 5 AI Act). A feminist perspective on the definition of “vulnerabilities” and the interpretation of this provision is essential.
2.1. Vulnerability in prohibited practices: insights from feminism
Article 5 of the AI Act outlines AI practices that pose unacceptable risks to safety and fundamental rights, identifying specific AI systems and techniques that are banned outright due to their potential to cause harm. These include social scoring AI systems (Article 5(1)(c)), which evaluate or categorise individuals based on their social behaviour or personal characteristics, as well as manipulative techniques (Articles 5(1)(a) and (b)) that exploit individual vulnerabilities, such as those affecting children, in order to distort behaviour (Longo, Reference Longo2023). The protection of vulnerability is further enshrined in Recital 110 of the AI Act. However, what is considered “vulnerable” under this provision?
In Computer Science, vulnerability refers to security flaws, glitches or weaknesses in software code that can be exploited by attackers (Praveen Kumar, Reference Praveen Kumar2022). In contrast, in the Social Sciences and Gender Studies, vulnerability is framed within the context of self-determination, as it diminishes an individual’s capacity and implies a reliance on others for support (Longo, Reference Longo2023). For example, the portrayal of Indigenous women as naked frequently intersects with narratives of vulnerability, reinforcing harmful stereotypes and colonial power dynamics. Such depictions have historically been used to strip Indigenous women of their dignity, autonomy and humanity, framing them as inherently vulnerable and sexualised beings (Levine, Reference Levine2008). This intersection of nudity and vulnerability is deeply rooted in colonialism, exoticism and patriarchy, shaping contemporary perceptions and treatment of Indigenous women. These critical concerns extend to the design and deployment of AI technologies, raising important questions about their impact on marginalised communities.
With the proliferation of AI technologies designed by global tech monopolies and deployed worldwide, the concept of decolonisation in AI governance has become increasingly relevant. Decolonisation entails a critical, evidence-informed appraisal of colonial histories and their entanglements with the present, particularly in relation to power and gender imbalances. Addressing these historical legacies is crucial for exposing oppressive AI systems and advancing intersectional approaches to AI ethics and governance (Rachel, Reference Rachel2021; Shakir, Reference Shakir2020; Siapera, Reference Siapera2022).
Oppression, broadly defined as the exercise of power in a burdensome, cruel or unjust manner, has historically been used to subordinate marginalised groups, including women and gender minorities. AI systems are embedded in historical and systemic forms of oppression, often reflecting biases present in their training data. In Data Feminism, Catherine D’Ignazio and Lauren F. Klein highlight the ways in which power structures influence data collection, presentation and interpretation, thereby shaping AI-driven decision-making (D’Ignazio, Reference D’Ignazio2020). This analysis aligns with Patricia Hill Collins’ matrix of domination, a framework that examines interlocking systems of oppression – white supremacy, patriarchy, capitalism and settler colonialism – across structural, disciplinary, hegemonic and interpersonal domains (Costanza-Chock, Reference Costanza-Chock2020; Hill Collins, Reference Hill Collins2009). Within this context, intersectionality serves as a methodological approach that helps uncover the multifaceted layers of discrimination embedded in AI systems. As many AI systems reflect racial, gender and socioeconomic biases due to flawed data collection practices, Feminist Data Set by Caroline Sinders seeks to counteract these biases by ensuring diverse and representative data sources that prioritise fairness, intersectionality and accountability (Sinders, Reference Sinders2020).
In this regard, an analysis of Article 5(1)(e-h) AI Act reveals the need for a feminist perspective, as this provision prohibits biometric categorisation systems that rely on sensitive characteristics, such as political and religious beliefs, race or sexual orientation, in public spaces. While these provisions emphasise the importance of privacy, dignity and non-discrimination, they notably exclude gender from the list of protected characteristics, raising concerns about how gender will be safeguarded in AI-driven assessments of social behaviour and gender performativity (Butler, Reference Butler1990, Reference Butler2024). Similarly, under Articles 4(14) and 9 of the GDPR (European Parliament & Council of the European Union, 2016), “biometric data” are defined as personal data derived from technical processing of physical, physiological or behavioural characteristics to uniquely identify an individual. However, gender is not explicitly included in these provisions, meaning it is not recognised as a special category of protected data. In their Joint Opinion 5/2021 (European Data Protection Supervisor (EDPS) & European Data Protection Board (EDPB), 2021; Malgieri & Fuster, Reference Malgieri and Fuster2022), the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) recommended a ban on AI biometric categorisation systems that classify individuals based on gender. Nevertheless, despite these recommendations, this provision does not appear to have been incorporated into the AI Act, which was formally adopted in 2024.
To conclude, vulnerability under the AI Act refers to the potential risks and harms that individuals, groups or even entire societies may face due to the development, deployment and use of AI systems, particularly those involving sensitive data or high-risk applications. Current literature highlights the limited conceptualisation of vulnerability within the Act and calls for a more comprehensive approach that addresses the vulnerabilities of all stakeholders in AI, including developers and organisations (Galli & Novelli, Reference Galli and Novelli2024). Issues such as gender stereotyping embedded in AI systems (Doh et al., Reference Doh, Canali and Karagianni2024) and the lack of regulation surrounding the harmful development and deployment of generative AI technologies – such as the creation of non-consensual sexualised deepfakes (Karagianni, Reference Karagianni2025) – should be properly addressed within the AI Act. While Article 5(b) of the EU Gender-Based Violence (GBV) Directive (European Parliament and Council, 2024) criminalises the production, manipulation and dissemination of non-consensual intimate or altered material, it is essential that this gendered harm be acknowledged within the AI Act to ensure that accountability provisions are explicitly included.
3. The responsibilities of AI providers
The AI Act introduces distinct but interconnected processes to regulate AI systems in the European Union. The harmonisation process (Articles 1 (2), 4 and 52 AI Act) aims to create a unified regulatory framework, ensuring cooperation between national authorities and consistency across EU member states by setting common rules, definitions and risk classifications for AI systems, thus preventing fragmentation in national regulations. The conformity assessment process, by contrast, verifies compliance with the AI Act’s legal and technical requirements (Articles 6, 16, 30 and 43 AI Act), ensuring that AI systems, particularly those deemed high-risk, meet those requirements either through internal self-assessment (Article 20 AI Act) or third-party evaluation by a Notified Body (Article 43 AI Act). Successful completion of this assessment may result in the issuance of a CE marking, enabling the AI system’s market entry within the EU. In turn, the standardisation process (Articles 40 and 41 AI Act) involves the development of technical standards by recognised European bodies, providing guidelines for AI design, risk management and performance, which, although not legally binding, facilitate conformity assessments. Together, these processes ensure regulatory coherence, technical compliance and legal market access for AI systems across the EU. In this section, the harmonisation requirement, the conformity assessment and the standardisation process will be examined to address gender stereotyping and male dominance in AI, ensure compliance with gender equality and non-discrimination in standards and promote accountability for gendered harms.
3.1. Exploring feminist perspectives on the EU harmonisation requirement
The AI Act aims to create a harmonised legal framework across EU Member States, ensuring consistency in AI regulations (Articles 1 (2), 4 and 52 AI Act). This involves setting uniform rules, definitions and risk-based classifications for AI systems to prevent fragmentation across different national regulations. It ensures that AI providers and deployers operate under the same legal requirements regardless of the country within the EU. Article 6 of the AI Act addresses the requirements for high-risk AI systems to ensure they comply with specific standards for safety, transparency and accountability. By establishing common standards for high-risk AI systems, Article 6 of the AI Act aims to harmonise regulations across EU Member States, promoting a unified approach to AI governance while protecting fundamental rights. Harmonisation, in this context, refers to aligning national laws with EU law to create a single legal framework across the European Union (Klamert, Reference Klamert2015). Significant pieces of EU legislation, such as the GDPR (European Commission, 2016), the Digital Services Act (European Parliament & Council of the European Union, 2022a), the Digital Markets Act (European Parliament & Council of the European Union, 2022b) and the EU Equality and Non-Discrimination Law (European Union, 2000), aim to regulate data use, online platforms, competition and human rights protection. Read alongside these laws, Article 6 AI Act seeks to ensure consistency, interoperability and fairness. When considering this legislation from feminist perspectives, it becomes crucial to examine how these laws impact the inclusion of a gender equality and non-discrimination angle within the AI Act.
A feminist perspective on Article 6 AI Act provides critical insight into the potential and pitfalls of EU harmonisation efforts. Feminist scholars and activists often evaluate how such legislation addresses issues of gender equality, inclusivity and intersectionality (Anagnostou & Millns, Reference Anagnostou and Millns2013). Women, particularly in marginalised communities, face specific risks related to privacy breaches and surveillance, as in cases of domestic violence where their data can be weaponised. Feminist scholars argue that the GDPR needs stronger protections that explicitly account for these vulnerabilities (Malgieri & Fuster, Reference Malgieri and Fuster2022) – as was explained above – particularly around location tracking, personal data exposure and consent (Sovacool, Furszyfer-Del Rio & Martiskainen, Reference Sovacool, Furszyfer-Del Rio and Martiskainen2021). They also emphasise that data protection laws, such as the GDPR, should be interpreted with a focus on bodily autonomy, allowing individuals – particularly women – to have more control over their personal data, especially in cases of image-based sexual abuse (Rigotti & McGlynn, Reference Rigotti and McGlynn2022) and the generation of non-consensual sexualised deepfakes (Karagianni & Doh, Reference Karagianni and Doh2024).
Regarding the harmonisation of the AI Act with the EU Equality and Non-Discrimination Law,Footnote 4 the key feminist concept of intersectionality deserves special attention. The concept of intersectionality, first introduced by Anna Julia Cooper in 1892 (Cooper, Reference Cooper1988) and later popularised by American scholar Professor Kimberlé Crenshaw (Crenshaw, Reference Crenshaw1991), examines how different aspects of a person’s identity intersect. Crenshaw used the concept to describe how Black women experience discrimination differently from both white women (who face sexism but not racism) and Black men (who face racism but not sexism). She highlighted how traditional feminist and anti-racist movements had often failed to fully account for the unique challenges faced by Black women, who experience both sexism and racism simultaneously in ways distinct from those faced by men of colour or white women. Intersectionality refers to how different forms of discrimination (e.g., gender, race, class) intersect and compound one another. Therefore, EU harmonisation efforts must not only focus on gender equality in isolation but should also address how different forms of marginalisation interact (Xenidis, Reference Xenidis2018). For instance, a woman of colour may face discrimination that is both gendered and racialised in online spaces, requiring specific protections. From a feminist perspective, harmonisation should not lead to homogenisation – a “one-size-fits-all” approach that fails to account for the diverse social, economic and cultural contexts of EU Member States. Instead, harmonisation should actively prioritise gender equality, inclusivity and the protection of marginalised groups, extending beyond the goal of merely ensuring market efficiency and consistency. This requires embedding feminist principles, such as intersectionality, inclusivity and accountability, into the development, implementation and monitoring of EU laws.
With regard to harmonising the AI Act with EU Equality and Non-Discrimination Law,Footnote 5 the contributions of González-Salzberg (Salzberg, Reference Salzberg2019) and Guyan (Guyan, Reference Guyan2022) are particularly significant. More specifically, González-Salzberg critiques the limitations of human rights law in fully addressing the complexities of sexual and gender identities. The existing framework often operates under binary categories of male/female and heterosexual/homosexual, which can marginalise individuals who do not conform to these categories. González-Salzberg argues that while human rights law is evolving, it still tends to reinforce these binary and normative understandings, thus overlooking the lived experiences of queer and trans individuals (Salzberg, Reference Salzberg2019). González-Salzberg explores the process of “queering” human rights law, which involves examining how legal norms around human rights can be expanded and challenged through queer theory. Queer theory critiques traditional norms related to gender, sexuality and identity, which are often heteronormative (focused on heterosexuality as the norm) and cisnormative (focused on cisgender identities). In this context, “queering” human rights law means questioning and altering the existing legal frameworks to be more inclusive of diverse and non-binary gender identities, as well as sexual orientations (Salzberg, Reference Salzberg2019). To this extent, Guyan also emphasises the importance of critically analysing how data about gender, sex and sexuality are gathered and used (Guyan, Reference Guyan2022). He argues that the collection of such data is often limited and standardised, reflecting heteronormative and cisnormative assumptions that overlook the complexities of queer identities (Guyan, Reference Guyan2022). By queering data, Guyan seeks to challenge these conventional frameworks and encourage more inclusive data practices.
These contributions clearly demonstrate that gender equality should not be understood as solely concerning women in isolation or reinforcing exclusionary gender norms. Instead, a feminist analysis seeks to dismantle patriarchal structures that affect people of all genders, including men, non-binary and gender non-conforming people. Various feminist perspectives, such as intersectional feminism and queer feminism, emphasise that gender justice is deeply interconnected with factors such as race, class, sexuality and disability (Ahmed, Reference Ahmed1996; Delmar, Reference Delmar2018; Lewis, Reference Lewis2025). These perspectives highlight how systems of power shape experiences differently across social groups and advocate for a more inclusive approach that extends beyond women’s issues alone.
3.2. Rethinking conformity assessments in the context of gender equality
Under the AI Act, high-risk AI systems (Article 6 AI Act) require a conformity assessment to ensure compliance with safety and ethical standards, involving the review of documentation, risk management processes (Article 9 AI Act), data governance (Article 10 and Annex VII AI Act) and technical measures (Articles 9 and 13 and Annex IV AI Act). In some cases, the assessment must be carried out by a Notified Body (Article 43 AI Act), an independent third-party organisation designated by EU Member States. For low-risk AI systems, a self-assessment suffices (Article 20 AI Act), where providers verify that the system meets basic requirements such as data quality and transparency. The conformity assessment process is described in Chapter III AI Act and involves several steps: risk management, where providers demonstrate a risk assessment; documentation, including technical records of compliance with the AI Act; testing, to evaluate adherence to required standards; and audit trails, ensuring accountability and human oversight. If successful, the AI system receives a CE marking – a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III (Article 3 point 24 AI Act), signifying compliance with EU regulations, and can be marketed within the EU. For high-risk systems, the provider must also make the assessment and documentation available to relevant authorities.
Article 6 AI Act focuses on ensuring safety, transparency and accountability in AI technologies, while Article 43 AI Act sets out the conformity assessment procedures, including evaluation by Notified Bodies designated by EU Member States. This dual approach aims to promote responsible AI development and use while protecting the rights and interests of individuals. Annex III, point 4 (a) of the AI Act mandates that AI systems used for targeted job advertisements, application filtering and candidate evaluation undergo a conformity assessment. This requirement arises from the well-documented risks of algorithmic bias, particularly when AI models are trained on historically biased datasets, which can systematically disadvantage women and marginalised groups. For instance, automated resume filtering systems may penalise career gaps, disproportionately affecting women, particularly those who have taken maternity leave (Ajunwa, Reference Ajunwa2019). Similarly, AI-driven video interviews, which analyse facial expressions, speech patterns or other biometric data, may introduce bias against candidates with disabilities, accents or non-Western communication styles, further exacerbating barriers to employment (Biswas et al., Reference Biswas, Jung, Unnam, Yadav, Gupta and Gadiraju2024).
As was explained above, conformity assessment refers to the evaluation of AI systems to determine whether they meet specific standards, regulations and ethical guidelines before deployment or use (O’Connor & Liu, Reference O’Connor and Liu2024). These assessments typically aim to ensure that AI systems are safe, reliable, fair and free from bias. However, from a feminist perspective, issues such as gender bias, inclusivity and power structures must be critically examined, particularly in light of systemic inequalities and the risk of reinforcing these disparities if assessments are not designed through an intersectional lens. One of the most pressing feminist concerns regarding AI is its potential to perpetuate gender biases through biased algorithms and training data. AI systems are often trained on historical datasets that reflect existing social inequalities (Keyes, Reference Keyes2018). Consequently, conformity assessments should mandate comprehensive bias audits that extend beyond detecting overt discrimination to uncover subtle, structural biases that disproportionately affect women – particularly women of colour, LGBTQIA+ people and other marginalised groups (Kobayashi & Nakao, Reference Kobayashi and Nakao2020; Magee, Ghahremanlou, Soldatic & Robertson, Reference Magee, Ghahremanlou, Soldatic and Robertson2021). For example, AI systems used in recruitment or healthcare may disadvantage women by underrepresenting their experiences or needs in training datasets. In recruitment, AI-driven hiring tools may favour male candidates due to past hiring patterns (Di Stasio & Larsen, Reference Di Stasio and Larsen2020), while in healthcare, AI models trained primarily on male-centric data may fail to adequately diagnose or treat conditions that disproportionately affect women (Tan & Benos, Reference Tan and Benos2025). These issues underscore the necessity of rigorous, intersectional conformity assessments (Di Stasio & Larsen, Reference Di Stasio and Larsen2020; Tan & Benos, Reference Tan and Benos2025) to ensure that AI systems do not reinforce existing inequalities but instead promote fair and equitable outcomes.
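By way of illustration, the following minimal Python sketch shows one way an outcome-level bias audit could be operationalised within a conformity assessment: it computes selection rates by gender for a set of hiring decisions and flags disparities using the “four-fifths” rule of thumb. The records, field names and threshold are hypothetical assumptions, not requirements of the AI Act, and a meaningful audit would use real, intersectionally disaggregated data.

```python
# A minimal sketch of an outcome-level bias audit, assuming a hypothetical
# list of hiring decisions produced by an AI screening system.
# Records, field names and the 0.8 threshold are illustrative only.
from collections import defaultdict

decisions = [
    {"gender": "woman", "selected": True},
    {"gender": "woman", "selected": False},
    {"gender": "woman", "selected": False},
    {"gender": "man",   "selected": True},
    {"gender": "man",   "selected": True},
    {"gender": "man",   "selected": False},
]

# Selection rate per gender group.
totals, selected = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["gender"]] += 1
    selected[record["gender"]] += record["selected"]

rates = {g: selected[g] / totals[g] for g in totals}
print("selection rates:", rates)

# Disparate-impact ratio: lowest rate divided by highest rate.
# The 0.8 ("four-fifths") threshold is a common heuristic, not an AI Act rule.
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", round(ratio, 2), "below 0.8:", ratio < 0.8)
```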
In this context, intersectional data analysis in AI conformity assessments is essential. Gender bias should not be examined in isolation; rather, assessments must account for the ways in which gender intersects with race, class, disability and other identity factors, ensuring that AI systems do not disproportionately harm marginalised communities (Nativi & Nigris, Reference Nativi and Nigris2021). To achieve this, conformity assessments should be conducted by diverse teams that reflect a range of genders, races and social backgrounds. A more inclusive group of auditors enhances multi-perspective evaluation, reducing the risk of bias and increasing the fairness and accountability of AI systems (Henriksen, Enni & Bechmann, Reference Henriksen, Enni and Bechmann2021). Moreover, these assessments should incorporate input from the communities most affected by AI technologies, including women of colour, women with disabilities and other underrepresented groups. Their lived experiences and expertise can help identify hidden biases that might otherwise go unnoticed. Furthermore, there is a pressing need for independent oversight bodies to ensure that AI conformity assessments are free from conflicts of interest and genuinely committed to addressing bias and harm (Calvi & Kotzinos, Reference Calvi and Kotzinos2023). These bodies should be empowered to enforce compliance, set clear accountability standards and hold AI developers responsible for violations. By implementing robust, intersectional and community-informed oversight, AI conformity assessments can contribute to the development of fairer and more equitable AI systems that actively challenge – rather than reinforce – existing structural inequalities.
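A minimal sketch of such intersectional disaggregation is given below: instead of reporting a single aggregate error rate, it reports error rates per intersecting gender-and-race subgroup, which is where compounded harms typically become visible. The records and group labels are hypothetical and purely illustrative.

```python
# A minimal sketch of intersectional disaggregation, assuming hypothetical
# audit records that pair a model decision with the correct outcome.
from collections import defaultdict

records = [
    {"gender": "woman", "race": "Black", "predicted": 0, "actual": 1},
    {"gender": "woman", "race": "Black", "predicted": 1, "actual": 1},
    {"gender": "woman", "race": "white", "predicted": 1, "actual": 1},
    {"gender": "man",   "race": "Black", "predicted": 1, "actual": 1},
    {"gender": "man",   "race": "white", "predicted": 0, "actual": 0},
    {"gender": "man",   "race": "white", "predicted": 1, "actual": 1},
]

errors, counts = defaultdict(int), defaultdict(int)
for r in records:
    key = (r["gender"], r["race"])          # intersecting identity axes
    counts[key] += 1
    errors[key] += int(r["predicted"] != r["actual"])

# Error rate per intersectional subgroup, rather than per axis in isolation.
for key in sorted(counts):
    print(key, "error rate:", round(errors[key] / counts[key], 2))
```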
3.3. Feminist standards in AI standardisation process
Standardisation in the context of the AI Act (Articles 40 and 41 AI Act) refers to the creation of technical standards that AI systems must adhere to in order to meet regulatory requirements. These standards are typically developed by recognised European standardisation bodies, such as the European Committee for Standardisation (CEN),Footnote 6 the European Committee for Electrotechnical Standardisation (CENELEC)Footnote 7 and ETSI,Footnote 8 often with input from the European Commission. They provide detailed guidelines for AI system design, risk management and performance metrics. Although these standards are not legally binding, compliance with them facilitates the conformity assessment process and aids companies in demonstrating their adherence to regulatory obligations, thus simplifying market access and ensuring alignment with the AI Act’s provisions.
The AI standardisation process involves the creation of guidelines, technical specifications and best practices to ensure that AI systems are safe, reliable, transparent and ethically designed. Standardisation plays a crucial role in fostering innovation while minimising risks related to bias and discrimination, as described in Article 40 and Recital 121 AI Act. Recital 121 AI Act highlights the role of standardisation in ensuring that high-risk AI systems comply with the requirements set out by the regulation. It emphasises the importance of creating harmonised standardsFootnote 9 across the European Union to promote consistent application of the AI Act’s provisions. Various bodies, such as the European Standards Organisations, National Standards Bodies, European Stakeholder Organisations, Harmonised Standards Consultants and the European Commission, are working on creating frameworks to regulate AI (Klumbyte, Reference Klumbyte2023). Among the areas addressed by these standards are human-centred AI, security and privacy, transparency, data governance, explainability and accountability.
From a feminist perspective, standardisation processes in AI can be viewed as a tool to ensure that AI systems do not inadvertently reinforce stereotypes or discriminatory practices, particularly against women and marginalised groups (Lütz, Reference Lütz2023). Feminists would argue that the development of technical standards by bodies such as CEN, CENELEC and ETSI should actively address issues like gender bias in AI algorithms, data collection and risk management practices. Furthermore, conformity assessments, based on the harmonised standards, should ensure that AI systems undergo rigorous scrutiny for fairness, inclusivity and accountability, particularly regarding the disproportionate impact that AI can have on women and other marginalised communities. By incorporating feminist principles, such as intersectionality, inclusivity, equity and participation, into the standardisation process, the AI Act could contribute to creating a more equitable AI ecosystem that actively works against systemic inequalities.
Feminist principles in AI standardisation emphasise the importance of diverse representation in the development and design of AI systems, ensuring that datasets reflect a wide range of populations, including women, LGBTQIA+ people and underrepresented communities, to prevent biases that perpetuate inequalities (Balahur et al., Reference Balahur, Jenet, Hupont, Charisi, Ganesh, Griesinger and Tolan2022). Feminist and anti-colonial theories offer a valuable framework for examining patriarchal dynamics, focusing on the power relations involved in data practices and advocating for self-determination and collective empowerment (Huang L, Reference Huang L2022; Varon, Reference Varon2021).
Additionally, feminist approaches stress the need for bias mitigation and equity, requiring AI systems to undergo rigorous testing for gender, racial and socioeconomic biases in algorithms, data sets and outcomes (Szczekocka, Tarnec & Pieczerak, Reference Szczekocka, Tarnec and Pieczerak2022). In this context, the standardisation process should also promote accountability and transparency, ensuring clear documentation of how AI models are developed, data are collected and decisions are made, which allows for scrutiny and prevents unjust practices (Schwartz et al., Reference Schwartz, Schwartz, Vassilev, Greene, Perine, Burt and Hall2022). Furthermore, intersectional feminist theory (Cooper, Reference Cooper1988; Crenshaw, Reference Crenshaw1991) highlights the importance of considering how multiple, overlapping identities, such as race, class, sexuality and disability, shape how AI systems affect individuals, advocating for standards that avoid exacerbating intersectional inequalities, such as the differential treatment of women of colour compared to white women.
Feminist Science, Technology, and Society (STS) scholarship has been pivotal in shaping feminist AI ethics, particularly around the concept of accountability. From Donna Haraway’s metaphor of the cat’s cradle (Haraway, Reference Haraway2014) – emphasising the interconnectedness and situatedness of knowledge production – to Karen Barad’s theory of agential realism (Murris, Reference Murris2022), feminist scholars have advocated for a critical examination of socio-technical systems (Drage, Reference Drage2024). Accountability in this context refers to clearly defining the roles and responsibilities of every actor within the AI value chain and establishing mechanisms for control and oversight (Megarry, Reference Megarry2020). An accountable AI system requires clear identification of who is responsible in the event of a flawed design or malfunction.
To conclude, inclusive standardisation, which involves diverse stakeholders such as women of colour and other marginalised groups in the creation of AI guidelines, is crucial to ensuring that the perspectives of those most affected by AI technologies are taken into account. Gender-aware standards are also essential, as they should include specific measures to address gender bias and inequality, such as mandatory bias audits, the use of diverse datasets and ensuring that algorithms are explainable to impacted communities. However, there is a risk of tokenism (Yoder, Reference Yoder1991) – where the involved stakeholders may make only superficial efforts to appear inclusive, without making meaningful changes to achieve gender equity. This is connected to the fact that the AI standardisation process remains largely dominated by powerful corporations and governments, which can hinder feminist efforts to promote true inclusivity and equity in the development and regulation of AI systems.
4. A feminist reading of the AI Act – the post-release obligations of deployers
The AI Act outlines post-market oversight and corrective actions in the event that AI systems cause harm or fail to comply with regulatory standards. Deployers of a high-risk AI system must adhere to the obligations set forth in Article 26 AI Act, which require taking appropriate technical and organisational measures to ensure the system is used in accordance with the provided instructions, including monitoring the system’s operation on that basis, informing the providers when necessary and cooperating with relevant national authorities regarding any actions they take in relation to the system to implement the AI Act. Additionally, deployers are responsible for regularly monitoring and updating robustness and cybersecurity measures, ensuring that input data are relevant and representative of the system’s intended purpose. In cases where the system influences decision-making related to people, such as hiring or education, they must inform individuals that they are subject to the system, explain its purpose and decision-making process and inform them of their right to an explanation (Hadfield & Clark, Reference Hadfield and Clark2023). If the system’s use could harm health, safety or rights, they must immediately notify the provider, distributor and relevant authorities, suspend the system’s use and interrupt it if necessary.
A feminist interpretation of these obligations would advocate for a monitoring framework that is particularly attuned to the ways in which AI systems may disproportionately impact women, LGBTQIA+ people and marginalised communities, particularly in contexts such as hiring algorithms, healthcare diagnostics or law enforcement. The SyRI (System Risk Indication) case in the Netherlands offers a significant lens through which to examine the intersectionality of technology, law and social justice (van Bekkum & Borgesius, Reference van Bekkum and Borgesius2021). SyRI was an AI-driven system used by the Dutch government to predict and identify individuals at risk of committing fraud in social welfare programmes. The case, however, became controversial and raised concerns about racial profiling, social exclusion and discrimination, particularly affecting marginalised communities. A feminist and intersectional analysis of this case reveals the ways in which technology, specifically predictive algorithms, can perpetuate inequalities based on multiple axes of identity, such as race, class and gender (Bekker, Reference Bekker, Spijkers, Werner and Wessel2021). The system’s reliance on historical data meant that individuals from racially marginalised communities, particularly those of Moroccan or Turkish descent, were disproportionately affected by its predictions (van Bekkum & Borgesius, Reference van Bekkum and Borgesius2021). This raised concerns about racial profiling, where the system reinforced stereotypes and disproportionately targeted people based on their racial or ethnic background. In an intersectional sense, this bias was compounded by socioeconomic factors, as people in lower income brackets, who are often racial minorities, were more likely to be flagged by SyRI for welfare fraud investigations (Bekker, Reference Bekker, Spijkers, Werner and Wessel2021).
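The SyRI experience suggests one concrete form that gender- and race-aware monitoring under Article 26 could take. The minimal Python sketch below compares how often a risk-scoring system flags people from different communities and escalates when the disparity becomes large; the group labels, records and alert threshold are hypothetical, and the escalation rule is an illustrative policy choice rather than a provision of the AI Act.

```python
# A minimal sketch of the kind of post-deployment check a deployer could run
# under its Article 26 monitoring duty: comparing flag rates across groups.
# All names, figures and the disparity threshold are hypothetical.
from collections import defaultdict

flags = [
    {"group": "majority",    "flagged": False},
    {"group": "majority",    "flagged": True},
    {"group": "majority",    "flagged": False},
    {"group": "minoritised", "flagged": True},
    {"group": "minoritised", "flagged": True},
    {"group": "minoritised", "flagged": False},
]

totals, flagged = defaultdict(int), defaultdict(int)
for f in flags:
    totals[f["group"]] += 1
    flagged[f["group"]] += f["flagged"]

rates = {g: flagged[g] / totals[g] for g in totals}
print("flag rates:", rates)

# If one community is flagged far more often, escalate: review the system,
# inform the provider and, where relevant, the competent authority.
low, high = min(rates.values()), max(rates.values())
if low == 0 or high / low > 2:   # illustrative disparity threshold
    print("disparity alert: review the system and notify provider/authorities")
```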
To this extent, a feminist reading emphasises the necessity of conducting impact assessments that examine how an AI system may affect different genders and whether it addresses the needs of marginalised groups. Under Article 26 of the AI Act, deployers are required to perform risk assessments both prior to and following deployment to ensure that AI systems do not cause harm. A feminist interpretation of this obligation calls for these risk assessments to be explicitly gender-aware. Deployers should assess whether their AI systems contribute to or exacerbate gender inequalities, ensuring that these systems are designed to promote gender justice. This includes addressing issues such as gender-based violence in generative AI technology (Karagianni, Reference Karagianni2025), ensuring fairness in recruitment processes and preventing discriminatory practices in healthcare.
4.1. A gender-impact assessment under the fundamental rights impact assessment
Among the solutions proposed in the AI Act to safeguard fundamental rights at risk from high-risk AI systems are a Risk Management System (RMS), outlined in Article 9, and a Fundamental Rights Impact Assessment (FRIA), as stipulated in Article 27(1). While the introduction of the RMS and FRIA for high-risk AI systems represents a novel regulatory approach, the concepts of risk management and impact assessment are well-established in technology regulation. These mechanisms have historically emerged in response to the uncertainties associated with technological advancement across multiple fields (Demetzou, Reference Demetzou and Kosta2019a).
In essence, risk management is concerned with identifying and addressing risks, understood as potential negative events (Macenaite, Reference Macenaite2017). By contrast, impact assessments evaluate both the positive and negative consequences of an initiative on societal concerns, such as fundamental rights, though they do not necessarily prescribe measures for addressing identified risks (Demetzou, Reference Demetzou2019b; Macenaite, Reference Macenaite2017). However, questions persist regarding their operationalisation in the AI context. For instance, what constitutes a risk to a fundamental right – such as gender equality and non-discrimination – remains open to multiple conceptualisations (Baldwin & Black, Reference Baldwin and Black2016; Golpayegani, Pandit & Lewis, Reference Golpayegani, Pandit and Lewis2023; Van Dijk, Gellert & Rommetveit, Reference Van Dijk, Gellert and Rommetveit2016). Similarly, measuring such risks is inherently subjective (Luhmann, Reference Luhmann1991; Slupska, Reference Slupska2019) and can be influenced by gendered assumptions (European Institute for Gender Equality (EIGE), 2017; Stachowitsch & Sachseder, Reference Stachowitsch and Sachseder2019). Yet, it is essential to consider the limitations of these approaches in achieving gender equality, particularly given the AI Act’s normative framework, which reflects a narrow understanding of gender and prioritises product safety over broader fundamental rights protections (Veale & Zuiderveen Borgesius, Reference Veale and Zuiderveen Borgesius2021).
While the AI Act introduces important safeguards, it risks falling short in addressing gender-related harms due to its narrow focus on product safety and lack of a robust gender perspective. Embedding Gender Impact Assessment (GIA) into AI governance can help bridge this gap by ensuring AI systems are designed, assessed and deployed in a way that promotes gender equality (Karagianni & Calvi, Reference Karagianni and Calvi2025). To effectively address gender-related biases in AI systems, a comprehensive approach is required. First, it is essential to detect and mitigate algorithmic biases that disproportionately disadvantage women and marginalised gender groups. This entails scrutinising training datasets, refining model architectures and implementing bias detection techniques to prevent the reinforcement of historical inequalities. Second, AI systems should be developed through a gender-equitable design process that actively integrates feminist and intersectional perspectives. This involves embedding gender-sensitive methodologies in AI development, ensuring that systems account for diverse experiences and do not perpetuate discriminatory outcomes. Third, participatory governance mechanisms should be established to enhance transparency and accountability in AI policy decisions (Taylor, Floridi & van der Sloot, Reference Taylor, Floridi and van der Sloot2017). In particular, the involvement of gender experts and affected communities is crucial to ensuring that AI governance frameworks reflect lived experiences and address structural inequalities. By incorporating these measures, AI regulation can move towards a more inclusive and equitable technological landscape. However, for GIA to be effective, it must be legally mandated, supported by comprehensive gender-disaggregated data and applied intersectionally.
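As a purely illustrative example of the first step described above, the following Python sketch audits whether a training dataset under-represents particular gender groups relative to a reference population; the counts, reference shares and the under-representation threshold are hypothetical assumptions, and a real GIA would also require intersectional, context-specific benchmarks.

```python
# A minimal sketch of one GIA step: checking whether a training dataset
# under-represents gender groups relative to a reference population.
# Dataset counts, reference shares and the threshold are hypothetical.
training_counts = {"women": 3200, "men": 6300, "non-binary": 50}
reference_share = {"women": 0.50, "men": 0.49, "non-binary": 0.01}

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = reference_share[group]
    print(f"{group}: {observed:.2%} of training data vs {expected:.2%} reference")
    if observed < 0.8 * expected:   # illustrative under-representation threshold
        print(f"  -> {group} appears under-represented; investigate data collection")
```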
4.2. Feminist critiques of GPAI systems
GPAI systems, such as large language models, foundation models and generative AI, are increasingly shaping decision-making in critical domains, from hiring and healthcare to law enforcement. However, feminist scholars critique these systems for reinforcing gendered, racialised and class-based inequalities while failing to ensure accountability for their societal harms. Article 51 AI Act enshrines obligations for deployers of GPAI – whether corporations, governments or institutions – to mitigate harm and promote justice-oriented AI governance.
GPAI systems learn from vast datasets that often reflect historical inequalities (Shrestha & Das, Reference Shrestha and Das2022). For instance, language models trained on internet data may internalise sexist stereotypes. Google Translate, a narrow, specialised AI system, renders the English word “nurse” into Greek in the feminine form, while “lawyer” or “engineer” are rendered in the masculine, thereby replicating gender stereotypes. To this extent, inclusive language in their code and programming languageFootnote 10 should take into account gender and demographic characteristics, something that is particularly needed in Natural Language Processing (Bozkurt, Reference Bozkurt2023; Foulidi, Reference Foulidi2019; Weatherall, Reference Weatherall2002).
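The kind of stereotyped default described above can be surfaced with a simple template-based probe. In the Python sketch below, translate is a placeholder for whatever machine-translation call a provider actually uses, and the Greek outputs are hard-coded to mimic the behaviour described in the text; the probe merely checks which grammatical gender the system defaults to for different occupations.

```python
# A minimal sketch of a template-based probe for gendered translation defaults.
# `translate` is a stand-in for a real machine-translation call; the Greek
# outputs below are hard-coded to mimic the stereotyped behaviour described
# in the text, not taken from any real system.
def translate(text: str, target: str = "el") -> str:
    canned = {
        "The nurse said hello.": "Η νοσοκόμα είπε γεια.",      # feminine form
        "The engineer said hello.": "Ο μηχανικός είπε γεια.",  # masculine form
        "The lawyer said hello.": "Ο δικηγόρος είπε γεια.",    # masculine form
    }
    return canned.get(text, text)

FEMININE_ARTICLE, MASCULINE_ARTICLE = "Η", "Ο"

for occupation in ("nurse", "engineer", "lawyer"):
    output = translate(f"The {occupation} said hello.")
    grammatical_gender = (
        "feminine" if output.startswith(FEMININE_ARTICLE)
        else "masculine" if output.startswith(MASCULINE_ARTICLE)
        else "unclear"
    )
    print(occupation, "->", output, f"({grammatical_gender} default)")
```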
In a complementary manner, Article 95 of the AI Act focuses on voluntary codes of conduct, urging the AI community to go beyond the regulatory requirements by adopting self-regulatory ethical standards. This article invites key stakeholders, including industry associations and organisations, to create voluntary codes of conduct that foster high standards of trustworthiness, transparency, fairness and respect for fundamental rights in AI systems (European Union, 2025). Although these codes are not legally binding, they offer a framework for establishing aspirational norms for AI developers and deployers, promoting ethical and responsible AI development practices.
Additionally, Recital 4 AI Act emphasises the necessity of a human-centric approach to AI development. It asserts that AI should be designed to enhance human capabilities and societal well-being. The recital underscores that diverse perspectives are essential for achieving this goal, as they contribute to the creation of AI systems that are more inclusive and considerate of the varied needs and experiences of individuals across different demographics. In these terms, Recital 27 AI Act highlights the importance of incorporating diverse datasets in the training of AI systems to mitigate the risk of bias. By ensuring that AI technologies are developed using data that reflect a wide range of experiences and perspectives, the recital advocates for a more equitable outcome in the applications of AI. This commitment to diversity is seen as essential for fostering fairness and avoiding discriminatory practices. Both Recitals 80 and 165 AI Act advocate for AI systems to respect human dignity and promote individual autonomy and equality. By fostering a human-centric approach that prioritises inclusivity, these recitals aim to prevent biased outcomes that can arise from inadequate consideration of diversity in AI design and implementation.
The Act mandates ongoing monitoring to detect evolving biases, a crucial step in mitigating algorithmic discrimination. However, its reliance on self-regulation assumes that deployers will voluntarily report biases, despite strong incentives to do otherwise. This is particularly problematic given that gender bias in AI disproportionately harms marginalised communities, who often lack institutional power to demand accountability. Furthermore, the Act lacks robust redress mechanisms, offering no clear pathways for individuals affected by algorithmic discrimination – such as hiring AI systems that disproportionately reject women – to seek justice. To address these gaps, feminist scholars advocate for the establishment of independent AI auditing bodies with expertise in gender and racial justice, alongside strengthened complaint mechanisms that empower affected users to challenge biased AI decisions effectively (O’Neil, Sargeant & Appel, Reference O’Neil, Sargeant and Appel2024).
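To make the idea of monitoring for evolving bias more concrete, the following Python sketch tracks a hypothetical monthly selection rate for women produced by a hiring system and raises a review trigger when the rate drifts from its baseline. The figures, the drift threshold and the reference to documentation under Article 72 are illustrative assumptions about how such a duty might be operationalised, not prescriptions of the Act.

```python
# A minimal sketch of monitoring for evolving bias, assuming a hypothetical
# log of monthly selection rates for women produced by a hiring system.
# The figures and the alert rule are illustrative only.
monthly_selection_rate_women = {
    "2025-01": 0.42,
    "2025-02": 0.41,
    "2025-03": 0.35,
    "2025-04": 0.29,
}

months = sorted(monthly_selection_rate_women)
baseline = monthly_selection_rate_women[months[0]]

for month in months[1:]:
    rate = monthly_selection_rate_women[month]
    if baseline - rate > 0.05:   # illustrative drift threshold
        print(f"{month}: selection rate {rate:.0%} has drifted from baseline "
              f"{baseline:.0%}; trigger a review and document it (cf. Article 72)")
```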
5. Concluding remarks
The AI Act, while a significant step towards regulating high-risk AI systems, falls short in addressing gendered biases and the deeper social and structural inequalities embedded in AI technologies, as was extensively explained above. Although the AI Act acknowledges the importance of non-discrimination, it lacks a comprehensive gender-sensitive approach, particularly regarding gender inclusivity for marginalised groups such as transgender, non-binary and intersex individuals. A feminist and intersectional analysis reveals that the AI Act treats gender bias primarily as a technical issue, rather than recognising it as a systemic challenge rooted in historical power imbalances, while perpetuating a binary conception of gender.
Moreover, the Act’s limited provisions on gender-related protections, such as the exclusion of gender from the list of sensitive data categories and its failure to mandate intersectional gender audits, highlight the need for a more inclusive and robust regulatory framework. To truly address the risks and harms posed by AI, particularly in high-risk domains like recruitment, healthcare and generative AI, the AI Act should integrate feminist legal principles that prioritise gender equality, accountability, intersectionality and the dismantling of structural biases. Without such critical interventions, the AI Act risks reinforcing existing inequalities rather than mitigating them, ultimately leaving marginalised communities vulnerable to the discriminatory impacts of AI technologies.
The AI Act presents a regulatory framework aimed at ensuring the safety, transparency and accountability of AI systems, with a particular focus on non-discrimination and mitigation of bias. The harmonisation process is central to this effort, providing a unified approach across Member States, yet recognition of the importance of intersectionality is missing. A feminist perspective highlights the need for a stronger intersectional approach, addressing not only gender equality but also accounting for race, class, disability and other social factors that influence individuals’ experiences within AI. Ensuring that the AI Act aligns with the principles of gender equality and human rights requires a careful, critical examination of its various processes, from conformity assessments to standardisation.
Conformity assessments, while essential in guaranteeing compliance with the AI Act, must go beyond traditional evaluations to include a gender-sensitive and intersectional lens. This ensures that AI systems do not perpetuate existing biases or exacerbate inequalities. The emphasis on the inclusion of diverse voices and perspectives, especially those of marginalised groups, is crucial in making AI systems truly fair and equitable. Standardisation processes, when infused with feminist principles, can significantly contribute to the creation of AI systems that are not only technically compliant but also socially responsible, addressing gendered harms and promoting inclusivity.
Ultimately, the AI Act has the potential to promote a fairer digital future, but this depends on the continuous and active involvement of diverse stakeholders, particularly marginalised communities, in shaping AI development and regulation. By embedding feminist principles – such as inclusivity, accountability and intersectionality – into the core of AI governance, the EU can pave the way for AI technologies that serve the interests of all, rather than reinforcing existing power structures and inequalities. While the AI Act concentrates on safeguards and oversight mechanisms for high-risk AI systems, it risks overlooking the gendered and intersectional impacts of these technologies. A feminist and intersectional approach to AI regulation is essential to ensure that AI systems do not perpetuate existing inequalities, especially in sensitive areas like hiring, healthcare and law enforcement. This requires incorporating gender-aware risk assessments, embedding GIA and promoting inclusive and participatory design processes. Additionally, strengthening accountability through independent auditing bodies and enhancing redress mechanisms will help ensure that AI systems are not only safe and effective but also fair and just for marginalised communities. By integrating these measures, the AI Act can move towards a more inclusive and equitable regulatory framework that upholds the dignity, autonomy and rights of all individuals, particularly those most vulnerable to the harms of biased AI systems.
Funding statement
This research was supported by FARI-AI for common good.
Competing interests
None.
Anastasia Karagianni is a Doctoral Researcher at the Law Science Technology and Society (LSTS) research group of the Law and Criminology Faculty of Vrije Universiteit Brussel (VUB) and a former FARI Scholar. Her academic background is mainly in International and European Human Rights Law, as she holds an LL.M. from the Department of International Studies of Aristotle University of Thessaloniki. During her Master’s studies, she was an exchange student for one year at the Faculty of International Law at KU Leuven. She has also been a Visiting Researcher at the iCourts research team of the University of Copenhagen. Her thesis is titled “Divergencies of Gender Discrimination in the EU AI Act Through Feminist Epistemologies and Epistemic Controversies.”
Besides her academic interests, Anastasia is a digital rights activist: she is a co-founder of DATAWO, a civil society organisation based in Greece advocating for gender equality in the digital era, and founder of @femme_group_BrusselsGR. She was a MozFest Ambassador in 2023 and a Mozilla Awardee for the project “A Feminist Dictionary in AI” of the Trustworthy Artificial Intelligence working group.