I. Introduction
There are over 250 artificial intelligence (AI) tools for the human resources (HR) sector on the market and there is a growing push to adopt these tools in the employment sector.Footnote 1 The increasing adoption of AI in HR has spurred scholarship around Smart Human Resources 4.0 (SHR 4.0), which concerns the use of AI and big data analytics for the management of employees,Footnote 2 and algorithmic human resource management (AHRM), which concerns ‘the use of software algorithms that operate based on digital data to augment HR-related decisions and/or to automate HRM activities’.Footnote 3
A common application of AI in human resources is in the recruitment and selection phases, namely in the steps of sourcing, screening of resumes and candidate matching.Footnote 4 We are currently witnessing a proliferation of targeted job advertisements written by neural networks, as well as of search, ranking and recommendation algorithms that favour employee–employer matches by ranking candidates for openings. These algorithms extract information from resumes to score applicants against job descriptions and use machine learning software to screen candidates’ responses and expressions in video interviews, supporting HR in hiring decisions.Footnote 5
Among the main benefits of AI adoption in HR are the possibility to automate routine tasks, save costs, increase recruitment efficiency and enhance productivity in HR development processes.Footnote 6 Nevertheless, algorithm-based decision-making can also produce discriminatory results and exacerbate existing disadvantages for marginalized groups in society. An example is the well-known case of the Amazon AI recruiting tool, which had been trained on biased historical data dominated by white male chief executive officers (CEOs) and software engineers, with the result that the system recommended males over females for such top positions.Footnote 7
This raises several ethical questions regarding the design and development of systems aimed at eliminating or mitigating algorithmic bias,Footnote 8 but also related to the intensification of informational and power asymmetries in employment relationships, and data rights and data protection in the context of algorithmic management tools in the workplace.Footnote 9 The European Union’s 2024 Artificial Intelligence Act (AI Act) classifies AI systems used for the recruitment and selection of candidates as ‘high-risk’,Footnote 10 listing among their potential threats the perpetuation of historical patterns of discrimination and the undermining of workers’ fundamental rights to data protection and privacy.Footnote 11
HRM has been described as ‘the management of work and people towards desired ends’.Footnote 12 However, many current managerial practices and businesses’ HR policies do not adequately address the question of workers and their fundamental rights and freedoms when setting these desired ends.Footnote 13 The use of AI in HRM presents new challenges for decisions that have traditionally been made by human managers, challenges that organizations should be aware of, especially concerning data rights and protections that secure fair chances in the right to work.
In this paper, we address the question of workers and their fundamental rights as AI becomes an essential intermediary in recruitment decisions by organizations. We analyse how and to what extent algorithmic biases in HR can manifest and investigate how such biases affect a variety of workers’ fundamental rights. Specifically, we focus on (1) the right to equality, equity and non-discrimination; (2) the right to privacy and, finally, (3) the right to work. The use of algorithmic decision-making in HRM strategies and functions can pose a significant and pervasive risk to human rights at work. Based on our analysis, we provide recommendations around ethical safeguards that can further advance forms of responsible AI governance in HR-related decisions and activities to help promote workers’ fundamental rights.
II. Right to Equality, Equity and Non-Discrimination
The right to equality entails different but interconnected concepts. Equality entails ensuring uniform access to resources and opportunities regardless of individual differences. Equity consists of acknowledging that different people have different needs and addressing these differences to achieve fair outcomes.Footnote 14 Non-discrimination involves actively preventing and addressing unequal treatment based on race, gender, ethnicity, religion or other protected characteristics and special categories of data.Footnote 15 The Office of the United Nations High Commissioner for Human Rights (OHCHR) underlined that a human-rights-based approach to AI requires the application of the principles of equality and non-discrimination since AI-based technologies can exacerbate existing inequalities.Footnote 16
However, it may be hard to determine where discriminatory results arise, e.g., during the training phase of systems, which focuses on learning from existing data, or in the final alignment phase, when developers adjust systems to generate desired outputs.Footnote 17 AI can lead to discrimination in different ways. Systems can be trained on biased data, reflecting and amplifying at scale biases long entrenched in hiring practices, as in the case of the Amazon AI recruiting tool.Footnote 18 Discrimination can also occur when organizations define and select the features or characteristics (the types of data) that AI systems use to predict which job applicants will be good employees.Footnote 19
In this case, even if organizations omit or abstain from using protected characteristics in datasets and algorithms, this does not prevent discrimination, since other variables can stand in for the protected characteristics through their correlation with certain protected group memberships.Footnote 20 These correlations might disproportionately harm protected group members, posing the risk of a peculiar form of algorithmic discrimination: proxy discrimination.Footnote 21
A. Proxy Discrimination
Proxy discrimination arises when membership in a protected group is indirectly encoded in other data, that is, it occurs when a seemingly neutral piece of data is correlated with and acts as a proxy or stand-in for a protected characteristic.Footnote 22 For example, a company might use a system that predicts employment tenures based on the distance between the candidate’s home and the office. In this case, due to patterns of residential segregation, the zip code can be used by the system as a proxy for race, even if race as a variable is removed from its predictive model, and this can lead to biased outputs.Footnote 23
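To make this mechanism concrete, the following minimal sketch (in Python, using hypothetical column names such as ‘zip_code’ and ‘race’) illustrates one way an auditor might test whether a nominally neutral feature can predict a protected attribute better than chance, which is the statistical signature of a potential proxy.

```python
# Illustrative proxy check: can a nominally neutral feature predict a
# protected attribute? Column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def proxy_risk_gap(df: pd.DataFrame, neutral_feature: str, protected_attr: str) -> float:
    """Cross-validated accuracy of predicting the protected attribute from the
    neutral feature alone, minus the majority-class baseline. A clearly
    positive gap suggests the feature may act as a proxy and deserves review."""
    X = pd.get_dummies(df[[neutral_feature]].astype(str))
    y = df[protected_attr].astype(str)
    accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    baseline = y.value_counts(normalize=True).max()
    return accuracy - baseline


# Hypothetical usage on an applicant table:
# gap = proxy_risk_gap(applicants, neutral_feature="zip_code", protected_attr="race")
# if gap > 0.05:
#     flag_feature_for_review("zip_code")  # hypothetical follow-up step
```

A clearly positive gap does not by itself establish discrimination; as argued below, whether a feature should be treated as a proxy also depends on the social context in which it is used.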
An organization that intentionally treats individuals differently based on race would be engaging in direct discrimination, i.e., creating a situation in which one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of a protected characteristic.Footnote 24 However, proxy-based discriminatory practices can manifest in cases where there is no direct access to or consideration of protected data, and can also happen by accident or inadvertently, without organizations being aware or realizing that certain correlations between proxy variables and protected characteristics can lead to discrimination.Footnote 25
AI systems are programmed to find correlations between input data and target variables, and may even attribute meaning to certain data patterns where there is none, affecting the prediction of the algorithm in a discriminatory way.Footnote 26 Proxies can be built out of bits of information that the applicant never directly provided or shared, but that are inferred from different sources and used to build a comprehensive profile. This can lead to classification biases, referring to a situation where ‘employers rely on classification schemes, such as data algorithms, to sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics’.Footnote 27 These biases stem from, as well as reproduce, predictive classification models trained on unrepresentative datasets.Footnote 28 For example, gender as a protected characteristic can be inferred from the gendered information that individuals list in their resumes, as in the case of the Amazon AI recruiting tool, which downgraded resumes that included ‘all-women’s colleges’.Footnote 29
Even without containing protected characteristics, resumes may still reveal sensitive information about job applicants, albeit in subtle ways. For example, political orientation can be inferred from links to social media and the use of specific hashtags.Footnote 30 Names and spoken languages may correlate with certain migration backgrounds, and work gaps, i.e., periods without formal employment in the resume, may put women at a systemic disadvantage due to gender asymmetries in caregiving responsibilities.Footnote 31 Finally, stylistic preferences, such as the use of agentic language (a style typically associated with assertiveness and self-promotion that has often been favoured in male-dominated professional environments), can be scanned by HR and algorithms and used as a cue for a candidate’s fit with the company culture.Footnote 32
B. Debiasing
Debiasing has emerged as one of the potential solutions to design and implement systems that are consistent with workers’ right to equality and to avoid cases of algorithmic-based discrimination. This method aims at making the outputs of a system ‘fair’, or ‘non-discriminative’, either by transforming the training dataset, changing the method to optimize and train the algorithm or post-processing the outputs of the model.Footnote 33 For example, HR companies must avoid using training data where certain protected groups or characteristics are underrepresented.Footnote 34 To concretely address this challenge, organizations can conduct frequent data quality checks, remove data points that reflect past biases or may be predictive of protected characteristics and conduct frequent internal audits and impact assessments to identify, monitor and prevent discriminatory risks.
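As a minimal sketch of the first family of techniques mentioned above (transforming the training dataset), the example below implements a simple reweighing step that assigns sample weights so that a protected attribute and the hiring label become statistically independent in the training data. Column names are hypothetical, and this is only one possible pre-processing approach, not a sufficient safeguard on its own.

```python
# Sketch of a pre-processing debiasing step ("reweighing"): weight each row so
# that the protected attribute and the outcome label are independent after
# weighting. Column names ("gender", "hired") are hypothetical.
import pandas as pd


def reweigh(df: pd.DataFrame, protected: str, label: str) -> pd.Series:
    """Per-row weight = P(group) * P(label) / P(group, label)."""
    p_group = df[protected].value_counts(normalize=True)
    p_label = df[label].value_counts(normalize=True)
    p_joint = df.groupby([protected, label]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[protected]] * p_label[row[label]]
        / p_joint[(row[protected], row[label])],
        axis=1,
    )


# Hypothetical usage:
# weights = reweigh(train_df, protected="gender", label="hired")
# model.fit(X_train, y_train, sample_weight=weights)
```

The design choice here is deliberate: the intervention happens before training, so any downstream model inherits the rebalanced data, but it addresses only one statistical notion of fairness among several.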
However, we must recognize that there is no such thing as a completely ‘unbiased’ AI system.Footnote 35 Although AI systems aim for objectivity and clarity in their procedures, they rely on input data, which may consist of an incomplete or unbalanced dataset. Also, an excessive and technocentric emphasis on debiasing techniques as a necessary and sufficient protection against the discriminatory risks of HR processes can obfuscate the differentiated ways in which inequality manifests in these social contexts.Footnote 36 Addressing cases of proxy discrimination, e.g., defining when something counts as a proxy for a certain protected social group, requires a broader qualitative perspective to tackle the complexity, context-sensitive nature and socio-cultural relevance of data. For example, protected characteristics such as gender cannot be understood as static and fixed data points, as they are inherently multidimensional and intersectional.
In recruitment, the debiasing method must be integrated with considerations about the context of the application of AHRM tools, which involves different stakeholders (e.g., HR, workers and workers’ representatives) and their respective values, organizational and institutional changes and the possibility of recourse and explanations around decisions taken by AHRM tools.Footnote 37 Dedicated proxy reduction approaches that remove or substitute gender identifiers or the proxy features most predictive of protected characteristics cannot be effective without a clear understanding of the underlying sociotechnical system. For example, some areas, like the programming sector, may need positive gender discrimination techniques, while others, like nursing and education, may need the opposite.Footnote 38
In summary, adopting a broader perspective on debiasing that goes beyond its technocentric understanding allows us to reconsider how individual and group identities are understood as and through data,Footnote 39 a theme that has fundamental relevance also in relation to privacy, as we will demonstrate in the next section.
III. Right to Privacy
As in the case of equality, the concept of privacy is complex, encompassing interconnected yet distinct ideas of freedom, autonomy and dignity.Footnote 40 In recruitment processes, privacy entails the protection of job applicants’ personal data and their informational identity from unauthorized access, misuse or discrimination.Footnote 41 The right to privacy is often in tension with the company’s right to information, which requires that employers assess the suitability of the job applicants.
Previous work experience, educational qualifications, skills and references can be considered relevant to screening by HR and algorithms, but so can personal data, i.e., any information through which an individual can be identified, directly or indirectly, by reference to an identifier such as a name, an identification number or location data, or to factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of the person.Footnote 42 These factors may be scanned and used to infer the suitability of a given applicant for a job. Such intricate data profiling might threaten the individual’s right to privacy and self-determination, despite being valorized for its efficiency and objectivity.
A. Individual and Shared Breaches
Individual breaches involve the collection, storage and use of data without transparency. The European Union (EU) introduced rules specific to data protection in the General Data Protection Regulation (GDPR). While the GDPR provides rules for processing personal data, concerns remain regarding whether explicit consent is freely given in these cases.Footnote 43 Job applicants, who actively seek a position, are at a clear disadvantage, as companies hold the power to grant or refuse employment. This imbalance can lead to consent being granted under pressure, and applicants might feel compelled to agree to data processing for fear of jeopardizing their chances of getting hired. A lack of transparency about how their data will be processed can create further pressure and anxiety for applicants as well. Therefore, the consent provided for data processing can be informed by fear and risk of refusal due to the informational and power asymmetry between workers and employers.Footnote 44 Another strategy that companies can use to circumvent regulation on the processing of personal data, or to compensate for cases in which such data is inaccessible, is the creation of data profiles from ‘public’ data drawn from social media profiles and other online traces, such as purchasing history or location data, without candidates consenting to this.Footnote 45
For instance, Facebook ‘likes’ or social media activities can be screened by AI tools to infer sexual orientation and race.Footnote 46 Data collected from cookies saved when searching, e.g., for a wheelchair, may result in the person being disqualified, and AI systems may apply such coding even to positions like that of a remote programmer, for which being a wheelchair user is unlikely to affect performance.Footnote 47 Finally, there can be cases of ‘algorithmic black-balling’, where data collected on job candidates’ history of rejections from past job applications can be used by AI systems to sort these candidates into the category of the permanently unemployable and, as such, qualify them as unworthy of future jobs.Footnote 48
B. A Bigger Picture of Privacy
Data privacy breaches are not just about static and individual identifiers, whether listed or inferred, but extend to cases of data aggregates that are progressively generated, circulated and evaluated.Footnote 49 As mentioned above, ad hoc correlations, constructed out of factors like having a specific postcode or being a member of a specific market segment or profiling group that searches for wheelchairs, erase the individual, who is subsumed into a ‘shared identity token’. This identity token can be volatile, i.e., assembled and constrained by a third-party interest for a time or purpose, and, more importantly, it imposes a group meaning that is not reducible to, and may not represent or only imperfectly reflects, the self-understanding and identities of the individual members contained within the group. The scale, complexity and shared ownership of these identity groups might undermine efforts to protect privacy, conceived as the right to control data about oneself, and might interfere with data subjects’ capacity to shape and control identity.Footnote 50
Moreover, this can shed light on the fact that protected characteristics or proxies for protected characteristics do not exhaust the category of data that can be used to discriminate. Systems can find seemingly irrelevant correlations in data and can generate new groups of people that remain outside both the scope of data protection and anti-discrimination: for instance, a Dutch insurance company was found to charge extra if customers live in apartments whose civic number contains a letter, probably because historical data shows a positive correlation between living at such an address and the chance of being in a car accident.Footnote 51
Predictions by algorithms based on correlations and inferences on shared identity tokens raise important questions that are not adequately addressed by adopting an individual perspective on potential human rights infringements. Even if companies do not use special categories of data,Footnote 52 there may be other grounds that may lead to privacy and equality infringements. Assessing those grounds may require looking at data protection in terms of group privacy that is not reducible to the privacy of individuals forming the group.Footnote 53 More importantly, it may require discussing the legitimacy of systems predictions and workers’ demands for justification and protection with reference to the cultural, social, professional and institutional context in which algorithmic HR decision-making is implemented.
IV. Right to Work
AHRM has a deep impact on data protection and discrimination, by jeopardizing workers’ ability to make sense of the data collected and used to recruit them, and by accentuating workplace inequalities when models rely on discriminatory proxies or reflect biased data in their decisions. These risks have received significant attention in scholarly debates and also in policy and regulatory discussions, which have been struggling to understand the impact of AI on the quality of work in the coming years.Footnote 54 In August 2024, the EU’s new AI Act came into force; it is the first comprehensive regulation concerning AI and is expected to become a model for AI governance worldwide. The AI Act follows a risk-based approach and aims to ensure that AI systems placed on or used in the EU market are safe and respect fundamental rights. The AI Act covers AI across a broad range of sectors and, as mentioned previously, classifies AI systems used for the recruitment or selection of candidates as ‘high-risk’,Footnote 55 since they may perpetuate historical patterns of discrimination and also undermine workers’ fundamental rights to data protection and privacy, triggering additional protection requirements.Footnote 56
A. Beyond the Rights to Equality and Privacy in AHRM
At the end of 2020, trade unions affiliated with the Italian General Confederation of Labour (CGIL) took the food-delivery company Deliveroo to court in Bologna, arguing that Frank, the algorithm the company used to assess parameters for booking work sessions, had obstructed and violated fundamental rights of workers, such as the right to strike. According to the Court of Bologna, Frank, based on two parameters (the workers’ reliability and participation), treated equally those who do not participate in the booked session for futile reasons and those who do not participate because they are striking (or because they are sick, have a disability or assist a vulnerable person as caregivers).Footnote 57
The AI-management system in the Deliveroo case left little room for taking workers’ needs and interests into consideration, or for exploring the reasons why workers refrained from work, and it ended up neglecting context-sensitive and individual circumstances in its algorithmic decision-making. This case shows how management by algorithms can make it difficult for workers to create social bonds and collective power, and to form and join a trade union for the protection of their interests, as enshrined in Article 11 of the European Convention on Human Rights (ECHR) and Article 22 of the International Covenant on Civil and Political Rights (ICCPR) on the right to freedom of association.
Another case in point is facial scans powered by AI and emotion recognition or sentiment analysis technologies. These AI tools can extract information from visual or audio data during job interviews and can infer characteristics, personality traits and potential hireability of job applicants.Footnote 58 A team of reporters from Bayerischer Rundfunk, a German public broadcasting company, performed several experiments with such a system, where they observed the AI responding to different outfits (e.g., wearing glasses, a scarf) and to different settings (e.g., bookshelf as a background) with considerably different results. The reporters revealed that this might potentially perpetuate stereotypes and cost candidates access to the job.Footnote 59
Attempts to ‘read’ people’s reactions in this way, and the data collection and extraction of information from verbal and non-verbal behaviour to create a personal profile, might lead to discrimination against persons of certain racial or ethnic origins or with certain religious beliefs—and might deeply transform the way individuals experience and enjoy the freedom of expression as a human right, i.e., being allowed to speak and freely express one’s thoughts and opinions, as underlined by Article 10 of the ECHR and confirmed by Article 11 of the Charter of Fundamental Rights of the European Union.Footnote 60
Individuals in these examples are constrained by the knowledge that algorithms and data determine their access to work. The concern here goes beyond the rights to equality and privacy, to the right to express themselves in ways that are integral to their identity.Footnote 61
B. For a Responsible Governance of AHRM Based on Ethical Safeguards
One of the first steps and challenges in the governance of AHRM is ensuring compliance with legal regulations. As already mentioned, the AI Act considers AI systems used in recruitment as high-risk. Accordingly, it imposes a self-assessment by providers of such systems, but many commentators contend that internal assessment procedures are insufficient and might easily become a rubber stamp or an empty formality.Footnote 62 Article 35 of the GDPR imposes an obligation to conduct a data protection impact assessment (DPIA) for high-risk processing, such as that carried out in the recruitment pipeline, in order to identify the risks to the rights and freedoms of data subjects and potential ways to address and minimize them.Footnote 63
However, it is important to note that these legal frameworks and mechanisms should not be interpreted as a mere ‘tick-box exercise’Footnote 64 by employers. Self-assessments, DPIAs, debiasing techniques and bias audits, as already analysed in this paper, are important ex ante regulatory measures, but they remain developing techniques with significant limitations. An excessive focus on their technical implementation alone may come at the cost of insufficiently accounting for the social complexities, emerging values, justice requirements and needs of the diverse stakeholders involved in and impacted by their use.Footnote 65
C. Promise and Perils of Synthetic Data
The GDPR prohibits the processing of certain especially sensitive types of data, called ‘special categories’ of data, i.e., personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership or sexual orientation, as well as data on health, genetic data and biometric data, since processing this data is considered particularly sensitive: it might interfere with data subjects’ privacy, be used for unexpected or harmful purposes, and lead to discrimination or exclusion.Footnote 66 However, there are exceptions to this ban.Footnote 67 The recently adopted AI Act introduces an exception to enable AI debiasing and auditing, as these strategies, to be effective, require access to sensitive data to test and audit systems.Footnote 68
The exception in the AI Act only applies when organizations cannot use other data, such as anonymous or synthetic data. In particular, the AI Act considers the use of synthetic data to enable AI debiasing a technically appropriate safeguard.Footnote 69 Organizations can employ synthetic data when real data are unavailable, of poor quality or too risky to use for privacy, legal or other ethical reasons.Footnote 70 But what are synthetic data? Synthetic data are data that have been algorithmically generated by a model, rather than extracted or collected from real-world contexts, in order to train other models.Footnote 71 The purpose of synthetic data is to reproduce some structural and statistical properties of real data while overcoming the challenges and risks of real-world data when the latter are sensitive or biased.Footnote 72
Synthetic data generation can be a potential solution to access high-quality data for training AI models, and can address (a) privacy and (b) equality challenges of datasets and models. Indeed, synthetically generated data (a) can be used to train AI models when real data are sensitive and their use is restricted and protected by privacy regulations (e.g., GDPR), and (b) can address imbalanced training datasets, mitigating the risk of amplification of biases (e.g., historical biases on gender or race) and improving the diversity of training data.Footnote 73 In sum, synthetic data generation promises to support data sharing and strategies for data debiasing in contexts where data protection is required.
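For illustration only, the sketch below generates a rebalanced synthetic table by resampling each column independently within each group; production-grade generators (e.g., copula- or GAN-based models) also capture correlations across columns, which this naive version deliberately ignores. All column names are hypothetical.

```python
# Deliberately naive synthetic data sketch: resample every column
# independently within each group and return a balanced table, so that
# under-represented groups are no longer outnumbered in the training data.
import pandas as pd


def naive_synthetic(df: pd.DataFrame, group_col: str, n_per_group: int,
                    seed: int = 0) -> pd.DataFrame:
    """Return a synthetic table with n_per_group rows per group, where each
    column is resampled separately so that real rows are not copied whole."""
    parts = []
    for _, group_df in df.groupby(group_col):
        sampled = {
            col: group_df[col]
            .sample(n=n_per_group, replace=True, random_state=seed + i)
            .reset_index(drop=True)
            for i, col in enumerate(group_df.columns)
        }
        parts.append(pd.DataFrame(sampled))
    return pd.concat(parts, ignore_index=True)


# Hypothetical usage on a resume table with a "gender" column:
# synthetic_resumes = naive_synthetic(resume_df, group_col="gender", n_per_group=5000)
```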
However, synthetic data generation and use is currently a novel and understudied topic, and while promising for HRM, it still poses data protection and debiasing challenges. Studies have demonstrated that synthetic data often build on real datasets and thereby still pose information disclosure risks that could lead to ‘re-identification’, i.e., the likelihood that, notwithstanding anonymization techniques, it may still be possible to identify individuals and reveal possibly sensitive information, transforming non-personal data into personal information. This creates legal uncertainty and practical difficulties for the generation and processing of synthetic data.Footnote 74
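One simple check that practitioners may run, sketched below under the assumption of standardized numeric feature matrices, is to measure how many synthetic records are near-copies of real records, since such memorized records undermine the privacy rationale for using synthetic data in the first place.

```python
# Illustrative memorization check: flag synthetic rows that are
# (near-)identical to a real row in feature space.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def near_copy_fraction(real: np.ndarray, synthetic: np.ndarray,
                       threshold: float = 1e-6) -> float:
    """Fraction of synthetic rows whose nearest real row lies within
    `threshold` Euclidean distance; non-zero values signal copied records."""
    nn = NearestNeighbors(n_neighbors=1).fit(real)
    distances, _ = nn.kneighbors(synthetic)
    return float((distances[:, 0] <= threshold).mean())


# Hypothetical usage on standardized numeric feature matrices:
# if near_copy_fraction(real_X, synth_X) > 0.0:
#     investigate_generator()  # hypothetical follow-up step
```

A check of this kind captures only exact or near-exact copies; more subtle disclosure risks, such as attribute inference from aggregate patterns, require dedicated privacy audits.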
In recruitment, synthetic data have been studied for bias mitigation purposes, namely to compensate for properties that are insufficiently represented in real training datasets, a point that is especially relevant for avoiding discriminatory outcomes.Footnote 75 However, if the mitigation of bias during the synthetic data generation process is not deployed responsibly, the use of synthetic data can also amplify and propagate existing bias. For example, the potential of synthetic resume datasets to train models depends strictly on the well-curated collection and processing of high-quality resume data that does not under-represent groups that are historically discriminated against, and on the introduction of as much variability as possible in the generation of synthetic resumes in order to achieve diversity while preserving privacy.Footnote 76 Synthetic datasets may also overgeneralize or overfit, i.e., lose details and information or introduce errors by fitting models too tightly to the available data, worsening the very issue of bias they are meant to address.Footnote 77
It is important to note that the adoption of a technical safeguard like synthetic data, notwithstanding being a potential privacy and equality-preserving strategy, does not structurally remove the need to use sensitive data, as these data are essential for the development and validation of bias detection models, and must still be collected to create synthetic datasets.Footnote 78
D. Workers’ Voice
The challenge to ensure that the use of AHRM is consistent with workers’ human rights might entail other more suitable regulatory instruments, like methods for collective bargaining.Footnote 79 As in the case of Deliveroo in Bologna, collective bargaining and trade union actions can offer an effective way to start negotiations about workers’ interests and a whole range of other conditions in algorithmic management, like transparency requirements from companies about workers’ data use, storage and management.Footnote 80 This can also help in the development of fair data processing in the context of AHRM, and of appropriate safeguards that ensure that workers can receive meaningful information on how employers use their data, and about the logic involved in the algorithmic decision-making that uses and processes that data.Footnote 81
One relevant and ground-breaking advancement in this direction was put forward in April 2024, when the EU’s institutions reached an agreement to adopt the European Directive on Platform Work, which aims to regulate, for the first time in the EU, the use of algorithms in the workplace.Footnote 82 Particularly relevant to our discourse is the third chapter of the draft Directive, which concerns algorithmic management. It affirms that workers will receive information from platforms about the algorithmic systems used to hire them (e.g., about the actions and parameters that such systems implement and use for their automated decisions) and that they will be able to request and obtain an explanation of, and request a human review of, relevant decisions made through algorithms that affect their working conditions and might infringe on their rights.Footnote 83 Moreover, among the provisions of the Directive is one concerning consultation, according to which platforms will be obliged to inform and consult representatives of platform workers, and workers themselves, on algorithmic management decisions.Footnote 84 The Directive therefore introduces the role of collective actors and collective rights for information and consultation duties in the use of AHRM.
Notwithstanding its relevant proposals, the Directive could have been strengthened by introducing an obligation of collective bargaining or veto powers for workers’ representatives over employers’ introduction of algorithmic management tools.Footnote 85 The AI Act, while providing specific requirements of transparency and self-assessment for certain AI systems, does not provide safeguards for respecting the working conditions of people directly affected by the use of AI.Footnote 86 Likewise, the GDPR, while it grants rights to workers regarding the processing of their personal data, does not encompass important collective aspects inherent in labour law, including the role of workers’ representatives and the information and consultation of workers.Footnote 87
Shortcomings of the current proposals stem from their ‘techno-deterministic’ and technocentric approach, which ignores the role of social partners in regulating the introduction and operation of AHRM and treats these tools as a given rather than as subject to open-ended negotiation and social dialogue.Footnote 88 Instead, measures like collective bargaining and transparency requirements can be considered ways to protect and strengthen workers’ voice: their ability to have a meaningful influence on how work and livelihoods are arranged and to raise concerns within an organization, which can improve management practices and policies.Footnote 89
However, ex post trade union actions like in the case of Deliveroo in Bologna are not sufficient to grant and secure workers’ fundamental rights. To foster social dialogue, it is fundamental to implement ex ante strategies that can ensure that systems align with workers’ interests, values and needs.Footnote 90 Beyond the recourse to technical solutions and the adherence to legal regulations, the challenge of a novel and algorithmic form of HRM requires organizational and behavioural solutions, which can complement respect for data protection and equality with methods for stakeholder engagement.Footnote 91
Also, the integration of AHRM into organizations should entail the identification of precise and detailed decision-making processes for taking workers’ rights into account in HR processes: appointing departments and/or individuals, whether internal to or independent of companies (e.g., external watchdogs acting as auditors, NGOs, certifying entities, trusted third parties (TTPs)Footnote 92), to be in charge of the monitoring and evaluation of systems, and establishing dedicated procedures for reporting complaints and concerns and for receiving explanations, tailored responses and/or compensation.Footnote 93 In addition, the introduction of AHRM opens up the need for specialized training for workers, or for the creation of new organizational roles like that of the ‘HR Algorithm Auditor’, whose role would be to ‘rehumanize’ management decisions and ensure the transparency and reliability of algorithms, especially in cases where decisions about employees’ futures are taken.Footnote 94
Studies have demonstrated that AHRM can provide value and foster autonomy for workers through forms of ‘algoactivism’: novel organizational and behavioural tactics that workers deploy to monitor their workspaces and prevent employers from gaining excessive managerial control over their labour and labour processes.Footnote 95 Algoactivism tactics can be realized through non-cooperation and data obfuscation, e.g., ignoring algorithmic recommendations, finding ways to understand how AHRM tools classify and assign tasks or parameters, or bypassing algorithmic decisions by adopting behavioural patterns that prevent workers from being assigned undesirable or unsuitable tasks.Footnote 96
But beyond being a means of resistance, AHRM can also be an organizational and proactive resource that benefits workers. It can maximize workers’ capabilities to save time and resources and can allow purposeful forms of re-design and co-creation, which can gradually change and shift existing norms and rules embedded in traditional HRM practices towards organizational practices that reinforce workers’ autonomy.Footnote 97
V. Conclusions
Given the transformative impact of AHRM, organizations must navigate the balance between the challenges and advantages of this novel management practice. Hiring the right (that is, skilled, experienced and well-trained) people is vital for an organization’s success and competitive role in the market.Footnote 98 However, as discussed in this paper, the introduction of algorithms into the recruitment process opens a range of questions and concerns related to the protection of workers’ rights to equality and privacy and their right to work. We make the case that we need to go beyond analyses of the utility gained, in terms of economic productivity and business accuracy, and also beyond questions of technical and legal compliance.
An excessive technocentric and legal emphasis on the strategies and safeguards to adopt against the risks of AHRM can obfuscate the fact that equality, privacy and work infringements cannot be adequately addressed without reference to the context in which algorithmic HR decision-making takes place. This context involves different stakeholders (e.g., policymakers, organizations, HR, workers and workers’ representatives) and their respective values, organizational and institutional changes and different sociotechnical challenges and dynamics.
Yet, there is no principled and effective best practice to ensure that organizations holistically address the challenge of mitigating algorithmic discrimination and privacy violations in the workplace and develop frameworks for responsible AI and data governance to protect workers’ rights. In the case of the right to equality, we discussed how debiasing methods must be integrated with considerations about the complexity, context-sensitive nature and socio-cultural relevance of data and discrimination. In the same way, for the right to privacy, we analysed how the introduction of algorithmic decision-making in workplace management and evaluation opens up the need to adopt a more holistic conception of privacy, which might allow us to reconsider how individual and group identities are understood as and through data and context.
However, the right to equality and the right to privacy are not the only human rights at risk, as AHRM can impact the ability of workers to access other fundamental rights and make sense of their individual and collective workplace practices. This collective dimension of human rights is still an understudied topic in the literature on the impact of AI on the quality of work.Footnote 99 Beyond the recourse to technical solutions and the adherence to legal regulations, we argue that the challenge of a novel and algorithmic form of HRM requires developing data ecosystems and collective organizational and behavioural solutions that can complement data protection and equality with methods for stakeholder engagement and accountability governance structures. The responsible governance of AHRM requires appropriate ethical safeguards driven by fundamental rights, but also ways to translate these rights into ethical AI practices that are contextually and institutionally grounded, and into work environments and organizational cultures that are responsive to the values of workers as these emerge across various sectors and diverse and real-world scenarios.
Data availability statement
Data sharing does not apply to this article as no datasets were generated or analysed during the current study.
Contributions
Marianna Capasso and Payal Arora developed the concept of the article, and Marianna Capasso took the lead in writing it. Deepshikha Sharma contributed to the section on the right to privacy, and Celeste Tacconi contributed to the section on the right to equality. Payal Arora has performed multiple integrations and revisions of the draft.
Financial support
The research of Marianna Capasso and Payal Arora reported in this work is part of the FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) project that received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No 101070212. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
Competing interest
The authors declare no potential conflicts of interest with respect to the research, authorship and/or publication of this article.