
On the Right to Work in the Age of Artificial Intelligence: Ethical Safeguards in Algorithmic Human Resource Management

Published online by Cambridge University Press:  06 January 2025

Marianna Capasso*
Affiliation:
Department of Media and Culture Studies, Utrecht University, Utrecht, The Netherlands
Payal Arora
Affiliation:
Department of Media and Culture Studies, Utrecht University, Utrecht, The Netherlands
Deepshikha Sharma
Affiliation:
University of Twente, Enschede, The Netherlands
Celeste Tacconi
Affiliation:
Independent researcher
*
Corresponding author: Marianna Capasso; Email: m.capasso@uu.nl

Abstract

Algorithmic human resource management (AHRM), the automation or augmentation of human resources-related decision-making with the use of artificial intelligence (AI)-enabled algorithms, can increase recruitment efficiency but also lead to discriminatory results and systematic disadvantages for marginalized groups in society. In this paper, we address the issue of equal treatment of workers and their fundamental rights when dealing with these AI recruitment systems. We analyse how and to what extent algorithmic biases can manifest and investigate how they affect workers’ fundamental rights, specifically (1) the right to equality, equity, and non-discrimination; (2) the right to privacy; and, finally, (3) the right to work. We recommend crucial ethical safeguards to support these fundamental rights and advance forms of responsible AI governance in HR-related decisions and activities.

Type
Scholarly Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

I. Introduction

There are over 250 artificial intelligence (AI) tools for the human resources (HR) sector on the market, and there is a growing push to adopt these tools across the employment sector.Footnote 1 The increasing adoption of AI in HR has spurred scholarship around Smart Human Resources 4.0 (SHR 4.0), which concerns the use of AI and big data analytics for the management of employees,Footnote 2 and algorithmic human resource management (AHRM), which concerns ‘the use of software algorithms that operate based on digital data to augment HR-related decisions and/or to automate HRM activities’.Footnote 3

A common application of AI in human resources is in the recruitment and selection phases, namely in the steps of sourcing, screening of resumes and candidate matching.Footnote 4 We are currently witnessing a proliferation of targeted job advertisements written by neural networks, and of search, ranking and recommendation algorithms that facilitate employee–employer matches by ranking candidates for openings. These algorithms extract information from resumes to score candidates against job descriptions and use machine learning software to screen candidates’ responses and expressions in video interviews, helping HR decide whom to hire.Footnote 5

The main benefits of AI adoption in HR include the automation of routine tasks, cost savings, increased recruitment efficiency and enhanced productivity in HR development processes.Footnote 6 Nevertheless, algorithm-based decision-making can also produce discriminatory results and exacerbate existing disadvantages for marginalized groups in society. An example is the well-known case of the Amazon AI recruiting tool, which had been trained on biased historical data dominated by white male chief executive officers (CEOs) and software engineers and, as a result, recommended men over women for such top positions.Footnote 7

This raises several ethical questions, not only about the design and development of systems aimed at eliminating or mitigating algorithmic bias,Footnote 8 but also about the intensification of informational and power asymmetries in employment relationships, and about data rights and data protection in the context of algorithmic management tools in the workplace.Footnote 9 The European Union’s 2024 Artificial Intelligence Act (AI Act) classifies AI systems used for the recruitment and selection of candidates as ‘high-risk’,Footnote 10 listing among their potential threats the perpetuation of historical patterns of discrimination and the undermining of workers’ fundamental rights to data protection and privacy.Footnote 11

HRM has been described as ‘the management of work and people towards desired ends’.Footnote 12 However, many current managerial practices and businesses’ HR policies do not adequately address the question of workers and their fundamental rights and freedoms when setting these desired ends.Footnote 13 The use of AI in HRM introduces new challenges into decisions that have traditionally been made by human managers, challenges that organizations should be aware of, especially concerning the data rights and protections needed to secure fair chances in the right to work.

In this paper, we address the question of workers and their fundamental rights as AI becomes an essential intermediary in recruitment decisions by organizations. We analyse how and to what extent algorithmic biases in HR can manifest and investigate how such biases affect a variety of workers’ fundamental rights. Specifically, we focus on (1) the right to equality, equity and non-discrimination; (2) the right to privacy and, finally, (3) the right to work. The use of algorithmic decision-making in HRM strategies and functions can pose a significant and pervasive risk to human rights at work. Based on our analysis, we provide recommendations around ethical safeguards that can further advance forms of responsible AI governance in HR-related decisions and activities to help promote workers’ fundamental rights.

II. Right to Equality, Equity and Non-Discrimination

The right to equality entails different but interconnected concepts. Equality entails ensuring uniform access to resources and opportunities regardless of individual differences. Equity consists of acknowledging that different people have different needs and addressing these differences to achieve fair outcomes.Footnote 14 Non-discrimination involves actively preventing and addressing unequal treatment based on race, gender, ethnicity, religion or other protected characteristics and special categories of data.Footnote 15 The Office of the United Nations High Commissioner for Human Rights (OHCHR) underlined that a human-rights-based approach to AI requires the application of the principles of equality and non-discrimination since AI-based technologies can exacerbate existing inequalities.Footnote 16

However, it may be hard to determine where discriminatory results arise, e.g., during the training phase of systems, which focuses on learning from existing data, or in the final alignment phase, when developers adjust systems to generate desired outputs.Footnote 17 AI can lead to discrimination in different ways. Systems can be trained on biased data, reflecting and amplifying at scale biases long entrenched in hiring practices, as in the case of the Amazon AI recruiting tool.Footnote 18 However, discrimination can also occur when organizations define and select the features or characteristics (the types of data) that AI systems use to predict which job applicants will be good employees.Footnote 19

In this case, even if organizations omit or abstain from using protected characteristics in datasets and algorithms, discrimination is not prevented, since other variables may stand in for protected characteristics through their correlation with membership in protected groups.Footnote 20 These correlations might disproportionately harm protected group members, posing the risk of a peculiar form of algorithmic discrimination: proxy discrimination.Footnote 21

A. Proxy Discrimination

Proxy discrimination arises when membership in a protected group is indirectly encoded in other data, that is, when a seemingly neutral piece of data is correlated with, and acts as a proxy or stand-in for, a protected characteristic.Footnote 22 For example, a company might use a system that predicts employment tenure based on the distance between the candidate’s home and the office. In this case, due to patterns of residential segregation, the zip code can act as a proxy for race even if race as a variable is removed from the predictive model, and this can lead to biased outputs.Footnote 23
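
To illustrate this mechanism, the following minimal sketch (with entirely hypothetical data and variable names, not drawn from any system discussed here) shows one simple way a proxy can be detected: if a seemingly neutral feature, such as commute distance, predicts a protected attribute well above chance, it can act as a stand-in for that attribute even after the attribute itself is dropped from the model.

```python
# Minimal sketch (hypothetical data): checking whether a "neutral" feature,
# such as commute distance, acts as a proxy for a protected attribute by
# measuring how well it predicts that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Toy data: residential segregation makes commute distance differ by group.
protected = rng.integers(0, 2, size=n)                  # hypothetical protected attribute
commute_km = rng.normal(10 + 8 * protected, 3, size=n)  # distance depends on group

# If the feature predicts the protected attribute much better than chance
# (AUC well above 0.5), it can act as a proxy even when the attribute itself
# is removed from the predictive model.
auc = cross_val_score(LogisticRegression(),
                      commute_km.reshape(-1, 1), protected,
                      cv=5, scoring="roc_auc").mean()
print(f"AUC for predicting the protected attribute from commute distance: {auc:.2f}")
```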

An organization that intentionally treats individuals differently based on race would be engaging in direct discrimination, i.e., creating a situation in which one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of a protected characteristic.Footnote 24 However, proxy-based discriminatory practices can manifest in cases where there is no direct access to or consideration of protected data, and can also happen by accident or inadvertently, without organizations realising that certain correlations between proxy variables and protected characteristics can lead to discrimination.Footnote 25

AI systems are programmed to find correlations between input data and target variables, and may even attribute meaning to certain data patterns where there is none, affecting the prediction of the algorithm in a discriminatory way.Footnote 26 Proxies can be built out of bits of information that the applicant has not directly provided or shared, but that are inferred from different sources and used to build a comprehensive profile. This can lead to classification biases, referring to a situation where ‘employers rely on classification schemes, such as data algorithms, to sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics’.Footnote 27 These biases stem from, as well as reproduce, predictive classification models trained on unrepresentative datasets.Footnote 28 For example, gender as a protected characteristic can be inferred from the gendered information that individuals list in their resumes, as in the case of the Amazon AI recruiting tool, which downgraded resumes that included ‘all-women’s colleges’.Footnote 29

Even without containing protected characteristics, resumes may still reveal sensitive information about job applicants, albeit in subtle ways. For example, political orientation can be inferred from links to social media and the use of specific hashtags.Footnote 30 Names and spoken languages may correlate with certain migration backgrounds, and work gaps, i.e., periods without formal employment in the resume, may put women at a systemic disadvantage due to gender asymmetries in caregiving responsibilities.Footnote 31 Finally, stylistic preferences, such as the use of agentic language (a style typically associated with assertiveness and self-promotion that has often been favoured in male-dominated professional environments), can be scanned by HR and algorithms and used as a cue for a candidate’s fit with the company culture.Footnote 32

B. Debiasing

Debiasing has emerged as one of the potential solutions for designing and implementing systems that are consistent with workers’ right to equality and for avoiding cases of algorithmic discrimination. This method aims at making the outputs of a system ‘fair’, or ‘non-discriminatory’, either by transforming the training dataset, changing the method used to optimize and train the algorithm, or post-processing the outputs of the model.Footnote 33 For example, HR companies must avoid using training data in which certain protected groups or characteristics are underrepresented.Footnote 34 To concretely address this challenge, organizations can conduct frequent data quality checks, remove data points that reflect past biases or may be predictive of protected characteristics, and conduct frequent internal audits and impact assessments to identify, monitor and prevent discriminatory risks.
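
As an illustration of the first of these strategies, the sketch below shows one well-known pre-processing technique, reweighing, in which training instances are weighted so that group membership and the outcome become statistically independent in the weighted data. The column names and figures are purely hypothetical and indicate only the kind of intervention involved, not any specific vendor’s implementation.

```python
# Minimal sketch of one pre-processing debiasing technique (reweighing):
# rows are weighted so that, in the weighted training data, the protected
# group and the positive outcome become statistically independent.
# Column names ("gender", "hired") are illustrative assumptions.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return a weight per row: P(group) * P(label) / P(group, label)."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Toy screening dataset in which past hiring favoured one group.
data = pd.DataFrame({
    "gender": ["m"] * 60 + ["f"] * 40,
    "hired":  [1] * 45 + [0] * 15 + [1] * 15 + [0] * 25,
})
data["weight"] = reweighing_weights(data, "gender", "hired")

# The weights can be passed to most training routines (e.g. via `sample_weight`)
# instead of altering the records themselves.
print(data.groupby(["gender", "hired"])["weight"].first())
```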

However, we must recognize that there is no such thing as a completely ‘unbiased’ AI system.Footnote 35 Although AI systems aim for objectivity and clarity in their procedures, they rely on input data, which may consist of an incomplete or unbalanced dataset. Moreover, an excessive and technocentric emphasis on debiasing techniques as a necessary and sufficient protection against the discriminatory risks of HR processes can obfuscate the differentiated ways in which inequality manifests in these social contexts.Footnote 36 Addressing cases of proxy discrimination (e.g., defining when something counts as a proxy for a certain protected social group) requires a broader qualitative perspective to tackle the complexity, context-sensitive nature and socio-cultural relevance of data. For example, protected characteristics such as gender cannot be understood as static and fixed data points, as they are inherently multidimensional and intersectional.

In recruitment, debiasing methods must be integrated with considerations about the context in which AHRM tools are applied, which involves different stakeholders (e.g., HR, workers and workers’ representatives) and their respective values, organizational and institutional changes, and the possibility of recourse and explanations around decisions taken by AHRM tools.Footnote 37 Dedicated proxy reduction approaches that remove or substitute gender identifiers or the proxy features most predictive of protected characteristics cannot be effective without a clear understanding of the underlying sociotechnical system. For example, some areas, like the programming sector, may need positive gender discrimination techniques, while others, like nursing and education, may need the opposite.Footnote 38

In summary, adopting a broader perspective on debiasing that goes beyond its technocentric understanding allows us to reconsider how individual and group identities are understood as and through data,Footnote 39 a theme that has fundamental relevance also in relation to privacy, as we will demonstrate in the next section.

III. Right to Privacy

As in the case of equality, the concept of privacy is complex, encompassing interconnected yet distinct ideas of freedom, autonomy and dignity.Footnote 40 In recruitment processes, privacy entails the protection of job applicants’ personal data and their informational identity from unauthorized access, misuse or discrimination.Footnote 41 The right to privacy is often in tension with the company’s right to information, which stems from employers’ need to assess the suitability of job applicants.

Previous work experience, educational qualifications, skills and references can all be considered relevant to screening by HR and algorithms, but so can personal data: any information through which an individual can be identified, directly or indirectly, by reference to an identifier such as a name, an identification number or location data, or to other factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of the person.Footnote 42 These factors may be scanned and used to infer a given applicant’s suitability for a job. Such intricate data profiling might threaten the individual’s right to privacy and self-determination, despite being valorized for its efficiency and objectivity.

A. Individual and Shared Breaches

Individual breaches involve the collection, storage and use of data without transparency. The European Union (EU) introduced rules specific to data protection in the General Data Protection Regulation (GDPR). While the GDPR provides guidelines for processing personal data, concerns remain about whether explicit consent is freely given in these cases.Footnote 43 Job applicants, who actively seek a position, are at a clear disadvantage, as companies hold the power to grant or refuse employment. This imbalance can lead to consent being granted under pressure, and applicants might feel compelled to agree to data processing for fear of jeopardising their chances of getting hired. A lack of transparency about how their data will be processed can create further pressure and anxiety for applicants. The consent provided for data processing can therefore be informed by fear and the risk of refusal, owing to the informational and power asymmetry between workers and employers.Footnote 44 Another strategy that companies can use to circumvent regulation on the processing of personal data, or to work around cases in which such data are inaccessible, is the creation of data profiles from ‘public’ data, such as social media profiles and other online traces like purchasing history or location data, without candidates’ consent.Footnote 45

For instance, Facebook ‘likes’ or social media activities can be screened by AI tools to infer sexual orientation and race.Footnote 46 Data collected from cookies saved when searching, e.g., for a wheelchair may result in the person being disqualified, and AI systems may apply such coding even to positions, such as remote programmer, for which being a wheelchair user is unlikely to affect performance.Footnote 47 Finally, there can be cases of ‘algorithmic black-balling’, where a job candidate’s history of rejections from past applications can be used by AI systems to sort them into the category of the permanently unemployable and, as such, mark them as unworthy of future jobs.Footnote 48

B. A Bigger Picture of Privacy

Data privacy breaches are not just about static, individual identifiers that are listed or inferred; they extend to data aggregates that are progressively generated, circulated and evaluated.Footnote 49 As mentioned above, ad hoc correlations, constructed out of factors like having a specific postcode or being a member of a specific market segment or profiling group that searches for wheelchairs, erase the individual, who is subsumed into a ‘shared identity token’. This identity token can be volatile, i.e., assembled and constrained by a third-party interest for a time or purpose, and, more importantly, it can impose a group meaning that is not reducible to, and does not represent or only imperfectly reflects, the self-understanding and identities of the individual members contained within the group. The scale, complexity and shared ownership of these identity groups might undermine efforts to protect privacy, conceived as the right to control data about oneself, and might interfere with data subjects’ capacity to shape and control their identity.Footnote 50

Moreover, this sheds light on the fact that protected characteristics, or proxies for protected characteristics, do not exhaust the categories of data that can be used to discriminate. Systems can find seemingly irrelevant correlations in data and can generate new groups of people that remain outside the scope of both data protection and anti-discrimination law: for instance, a Dutch insurance company was found to charge extra if customers live in apartments whose house number contains a letter, probably because historical data show a positive correlation between living at such an address and the chance of being in a car accident.Footnote 51

Predictions by algorithms based on correlations and inferences on shared identity tokens raise important questions that are not adequately addressed by adopting an individual perspective on potential human rights infringements. Even if companies do not use special categories of data,Footnote 52 there may be other grounds that may lead to privacy and equality infringements. Assessing those grounds may require looking at data protection in terms of group privacy that is not reducible to the privacy of individuals forming the group.Footnote 53 More importantly, it may require discussing the legitimacy of systems predictions and workers’ demands for justification and protection with reference to the cultural, social, professional and institutional context in which algorithmic HR decision-making is implemented.

IV. Right to Work

AHRM has a deep impact on data protection and discrimination: it jeopardizes workers’ ability to make sense of the data collected and used to recruit them, and it accentuates workplace inequalities when models rely on discriminatory proxies or reflect biased data in their decisions. These risks have received significant attention in scholarly debates as well as in policy and regulatory discussions, which have been struggling to understand the impact of AI on the quality of work in the coming years.Footnote 54 In August 2024, the EU’s AI Act, the first comprehensive regulation of AI and one expected to become a model for AI governance worldwide, came into force. The AI Act follows a risk-based approach and aims to ensure that AI systems placed on or used in the EU market are safe and respect fundamental rights. It covers AI across a broad range of sectors and, as mentioned previously, classifies AI systems used for the recruitment or selection of candidates as ‘high-risk’,Footnote 55 since they may perpetuate historical patterns of discrimination and undermine workers’ fundamental rights to data protection and privacy, thereby triggering additional protection requirements.Footnote 56

A. Beyond the Rights to Equality and Privacy in AHRM

At the end of 2020, trade unions affiliated with the Italian General Confederation of Labour (CGIL) took the food-delivery company Deliveroo to court in Bologna, arguing that Frank, the algorithm used by the company to assess parameters for booking work sessions, had obstructed and violated fundamental rights of workers, such as the right to strike. According to the Court of Bologna, Frank, based on two parameters (the workers’ reliability and participation), treated equally those who did not participate in a booked session for frivolous reasons and those who did not participate because they were striking (or because they were sick, had a disability or assisted a vulnerable person as caregivers).Footnote 57

The AI management system in the Deliveroo case left little room for taking workers’ needs and interests into consideration, or for exploring the reasons why workers had refrained from work, and it ended up neglecting context-sensitive and individual circumstances in its algorithmic decision-making. This case shows how management by algorithms can make it difficult for workers to create social bonds and collective power, and to form and join a trade union for the protection of their interests, as enshrined in the right to freedom of association contained in Article 11 of the European Convention on Human Rights (ECHR) and in the International Covenant on Civil and Political Rights (ICCPR).

Another case in point is facial scanning powered by AI and emotion recognition or sentiment analysis technologies. These AI tools can extract information from visual or audio data during job interviews and infer characteristics, personality traits and the potential hireability of job applicants.Footnote 58 A team of reporters from Bayerischer Rundfunk, a German public broadcasting company, performed several experiments with such a system and observed that the AI produced considerably different results in response to different outfits (e.g., wearing glasses or a scarf) and different settings (e.g., a bookshelf as a background). The reporters concluded that this might perpetuate stereotypes and cost candidates access to jobs.Footnote 59

Attempts to ‘read’ people’s reactions in this way, and the data collection and extraction of information from verbal and non-verbal behaviour to create a personal profile, might lead to discrimination against persons of certain racial or ethnic origins or with certain religious beliefs—and might deeply transform the way individuals experience and enjoy the freedom of expression as a human right, i.e., being allowed to speak and freely express one’s thoughts and opinions, as underlined by Article 10 of the ECHR and confirmed by Article 11 of the Charter of Fundamental Rights of the European Union.Footnote 60

In these examples, individuals are constrained by the knowledge that algorithms and data determine their access to work. What is at stake goes beyond the rights to equality and privacy and extends to the right to express themselves in ways that are integral to their identity.Footnote 61

B. For a Responsible Governance of AHRM Based on Ethical Safeguards

One of the first steps and challenges in the governance of AHRM is ensuring compliance with legal regulations. As already mentioned, the AI Act considers AI systems used in recruitment as high-risk. Accordingly, it imposes a self-assessment by providers of such systems, but many commentators contend that internal assessment procedures are insufficient and might easily become a rubber stamp or an empty formality.Footnote 62 Article 35 of the GDPR imposes an obligation to conduct data protection impact assessments (DPIAs) for high-risk systems, like those used in the recruitment pipeline, in order to identify the risks to the rights and freedoms of data subjects and potential ways to address and minimize them.Footnote 63

However, it is important to note that these legal frameworks and mechanisms should not be interpreted by employers as a mere ‘tick-box exercise’.Footnote 64 Self-assessments, DPIAs, debiasing techniques and bias audits, as already analysed in this paper, are important ex ante regulatory measures, but they are still developing techniques with significant limitations. An excessive focus on their technical implementation alone may come at the cost of insufficiently accounting for the social complexities, emerging values, and the justice requirements and needs of the diverse stakeholders involved in and impacted by their use.Footnote 65

C. Promise and Perils of Synthetic Data

The GDPR contains a prohibition on using certain types of especially sensitive data, called ‘special categories’ of data: personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership or sexual orientation, data on health, genetic data and biometric data. Processing such data is considered particularly sensitive because it might interfere with data subjects’ privacy, be used for unexpected or harmful purposes, and lead to discrimination or exclusion.Footnote 66 However, there are exceptions to this ban.Footnote 67 The recently adopted AI Act introduces an exception to enable AI debiasing and auditing, as these strategies, to be effective, require access to sensitive data to test and audit systems.Footnote 68

The exception in the AI Act only applies when organizations cannot use other data, such as anonymous or synthetic data. In particular, synthetic data used to enable AI debiasing are considered a technically appropriate safeguard by the AI Act.Footnote 69 Organizations can employ synthetic data when real data are unavailable, of poor quality or too high-risk to use for privacy, legal or other ethical reasons.Footnote 70 But what are synthetic data? Synthetic data are data that have been algorithmically generated by a model, rather than extracted or collected from real-world contexts, in order to train other models.Footnote 71 The purpose of synthetic data is to reproduce some structural and statistical properties of real data while overcoming the challenges and risks of real-world data when the latter are sensitive or biased.Footnote 72

Synthetic data generation can be a potential solution to access high-quality data for training AI models, and can address (a) privacy and (b) equality challenges of datasets and models. Indeed, synthetically generated data (a) can be used to train AI models when real data are sensitive and their use is restricted and protected by privacy regulations (e.g., GDPR), and (b) can address imbalanced training datasets, mitigating the risk of amplification of biases (e.g., historical biases on gender or race) and improving the diversity of training data.Footnote 73 In sum, synthetic data generation promises to support data sharing and strategies for data debiasing in contexts where data protection is required.
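
The underlying idea can be sketched in a few lines. The example below is purely illustrative: a very simple generative model (a multivariate Gaussian) is fitted to hypothetical applicant features and then sampled to produce synthetic records that preserve means and correlations. Real synthetic-data pipelines rely on much richer generators (e.g., copulas or deep generative models), so this is only a sketch of the principle.

```python
# Minimal, illustrative sketch of synthetic data generation: fit a simple
# generative model to hypothetical "real" applicant features, sample new
# records, and check that key statistical properties are preserved.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical real applicant features (years of experience, test score).
real = pd.DataFrame({
    "experience": rng.gamma(shape=2.0, scale=3.0, size=500),
    "test_score": rng.normal(70, 10, size=500),
})

# Fit: estimate the mean vector and covariance matrix from the real data.
mu = real.mean().to_numpy()
cov = np.cov(real.to_numpy(), rowvar=False)

# Sample: draw synthetic records from the fitted distribution.
synthetic = pd.DataFrame(
    rng.multivariate_normal(mu, cov, size=500), columns=real.columns
)

# Compare summary statistics of real and synthetic data.
print("real means:\n", real.mean().round(2))
print("synthetic means:\n", synthetic.mean().round(2))
print("feature correlation (real vs synthetic):",
      round(real.corr().iloc[0, 1], 2), round(synthetic.corr().iloc[0, 1], 2))
```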

However, the generation and use of synthetic data is a novel and understudied topic and, while promising in HRM, it still poses data protection and data debiasing challenges. Studies have demonstrated that synthetic data often build on real datasets and thereby still pose information disclosure risks that could lead to ‘re-identification’: notwithstanding anonymisation techniques, it may still be possible to identify individuals and reveal possibly sensitive information, transforming non-personal data back into personal information. This creates legal uncertainty and practical difficulties for the generation and processing of synthetic data.Footnote 74
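
One simple way in which this disclosure risk is sometimes assessed is by measuring how close synthetic records come to the real records from which they were derived. The sketch below, again with toy data rather than anything drawn from the studies cited here, computes the distance from each synthetic record to its closest real record; very small distances signal a potential re-identification risk.

```python
# Minimal sketch of a simple disclosure-risk check for synthetic data:
# the distance from each synthetic record to its closest real record.
# Synthetic rows sitting (near-)exactly on real rows suggest re-identification
# risk. All numbers are toy values.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
real = rng.normal([10.0, 70.0], [3.0, 10.0], size=(500, 2))       # toy real records
synthetic = rng.normal([10.0, 70.0], [3.0, 10.0], size=(500, 2))  # toy synthetic records

scaler = StandardScaler().fit(real)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(real))

# Distance from every synthetic record to its nearest real record (standardized).
distances, _ = nn.kneighbors(scaler.transform(synthetic))
print("median distance to closest real record:", float(np.median(distances)))
print("share of synthetic records within 0.05 of a real record:",
      float((distances < 0.05).mean()))
```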

In recruitment, synthetic data have been studied for bias mitigation purposes, with the aim of covering properties that are not sufficiently represented in real training datasets, a point that is especially relevant for avoiding discriminatory outcomes.Footnote 75 However, if the mitigation of bias during the synthetic data generation process is not deployed responsibly, the use of synthetic data can also lead to the amplification and propagation of existing bias. For example, the potential of synthetic datasets of resumes to train models is strictly dependent on the well-curated collection and processing of high-quality resume data that does not under-represent groups that are historically discriminated against, and on the introduction of as much variability as possible in the generation of synthetic resumes in order to achieve diversity while preserving privacy.Footnote 76 Synthetic datasets may also overgeneralize their results, losing detail and information, or overfit, fitting models too tightly to the available data and introducing errors, thereby worsening the very issue of bias they are meant to address.Footnote 77

It is important to note that the adoption of a technical safeguard like synthetic data, notwithstanding its potential as a privacy- and equality-preserving strategy, does not structurally remove the need to use sensitive data: such data remain essential for the development and validation of bias detection models and must still be collected to create synthetic datasets.Footnote 78

D. Workers’ Voice

The challenge of ensuring that the use of AHRM is consistent with workers’ human rights might call for other, more suitable regulatory instruments, such as collective bargaining.Footnote 79 As in the case of Deliveroo in Bologna, collective bargaining and trade union action can offer an effective way to start negotiations about workers’ interests and a whole range of other conditions in algorithmic management, like transparency requirements for companies regarding workers’ data use, storage and management.Footnote 80 This can also help in the development of fair data processing in the context of AHRM, and of appropriate safeguards ensuring that workers receive meaningful information on how employers use their data and on the logic involved in the algorithmic decision-making that uses and processes those data.Footnote 81

One relevant and ground-breaking advancement in this direction came in April 2024, when the EU’s institutions reached an agreement to adopt the European Directive on Platform Work, which aims to regulate, for the first time in the EU, the use of algorithms in the workplace.Footnote 82 Particularly relevant to our discussion is the third chapter of the draft Directive, which concerns algorithmic management. It affirms that workers will receive information from platforms about the algorithmic systems used to hire them (e.g., about the actions and parameters that such systems implement and use for their automated decisions) and that they will be able to request and obtain an explanation of, and request a human review of, relevant decisions made through algorithms that affect their working conditions and might infringe on their rights.Footnote 83 Moreover, the Directive includes a provision on consultation, according to which platforms will be obliged to inform and consult workers’ representatives and platform workers themselves on algorithmic management decisions.Footnote 84 The Directive therefore introduces the role of collective actors and collective rights, in the form of information and consultation duties, in the use of AHRM.

Notwithstanding its relevant proposals, the Directive could have been strengthened by introducing an obligation of collective bargaining, or veto powers for workers’ representatives, over employers’ introduction of algorithmic management tools.Footnote 85 The AI Act, while it provides specific requirements of transparency and self-assessment for certain AI systems, does not provide safeguards for the working conditions of people directly affected by the use of AI.Footnote 86 Likewise, the GDPR, while it grants workers rights regarding the processing of their personal data, does not encompass important collective aspects inherent in labour law, including the role of workers’ representatives and the information and consultation of workers.Footnote 87

The shortcomings of the current proposals stem from their ‘techno-deterministic’ and technocentric approach, which ignores the role of social partners in regulating the introduction and operation of AHRM and treats these tools as a given rather than as subject to open-ended negotiation and social dialogue.Footnote 88 Instead, measures like collective bargaining and transparency requirements can be considered ways to protect and strengthen workers’ voice: their ability to have a meaningful influence on how work and livelihoods are arranged and to raise concerns within an organization, which can in turn improve management practices and policies.Footnote 89

However, ex post trade union action, as in the Deliveroo case in Bologna, is not sufficient to grant and secure workers’ fundamental rights. To foster social dialogue, it is fundamental to implement ex ante strategies that can ensure that systems align with workers’ interests, values and needs.Footnote 90 Beyond recourse to technical solutions and adherence to legal regulations, the challenge of a novel, algorithmic form of HRM requires organizational and behavioural solutions, which can complement respect for data protection and equality with methods for stakeholder engagement.Footnote 91

The integration of AHRM into organizations should also entail the identification of precise and detailed decision-making processes for taking workers’ rights into account in HR processes: appointing departments and/or individuals, whether internal or independent of companies (e.g., external watchdogs such as auditors, NGOs, certifying entities or trusted third parties (TTPs)Footnote 92), to be in charge of monitoring and evaluating systems, and establishing dedicated procedures for reporting complaints and concerns and for receiving explanations, tailored responses and/or compensation.Footnote 93 In addition, the introduction of AHRM opens up the need for specialized training for workers, or for the creation of new organizational roles such as that of the ‘HR Algorithm Auditor’, whose role would be to ‘rehumanize’ management decisions and ensure the transparency and reliability of algorithms, especially in cases where decisions about employees’ futures are taken.Footnote 94

Studies have demonstrated that AHRM can provide value and foster autonomy for workers through forms of ‘algoactivism’: novel organizational and behavioural tactics that workers deploy to monitor their workspaces and prevent employers from gaining excessive managerial control over their labour and labour processes.Footnote 95 Algoactivism tactics can be realized through non-cooperation and data obfuscation, e.g., ignoring algorithmic recommendations or finding ways to understand how AHRM tools classify and assign tasks or parameters, and through bypassing algorithmic decisions by adopting behavioural patterns that prevent workers from being assigned undesirable or unsuitable tasks and decisions.Footnote 96

But beyond being a means of resistance, AHRM can also be an organizational and proactive resource that benefits workers. It can maximize workers’ capabilities to save time and resources and can allow purposeful forms of re-design and co-creation, which can gradually change and shift existing norms and rules embedded in traditional HRM practices towards organizational practices that reinforce workers’ autonomy.Footnote 97

V. Conclusions

Given the transformative impact of AHRM, organizations must navigate the balance between the challenges and advantages of this novel management practice. Hiring the right (that is, skilled, experienced and well-trained) people is vital for an organization’s success and competitive role in the market.Footnote 98 However, as discussed in this paper, the introduction of algorithms into the recruitment process opens a range of questions and concerns related to the protection of workers’ rights to equality, privacy and work. We make the case that we need to go beyond analysing the utility gained, in terms of economic productivity and business accuracy, and beyond questions of mere technical and legal compliance.

An excessive technocentric and legal emphasis on the strategies and safeguards to adopt against the risks of AHRM can obfuscate the fact that equality, privacy and work infringements cannot be adequately addressed without reference to the context in which algorithmic HR decision-making takes place. This context involves different stakeholders (e.g., policymakers, organizations, HR, workers and workers’ representatives) and their respective values, organizational and institutional changes and different sociotechnical challenges and dynamics.

Yet, there is no principled and effective best practice to ensure that organizations holistically address the challenge of mitigating algorithmic discrimination and privacy violations in the workplace and develop frameworks for responsible AI and data governance to protect workers’ rights. In the case of the right to equality, we discussed how debiasing methods must be integrated with considerations about the complexity, context-sensitive nature and socio-cultural relevance of data and discrimination. In the same way, for the right to privacy, we analysed how the introduction of algorithmic decision-making in workplace management and evaluation opens up the need to adopt a more holistic conception of privacy, which might allow us to reconsider how individual and group identities are understood as and through data and context.

However, the right to equality and the right to privacy are not the only human rights at risk, as AHRM can impact the ability of workers to access other fundamental rights and make sense of their individual and collective workplace practices. This collective dimension of human rights is still an understudied topic in the literature on the impact of AI on the quality of work.Footnote 99 Beyond the recourse to technical solutions and the adherence to legal regulations, we argue that the challenge of a novel and algorithmic form of HRM requires developing data ecosystems and collective organizational and behavioural solutions that can complement data protection and equality with methods for stakeholder engagement and accountability governance structures. The responsible governance of AHRM requires appropriate ethical safeguards driven by fundamental rights, but also ways to translate these rights into ethical AI practices that are contextually and institutionally grounded, and into work environments and organizational cultures that are responsive to the values of workers as these emerge across various sectors and diverse and real-world scenarios.

Data availability statement

Data sharing does not apply to this article as no datasets were generated or analysed during the current study.

Contributions

Marianna Capasso and Payal Arora developed the concept of the article, and Marianna Capasso took the lead in writing it. Deepshikha Sharma contributed to the section on the right to privacy, and Celeste Tacconi contributed to the section on the right to equality. Payal Arora has performed multiple integrations and revisions of the draft.

Financial support

The research of Marianna Capasso and Payal Arora reported in this work is part of the FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) project that received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No 101070212. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

Competing interest

The authors declare no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

References

1 World Economic Forum, ‘Human-Centred Artificial Intelligence for Human Resources: A Toolkit for Human Resources Professionals’, https://www3.weforum.org/docs/WEF_Human_Centred_Artificial_Intelligence_for_Human_Resources_2021.pdf (accessed 25 July 2024) 3.

2 Sivathanu, Brijesh and Pillai, Rajasshrie, ‘Smart HR 4.0 – How Industry 4.0 Is Disrupting HR’ (2018) 26:4 Human Resource Management International Digest 7–11.

3 Meijerink, Jeroen et al, ‘Algorithmic Human Resource Management: Synthesising Developments and Cross-disciplinary Insights on Digital HR’ (2021) 32:12 The International Journal of Human Resource Management 2547. As a relatively new phenomenon, AHRM has not yet been unanimously defined, but, generally speaking, it is defined as involving partial or full automation of managerial decision-making across organizational functions achieved through machine-learning algorithms. Scholars also distinguish augmented HR decision-making from automated HR decision-making: the former revolves around descriptive, diagnostic and predictive analytics with managers in the decision loop and utilizes ‘algorithmic AI’ (machine learning, deep learning and big data) that makes decisions from existing data. For a discussion on this point see Cameron, Roslyn, Herrmann, Heinz and Nankervis, Alan, ‘Mapping the Evolution of Algorithmic HRM (AHRM): A Multidisciplinary Synthesis’ (2024) 11:303 Humanities and Social Sciences Communications.

4 Palos-Sánchez, Pedro R et al, ‘Artificial Intelligence and Human Resources Management: A Bibliometric Analysis’ (2022) 36:1 Applied Artificial Intelligence 128.

5 For an overview see Alessandro Fabris et al, ‘Fairness and Bias in Algorithmic Hiring: A Multidisciplinary Survey’ (2024) ACM Trans. Intell. Syst. Technol.

6 Köchling, Alina and Wehner, Marius C, ‘Discriminated by an Algorithm: A Systematic Review of Discrimination and Fairness by Algorithmic Decision-making in the Context of HR Recruitment and HR Development’ (2020) 13 Bus Res 795–848.

7 Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’, Reuters (11 October 2018).

8 Rodgers, Waymond et al, ‘An Artificial Intelligence Algorithmic Approach to Ethical Decision-making in Human Resource Management Processes’ (2023) 33:1 Human Resource Management Review 100925.

9 Abraha, Halefom, ‘Regulating Algorithmic Employment Decisions Through Data Protection Law’ (2023) 14:2 European Labour Law Journal 172–91.

10 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance, Annex III, 4(a).

11 Ibid, Recital 53.

12 Boxall, Peter, Purcell, John and Wright, Patrick M, ‘Human Resource Management: Scope, Analysis, and Significance’ in Boxall, Peter, Purcell, John, and Wright, Patrick M (eds.), The Oxford Handbook of Human Resource Management (Oxford: Oxford University Press, 2008) 1.

13 De Stefano, Valerio and Wouters, Mathias, AI and Digital Tools in Workplace Management and Evaluation: An Assessment of the EU’s Legal Framework, Report for STOA | Panel for the Future of Science and Technology (Brussels: European Parliament, 2022) 6.

14 Hays-Thomas, Rosemary, ‘Diversity, Equity, Inclusion, and the Law’ in Hays-Thomas, Rosemary (ed.), Managing Workplace Diversity, Equity, and Inclusion (New York-London: Routledge, 2022).

15 The Charter of Fundamental Rights of the European Union (CFREU), art 21 - Non-discrimination. With the term ‘special categories of data’ we refer here to the General Data Protection Regulation (GDPR) art 9. On the overlap between protected characteristics according to the EU Directives and special categories of data under GDPR see van Bekkum, Marvin and Borgesius, Frederik Z, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770.

16 United Nations High Commissioner for Human Rights (OHCHR), ‘The Right to Privacy in the Digital Age’ A/HRC/48/31, para 4; 38 (2021).

17 Johann D Gaebler et al, ‘Auditing the Use of Language Models to Guide Hiring Decisions’ (2024) arXiv 2404.03086v1.

18 Dastin, note 7.

19 Borgesius, Frederik Zuiderveen, Discrimination, Artificial Intelligence, and Algorithmic Decision-making (Brussels: Council of Europe, 2018) 10–14.

20 Ibid.

21 Prince, Anya and Schwarcz, Daniel, ‘Proxy Discrimination in the Age of Artificial Intelligence and Big Data’ (2020) 105 Iowa Law Review 1257.

22 Barocas, Solon and Selbst, Andrew D, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.

23 Kim, Pauline T, ‘Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action’ (2022) 110 California Law Review 1539–96.

24 European Council Directive 2000/43/EC (EC), art 2(2a).

25 But an organization is considered liable even when it does not realize it is engaging in indirect discrimination. However, for cases of indirect discrimination, there is an open-ended exception related to objective justification. This discussion is outside the scope of the paper, but on these two points and, in general, on the discriminatory effects of AI systems in the context of EU non-discrimination law, see Bekkum, note 15.

26 Prince, note 21.

27 Kim, Pauline T, ‘Data-driven Discrimination at Work’ (2017) 3 William and Mary Law Review 865.

28 d’Alessandro, Brian, O’Neil, Cathy, and LaGatta, Tom, ‘Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification’ (2017) 5:2 Big Data 120–34.

29 Dastin, note 7.

30 Peters, Uwe, ‘Algorithmic Political Bias in Artificial Intelligence Systems’ (2022) 35 Philosophy & Technology 25.

31 Fabris, note 5.

32 Jie L Lin, Tracing Bias Transfer Between Employment Discrimination and Algorithmic Hiring with Migrant Tech Workers in Berlin (Berlin: FINDHR Expert Reports, 2023).

33 Balayan, Agathe and Gürses, Seda, ‘Beyond Debiasing: Regulating AI and Its Inequalities’ (Delft: Delft University of Technology, 2020) 22.

34 Köchling, note 6.

35 Zhisheng, Chen, ‘Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices’ (2023) 10:1 Palgrave Communications 112.

36 Balayan, note 33.

37 On this point, see para 4.

38 Fabris, note 5, 23–4. On this point, see para 4.

39 Ruberg, Bonnie and Ruelos, Spencer, ‘Data for Queer Lives: How LGBTQ Gender and Sexuality Identities Challenge Norms of Demographics’ (2020) 7:1 Big Data & Society.

40 Diggelmann, Oliver and Cleis, Marie N, ‘How the Right to Privacy Became a Human Right’ (2014) 14:3 Human Rights Law Review 441–58.

41 Ifeoma Ajunwa, ‘The “Black Box” at Work’ (2020) 7:2 Big Data and Society. There are different conceptions of privacy, but in human rights law, the right to privacy has been recognized in the Universal Declaration of Human Rights (UDHR) art 12 and International Covenant on Civil and Political Rights (ICCPR) art 17 and, within Europe, in European Union Charter of Fundamental Rights (EU) art 7 and European Convention on Human Rights (ECHR) art 8. As an extension to the right to privacy, ‘data protection’ was developed, which refers to protection from infringements on people’s privacy rights, and the European Union introduced rules specific to data protection in the GDPR, which has been regarded as one of the most influential data protection legislations worldwide. On the right to privacy and data protection in the age of AI see Zornetta, Alessia and Cofone, Ignacio, ‘Artificial Intelligence and the Right to Privacy’, in Temperman, Jeroen and Quintavalla, Alberto (eds.), Artificial Intelligence and Human Rights (Oxford: Oxford University Press, 2023) 121–35.

42 This is the definition of personal data given in GDPR, art 4.

43 GDPR, art 5. On explicit consent to the processing of personal data, see GDPR, art. 9, para 2 (a).

44 On this point see Kullman, Miriam, ‘Discriminating Job Applicants Through Algorithmic Decision-Making’ (2019) 68:1 Ars Aequi 45–53; Hunkenschroer, Anna L and Luetge, Christoph, ‘Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda’ (2022) 178:4 Journal of Business Ethics 977–1007.

45 Yam, Josephine and Skorburg, Joshua A, ‘From Human Resources to Human Rights: Impact Assessments for Hiring Algorithms’ (2021) 23:4 Ethics and Information Technology 611–23.

46 Ben Dattner et al, ‘The Legal and Ethical Implications of Using AI in Hiring,’ Harvard Business Review (25 April 2019).

47 Tilmes, Nicholas, ‘Disability, Fairness, and Algorithmic Bias in AI Recruitment’ (2022) 24:2 Ethics and Information Technology.

48 Ajunwa, note 41.

49 Gstrein, Oskar J and Beaulieu, Anne, ‘How to Protect Privacy in a Datafied Society? A Presentation of Multiple Legal and Conceptual Approaches’ (2022) 35:1 Philosophy & Technology 35.

50 On shared ‘behavioural identity tokens’, see Mittelstadt, Brent, ‘From Individual to Group Privacy in Big Data Analytics’ (2017) 30:4 Philosophy & Technology 475–94.

51 Gerards, Janneke and Borgesius, Frederik Zuiderveen, ‘Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence’ (2022) 20:1 Colorado Technology Law Journal 155.

52 The GDPR explicitly contains a prohibition to use special categories of data. See GDPR (EU), art 9(1). For a detailed analysis on this point, see para 3.2.

53 On how data protection in data-driven societies involves and must address not just the individual but more fundamentally the group level, see Taylor, Linnet, Floridi, Luciano and van der Sloot, Bart (eds.), Group Privacy: New Challenges of Data Technologies (Cham: Springer International Publishing, 2017).

54 Dixson-Declève, Sandrine et al, ‘Industry 5.0 and the Future of Work: Making Europe the Centre of Gravity for Future Good-Quality Jobs’ (Brussels: Publications Office of the European Union, 2023).

55 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance, Annex III, 4(a).

56 Ibid, Recital 53.

57 Filcams CGIL Bologna, Nidil CGIL Bologna, and Filt CGIL Bologna v Deliveroo Italia S.R.L., Trib. Bologna, Ord. no 2949/2019 (2020). For a commentary on the case, see Borzaga, Matteo and Mazzetti, Michele, ‘Discriminazioni algoritmiche e tutela dei lavoratori: riflessioni a partire dall’Ordinanza del Tribunale di Bologna del 31 dicembre 2020’ (2022) 1 BioLaw Journal 225–50.

58 Angela Chen and Karen Hao, ‘Emotion AI researchers say overblown claims give their work a bad name’, MIT Technology Review (14 February 2020); Drew Harwell, ‘A face-scanning algorithm increasingly decides whether you deserve the job’, The Washington Post (6 November 2019), https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/ (accessed 26 June 2024).

59 About the experiment, see BR24, ‘Objective or Biased: On the Questionable Use of Artificial Intelligence for Job Applications’ BR24 (n.d.), https://interaktiv.br.de/ki-bewerbung/en/ (accessed 26 June 2024).

60 The right to freedom of expression has been predominantly analysed in the context of online platforms and social media in the human rights literature so far, see De Gregorio, Giovanni and Dunn, Pietro, ‘Artificial Intelligence and Freedom of Expression’ in Temperman, Jeroen and Quintavalla, Alberto (eds.), Artificial Intelligence and Human Rights (Oxford: Oxford University Press, 2023) 76–90.

61 The point that AHRM can pose a significant risk to human rights at work beyond the threats to the rights to equality and privacy—that have been so far the dominant focus of the human rights literature on this topic—has also been recently recognized by Atkinson, Joe and Collins, Philippa, ‘Artificial Intelligence and Human Rights at Work’ in Temperman, Jeroen and Quintavalla, Alberto (eds.), Artificial Intelligence and Human Rights (Oxford: Oxford University Press, 2023) 371–85.

62 For a discussion on this point see Stefano, note 13, 54.

63 GDPR, art 35 (3) (a).

64 Atkinson, note 61, 383.

65 Balayan, note 33.

66 GDPR, art 9(1).

67 GDPR, art 9(2).

68 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance. Chapter III, sec II, art 10, 5. On the challenges that may rise with this new exception to the GDPR, see Bekkum, note 15.

69 Ibid, Chapter III, sec II, art 10, 5 (a).

70 James Jordon et al, ‘Synthetic Data—What, Why and How?’ (2022) arXiv:2205.03257.

71 De Wilde, Philippe et al, ‘Recommendations on the Use of Synthetic Data to Train AI Models’ (Tokyo: United Nations University, 2024).

72 Jordon, note 70.

73 De Wilde, note 71.

74 Beduschi, Ana, ‘Synthetic Data Protection: Towards a Paradigm Change in Data Regulation?’ (2024) 11:1 Big Data & Society.

75 Sarah-Jane van Els, David Graus and Emma Beauxis-Aussalet, ‘Improving Fairness Assessments with Synthetic Data: A Practical Use Case with a Recommender System for Human Resources’ (2024) CompJobs ’22: The First International Workshop on Computational Jobs Marketplace.

76 Jorge Saldivar, Alessandro Fabris and Carlos Castillo, ‘Towards a Synthetic Dataset for Anti-discrimination Algorithmic Hiring’, AIMMES 2024 Workshop on AI Bias: Measurements, Mitigation, Explanation Strategies on 20 March 2024 (Amsterdam).

77 Offenhuber, Dietmar, ‘Shapes and Frictions of Synthetic Data’ (2024) 11:2 Big Data & Society.

78 Sebastian Berendsen and Emma Beauxis-Aussalet, ‘Fairness Vs Privacy. Sensitive Data is needed for bias detection’ (14 March 2024), https://www.eur.nl/en/news/fairness-versus-privacy-sensitive-data-needed-bias-detection (accessed 26 June 2024).

79 The point of collective bargaining and agreement in the context of algorithmic management is sustained and discussed in detail in De Stefano, Valerio and Taes, Simon, ‘Algorithmic management and collective bargaining’ (2023) 29:1 Transfer: European Review of Labour and Research 21–36.

80 Ibid, 28.

81 GDPR art. 22 and certain provisions of GDPR art. 13–15 provide rights to meaningful information about the logic involved in automated decisions. See Selbst, Andrew D and Powles, Julia, ‘Meaningful Information and the Right to Explanation’ (2017) 7:4 International Data Privacy Law 233–42.

82 Arianne Sikken, ‘Parliament adopts Platform Work Directive’, Press Release European Parliament (24 April 2024) https://www.europarl.europa.eu/news/en/press-room/20240419IPR20584/parliament-adopts-platform-work-directive (accessed 18 October 2024).

83 Proposal for a Directive of the European Parliament and of the Council COM/2021/762 final (EU & EC), 16–17; Chap. III, 35.

84 Ibid, Chapter III, art 9, 38.

85 Valerio De Stefano, ‘It Takes Three to Tango in the EU: The New European Directive on Platform Work’ The Law of Work (13 March 2024) https://lawofwork.ca/it-takes-three-to-tango-in-the-eu-the-new-european-directive-on-platform-work/ (accessed 18 October 2024).

86 Proposal for a Directive, note 83, 7–8.

87 Ibid. Even if the relevance of collective agreements in the governance of algorithmic decision-making is recognized in art 88 of the GDPR. See on this point De Stefano and Taes, note 79.

88 De Stefano, Valerio, ‘The EU Commission’s proposal for a Directive on Platform Work: an overview’ (2022) 1 Italian Labour Law e-Journal 15.

89 On the ideal of voice in the workplace, see Hirschman, Albert O, Exit, Voice and Loyalty: Responses to Decline in Firms, Organisations and States (Cambridge: Harvard University Press, 1970) 30. See also Collins, Philippa and Atkinson, Joe, ‘Worker voice and algorithmic management in post-Brexit Britain’ (2023) 29:37 Transfer: European Review of Labour and Research.

90 On co-determination see Stefano, note 79; on design strategies see de Sio, Filippo Santoni, Human Freedom in the Age of AI (London and New York: Routledge, 2024), Chapter 4, in particular Social/Participatory Design at the Workplace, 219.

91 This is what Loi calls ‘GDPR+’, meaning going beyond the requirements of the GDPR to ensure an ethical approach to HR analytics; see Loi, Michele, ‘People Analytics Must Benefit the People: An Ethical Analysis of Data-driven Algorithmic Systems in Human Resource Management’ (Berlin: AW AlgorithmWatch gGmbH, 2020).

92 On external watchdogs see Ibid, 29; and on ex ante assessment by third parties instead of a self-assessment see De Stefano, note 13, 54.

93 Stiller, Sebastian, Jäger, Jule and Gießler, Sebastian, ‘Automated Decisions and Artificial Intelligence in Human Resource Management’ (Berlin: AW AlgorithmWatch gGmbH, 2021) 8.

94 Sienkiewicz, Łukasz, ‘Algorithmic Human Resource Management – Perspective and Challenges’ (2021) 55:2 Annales Universitatis Mariae Curie-Skłodowska.

95 Kellogg, Katherine C, Valentine, Melissa A and Christin, Angèle, ‘Algorithms at Work: The New Contested Terrain of Control’ (2020) 14 Academy of Management Annals 366–410.

96 Meijerink, Jeroen and Bondarouk, Tanya, ‘The Duality of Algorithmic Management: Toward a Research Agenda on HRM Algorithms, Autonomy and Value Creation’ (2023) 33:1 Human Resource Management Review. See also Bronowicka, Joanna and Ivanova, Mirela, ‘Resisting the Algorithmic Boss: Guessing, Gaming, Reframing and Contesting Rules in App-Based Management’, in Moore, Phoebe and Woodcock, Jamie (eds.), Augmented Exploitation: Artificial Intelligence, Automation, and Work (London: Pluto Press, 2021) 149–61.

97 Ibid.

98 Chang, Kirk et al, ‘Digitalisation of Personnel Recruitment and Selection’ in Toyin A Adisa (ed.), HRM 5.0 (Cham: Palgrave Macmillan, 2024) 87–111.

99 De Stefano, note 79.