10.1 Introduction: Remote Work between Digital and Algorithmic Law
Remote work is digital. Although traditional home-based work appeared well before the spread of digital technology, remote work as we perceive it in the twenty-first century developed in this technological context. It was digital technology that allowed remote work to develop and to become a widely prevalent form of employment. Digital technologies are extensively used not only to carry out the work but also to control it, making the teleworker an especially monitored and supervised employee. The teleworker is a unique and distinctive employee, primarily because of their high dependence on technology.
More than this: remote work performed through electronic devices, a category known as telework in Spanish law and other legal systems, is the quintessential manifestation of digital work, one whose very existence depends directly on the availability of these devices. It is technology that distinguishes it both from other current forms of employment and from the home-based work of past eras.
Remote work as we know it today became widespread in the twenty-first century due to technical developments, changes in the workforce, and the impact of the COVID-19 pandemic. In these early decades of the century, it is coexisting with another phenomenon that is shaping and characterizing social and economic evolution: the development of artificial intelligence (AI) systems, based on the use of algorithms for decision-making and the generation of texts, images, and content.
The Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (the so-called Artificial Intelligence Act)Footnote 1 defines “artificial intelligence system” as any “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” These programs can be embedded in companies’ decision-making processes (for example, to decide on hiring, dismissal, or rewards), or they can be used in the development of the company’s own activities, being made available to its staff. In either case, they are already widespread and are affecting the way people approach their work experience.
Remote work and AI have emerged simultaneously as two current manifestations of changes driven by rapid scientific advancements. Although they operate on different planes – remote work being a “physical” phenomenon as it pertains to the manner of conducting activities, and AI being “virtual” as it involves cutting-edge software – there are significant areas of intersection. Algorithmic management systems undoubtedly influence both the execution of telework and its prevalence in the market, potentially promoting and incentivizing it. In this context, and using a concept prevalent in AI studies, this form of employment is “highly exposed to AI,” with substantial changes anticipated as these systems evolve and proliferate.
The AI revolution is being accompanied by the development of regulations aimed at organizing both its development and use, according to standards set by states and based on perceived or confirmed risks (Eurofound, 2023). The goal, as noted by the European Commission, is the ethical development of AI, which has led to the involvement of national and supranational legislators. Or, as the G7 Labour and Employment Ministers recently declared, a “human-centred approach in seizing the opportunities and addressing the risks to the world of work.”Footnote 2 These regulations will also apply, as appropriate, to remote work, to the extent that such systems are used in it, which is foreseeable.
The starting hypothesis of this chapter is that a new branch is emerging in contemporary legal systems: Artificial Intelligence Law. This branch has a clear labor area, as these systems impact both work and the effectiveness of the regulations designed to govern it according to standards of rights, dignity, and justice. It has emerged as a specialization of Digital Labor Law, a spin-off, so to speak, sharing a common genetic code with it.
Remote work, when incorporating AI systems in its execution, is governed by this set of regulations, which are superimposed on the specific rules pertaining to this form of employment within digital labor law.
10.2 Remote Work under Digital Law
As previously mentioned, remote work originated in a pre-electronic context, developed as analog work, and became widespread as digital work. The current regulations governing remote work are a central component of digital labor law, representing its most developed area and encompassing all relevant aspects of this form of employment. However, this has not always been the case.
We can identify three generations in the legal regulation of remote work. Initially, outdated regulations on home-based work were generally applied. These regulations were designed in a completely different context and were intended more for domestic workshops than for services. The clear shortcomings of these solutions, which were not only obsolete but also brief and insufficient, required corrections and supplementation. This led to the emergence of primitive forms of collective regulation, such as the telework agreements signed in some companies to initiate the first experiences with this form of employment. Telework agreements became common in firms conducting pilot projects with this technical innovation, often being the only regulation available.
The generalization of this employment form rendered the situation both unsatisfactory and unsustainable, prompting the emergence of a second generation of regulations. At the EU level, the Framework Agreement on Telework (2002) played a pivotal role in both promoting this shift in the regulatory model and in disseminating a regulatory content model that national laws gradually adoptedFootnote 3. The legislation of this generation had a clear objective of protecting teleworkers and adopted a “contractual” approach, focusing on the balance of benefits between the parties affected by the fact that work is carried out away from the employer’s premises on a regular basis using information technology (following the Framework Agreement’s definition of telework).
Second-generation legislation emphasizes the voluntary nature of telework, guarantees the right to return to traditional employment, and ensures a general right to equal treatment. These regulations primarily focus on protecting privacy and personal data, along with fundamental issues related to work equipment, liability, and costs, rather than extensively addressing technical matters. Typically, this legislation is presented as a monographic regulation, which is not overly extensive, often found within a specific piece of legislation or, more commonly, in a dedicated article or section. In the Spanish context, this regulatory phase, although somewhat delayed, aligns with the drafting of Article 13 of the Revised Text of the Workers’ Statute, introduced by the 2012 labor law reform.
Later developments have radically transformed the legal approach to remote work. This shift is the result of a confluence of various factors, leading to a growing number of remote workers. Third-generation laws are significantly more comprehensive than their predecessors, typically taking the form of extensive, monographic pieces of legislation, often as stand-alone bills outside general labor codes or employment contract legislation. In terms of content, they build upon previous generation laws, ensuring contractual protections in areas such as voluntariness, reversibility, equal treatment, and compensation for expenses. Beyond these, legislative authorities have introduced a range of additional provisions, rights, and guarantees.
Substantial improvements are evident in two key areas: the recognition of rights for remote workers, which now extends well beyond the general principle of equal treatment found in previous regulations; and the attention given to technological issues, including surveillance devices, computers, communication tools, software, IT security, and BYOD (Bring Your Own Device).
In general terms, we can affirm that remote work is currently a highly regulated reality, with regulations increasingly oriented towards the protection of rights and a high degree of sophistication in addressing technical aspects. This fact is particularly relevant when dealing with AI, as it is generally easier to construct a normative framework for AI by building on existing regulations, which can then be adapted to meet the demands imposed by this technological innovationFootnote 4. Regulating the impact of AI on remote work will involve identifying new requirements and assessing the effectiveness of existing rules, a task that, while not easy, is certainly more feasible than developing an entirely new set of regulations.
In the current landscape of digital law, there is a second element that can facilitate matters. In most cases, remote or telework is presented as a distinct form of employment, differentiated from other forms of dependent work, and recognized as an autonomous category with its own designation.
Related to this, third-generation laws tend to establish a comprehensive statute for these workers, addressing most aspects of their employment relationship. This approach also facilitates the introduction of new, specific legal solutions to potential issues arising from the impact of AI systems. These solutions will apply exclusively to remote workers tout court, those for whom remote working is so integral that it defines their status within the legal framework.
The third generation of remote working regulations has emerged in several European countries. According to a study commissioned by the European Union, Austria, Spain, Greece, Latvia, Portugal, Romania, and Slovakia have adopted new legislation since the beginning of the pandemic.Footnote 5
However, this development could extend across the entire European Union, as the European Commission has expressed interest in developing a standard on this matter. In fact, it launched the first-stage consultation of the European social partners, in accordance with Article 154(2) of the Treaty on the Functioning of the European Union, to gather their views on the possible direction of EU action on ensuring fair telework and the right to disconnectFootnote 6, following the European Parliament resolution of January 21, 2021, with recommendations to the Commission on the right to disconnectFootnote 7. This initiative also involves the European cross-industry social partners, who have initiated negotiations to update their 2002 Framework Agreement on telework. Additionally, the Council conclusions on telework, approved by the Council of the European Union (EPSCO) at its meeting of June 14, 2021, support this initiativeFootnote 8. With this backing, it is highly likely that a directive on this matter will be enacted in due course, triggering a wave of reforms in the national labor laws of member states.
10.3 The Construction of AI Law
The starting point of this work is the existence of AI law, with a branch that addresses both the impact of these systems on human resources management and the consequences of their generalization in work environments, as well as their use by competent public authorities in the labor market. This is an emerging body of law that builds on digital law, differentiates itself as a spin-off, and gives rise to second-generation digital rights. Despite the recent nature of this phenomenon, it has already yielded significant resultsFootnote 9. Not only have some far-reaching regulations been adopted, such as the European Data Protection Regulation (insofar as it addresses automated decisions) and notably the European Regulation on Artificial Intelligence, but a substantial array of potentially effective solutions is already available to tackle the issues that the proliferation of such systems in various fields may generate. The rules comprising this body of law apply a toolbox of original concepts and solutions, including transparency, accountability, human review, assessment of the potential for discrimination and the degree of exposure, and risk classificationFootnote 10.
This is because this framework has been largely preemptive, initiated when concerns about the potential risks associated with AI first arose, before its widespread use across all domains. This is not to say that AI is not already in use, but rather that, unlike previous technological changes, solutions have been designed in advance, based on reasonable forecasts and expectations, rather than waiting for extensive use to generate problems. The differences observed from this perspective, compared to the regulation of work on digital platforms (to cite just one example), are notable. Regulation of digital platforms was only considered when this reality became a pressing issue in the labor market due to the erosion of labor rights it entailed. The perception is that action was taken reactively, rather than proactively, and that it was delayed until the damage had already been done, leaving platform workers to suffer an unacceptable level of precariousness.
Algorithmic law has not emerged as a “Law of Judges,” unlike digital labor law. Instead, it is an academic and technocratic creation. Developed by experts from various disciplines who recognized its potential impacts, algorithmic law aims to anticipate these effects and facilitate the transition to a new technological environment that respects individuals and their rights. It is not surprising that the European Union has been at the forefront of its development, given its proactive stance on technological innovation.
For example, many rules of digital law codify preexisting solutions developed by international or national courts, notably constitutional courts. This is evident in areas such as privacy rights versus video surveillance or the monitoring of computer equipment. However, this is not the case with algorithmic law.
Another characteristic of algorithmic law is its technical complexity and sophistication. These rules govern intricate and demanding technological realities, developed through a deep understanding of these systems. They go beyond mere statements, general principles, or broad-spectrum solutions. Instead, their responses are concrete, precisely identifying both their factual assumptions and the legal consequences attached to them. They employ a specialized language, generated within the technological field.
There are also regulatory instruments that are not traditional in continental legal systems, such as those manifesting as soft law, including declarations and charters. This is particularly evident in digital law, where we encounter instruments like the European Declaration on Digital Rights and Principles. Additionally, the presence of specialized bodies, such as agencies and working groups, is noteworthy. Although these entities do not possess regulatory powers of their own, they significantly influence the development of this law through the interpretation of its rules and the drafting of guidelines and recommendations for its application.Footnote 11
Algorithmic labor law adopts a multilevel framework, presenting itself as the sum, coexistence, and interaction of norms developed by various entities. This mirrors the broader trends observed in labor law in this century.
To date, the regulations governing the use of AI have primarily been incorporated into specific and monographic texts, rather than within the general frameworks of employment contracts and labor relations. This approach is logical, given their objective of regulating software use and their inherently technical nature. However, there are exceptions to this trend, such as the inclusion of algorithmic information rights in laws concerning worker representation in companies and in those addressing the conclusion and formalization of employment contracts.
Additionally, references to algorithmic instruments can be found in public law regulations that govern administrative intervention in labor relations. These regulations aim to manage various legal relationships arising from the public social protection system and enhance the efficiency of sanctioning law in labor matters. Acts regarding social security, labor inspection, and administrative procedures are beginning to permit and regulate the use of these mechanisms.
A unique aspect of algorithmic labor law is the existence of a dual regulatory framework. Alongside what can be considered “general” regulations applicable to all employment relationships, there is a specific set of rules tailored for a distinct group: platform workers. This form of employment, closely linked to the use of algorithms, has generated numerous problems and abuses, prompting the establishment of rules aimed at preventing such issuesFootnote 12.
These regulations have particularly focused on the qualification of services, an area that general AI standards often overlook. The urgency of addressing these concerns for digital platforms has led to the development of standards that, in many cases, have been pioneering in the regulation of algorithms, effectively serving as a testing ground or sandbox for innovative solutions (Aloisi & Potocka-Sionek, Reference Aloisi and Potocka-Sionek2022).
Even where progress has been made toward the adoption of general rules, especially at EU level, regulations have not been unified but have remained separate. We have an Artificial Intelligence Regulation and a Directive for the improvement of working conditions on digital work platformsFootnote 13, and the same situation is found in other legal systems.
10.4 The Impact of AI on Remote Working
AI is one of the main drivers of the new contemporary digital environment. Although it is still in the early stages of its evolution and we are only just glimpsing what its impact is likely to be, there is no doubt that it is one of the most disruptive contemporary megatrendsFootnote 14.
The introduction and progressive expansion of the use of AI systems in the world of work needs to be seen as part of the broader process of digital technological evolution that has been developing over the last decades. Considered from a diachronic perspective, some debates and issues are common, such as the potential for job creation/destruction or the renewed emphasis on fundamental rights. However, the ability of AI systems to automate decision-making and content generation, as a distinctive feature, poses new challenges.
Several features of digital technology in general, and of AI in particular, make it difficult to analyze its impact on remote work. The first is its high level of technical complexity and its capacity for development and change, which require prior knowledge (and continuous updating) of how it works. Moreover, its ambivalence, understood as the absence of technological determinism, means that its effects depend on its use and can have either positive or negative implications for both parties in the employment relationshipFootnote 15. On the other hand, its ductile nature allows us to maintain, as a hypothesis, that there are no significant differences in its implementation regardless of where the work is performed, whether on-site or remotely. Particularities in the latter case may derive from a potentially greater intensity of use, given the physical absence of the workplace and the fact that digital technology is inherent to remote work, which may give rise to AI systems designed especially for these cases. Nevertheless, we can attempt to highlight a few potential areas where the effects of this combination are foreseeableFootnote 16.
Indeed, the algorithmization of remote work can be visible, with varying intensity, in areas such as the generation of remote jobs or their organization and management. The use of AI systems may be particularly beneficial in several aspects where traditional management tools may face challenges, such as the monitoring of work activity, communication and coordination between workers, and the control of compliance with labor legislation by the labor inspectorate; however, this improvement is not without risks, as some potential negative repercussions can also be foreseen.
As noted above, the traditional debate about the capacity of new technology in production processes to create or destroy jobs also applies to AI, focusing mainly on the automation of tasks and the replacement of human labor by robots. One of the distinctive features of AI with respect to previous technologies is that the tasks subject to automation are not restricted to non-cognitive and routine ones and, given its potential general use, any sector or occupation can be affected (OECD 2023, 14).
Focusing on remote working, digital technology and AI systems allow activities traditionally carried out in person to be performed remotely. AI makes it possible to broaden the spectrum of “teleworkable” jobs, adapting them to this end by providing effective tools to carry out the work, through virtual reality techniques or robot remote controlFootnote 17, among others, and to monitor employees remotelyFootnote 18. Thus, the use of remote working depends to a large degree on the “teleworkability” of a job, that is, the extent to which it is feasible to perform it remotely. In addition, the development of work on digital platforms must be considered, a form of employment that encompasses a variety of cases involving the use of an algorithm at some point in the production process.
Similarly, remote work is to digital law what platform work is to algorithmic law: its most striking manifestation. Both modalities require customized solutions due to the intensity of the technological component. In practice, there is a significant overlap between the two categories, as a substantial part of platform work is done remotely. This overlap is particularly evident in self-employment, through various forms of service provision outside the traditional employment contract. By facilitating the meeting of supply and demand (acting as marketplaces), standardizing the services exchanged, providing payment security, and enabling crowdworking, these platforms have extended the phenomenon, allowing activities that were previously not ‘teleworkable’ to be performed remotely. In short, the ability of AI to reshape the nature (face-to-face or remote) of employment applies not only to subordinate employment but also to self-employment.
For these same reasons, digital platforms, which should indeed be labeled “algorithmic,” enhance the transnational offer of remote work and digital nomadism. They make it easier for professionals to choose their place of residence and work, or the country in whose market they offer their servicesFootnote 19.
However, platform work comes at a cost, as it is associated with precariousness, an increase in the employer’s powers, and the unilateral determination of working conditionsFootnote 20. It is, therefore, not by chance that the regulation of this form of employment has been a pioneer in the development of AI labor lawFootnote 21.
Work organization is characterized by algorithmic management, which makes it necessary to reexamine core labor law institutions such as working time, wages, health protection, or equality and nondiscrimination (Aloisi & De Stefano, Reference Aloisi and De Stefano2022). Adams-Prassl (Reference Adams-Prassl2022) highlights, among others, the following challenges: the potential lack of transparency of automated decisions, which can make it difficult for workers to understand decisions (in many cases integrated with “traditional” technological systems, such as video surveillance and biometrics); unpredictability or unstable working conditions, resulting from the automation of functions; and the perpetuation or worsening of discriminatory situations, insofar as decisions on labor issues taken by AI systems are a “potential source of discrimination” (Kullmann Reference Kullmann2018), as a consequence of biases in the data and variables used by AI, compounded by the difficulty of detecting such biases for various reasons (invisibility, technical complexity, the mathematical authority of AI systems).
In remote work, AI systems can help to overcome problems arising from the ubiquity of performance. First, they can help organize the work process with the employer and colleagues, through tools such as Asana, Trello, or Microsoft Teams, which use AI to optimize task management and collaborationFootnote 22, or through the implementation of virtual workplaces or “metaverse offices” (Chen, Reference Chen2024). These tools can also facilitate relations with workers’ representatives, clients, or labor administrations.
AI allows the employer to carry out less intrusive monitoring of workers, without physical intrusion, to verify the fulfillment of employment obligations. This has always been an area plagued with difficulties. Video surveillance is not common in remote work, reasonably, because of its disproportionate impact on the privacy of workers and their families. However, there is a constant transmission of images through the remote communication tools commonly used in telework, especially in recent years, as the availability of these programs has increased and their costs have decreased.
The integration of video surveillance with AI tools has led to the development of intelligent cameras, which adapt their monitoring based on algorithmic guidance, enabling advanced image analytics. These innovations have enhanced the effectiveness of monitoring, extending its utility to support decision-making and improve workplace environments. Intelligent video surveillance can serve multiple purposes: it can anticipate health issues for workers, verify compliance with occupational risk prevention measures, and monitor actual working hours, benefiting both parties in the employment relationship.
Nonetheless, it is considered a highly intrusive monitoring system, raising significant privacy concerns. One of the many advantages of intelligent cameras, though, is their ability to be adjusted to mitigate this intrusion. This can be achieved by limiting the collection of images, deleting images collected in specific areas or at certain times, or identifying non-employees to exclude them from monitoring.
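By way of illustration only, a minimal sketch (in Python, with the person-recognition step assumed rather than implemented, and with hypothetical zone names and hours) shows how such privacy constraints could be expressed as a filter applied before any frame is stored or analyzed:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Frame:
    timestamp: time               # time of day the frame was captured
    zone: str                     # camera zone label, e.g. "desk" or "hallway"
    contains_non_employee: bool   # result of a (hypothetical) person-recognition step

# Illustrative privacy configuration, e.g. agreed with workers' representatives
MONITORED_ZONES = {"desk"}                      # only the agreed work area
MONITORING_WINDOW = (time(9, 0), time(17, 0))   # only during agreed working hours

def retain_frame(frame: Frame) -> bool:
    """Return True only if the frame may be stored and analyzed."""
    within_hours = MONITORING_WINDOW[0] <= frame.timestamp <= MONITORING_WINDOW[1]
    in_monitored_zone = frame.zone in MONITORED_ZONES
    # Frames showing family members or other non-employees are always discarded.
    return within_hours and in_monitored_zone and not frame.contains_non_employee

# Example: a frame captured in the hallway at lunchtime is discarded.
print(retain_frame(Frame(time(13, 0), "hallway", False)))  # False
```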
Besides image-based monitoring, AI can verify workers’ performance based on their use of the devices provided to them (by detecting and drawing conclusions from keystrokes, for example). This allows productivity to be evaluated with a high level of objectivity, through ‘digital exhaust’Footnote 23.
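Purely as an illustration, and not as a description of any existing monitoring product, the following sketch shows how such ‘digital exhaust’ might be aggregated into a daily activity summary; the event format, field names, and choice of indicators are assumptions:

```python
from collections import Counter
from typing import Iterable

def daily_activity_summary(events: Iterable[dict]) -> dict:
    """Aggregate device-usage events ('digital exhaust') into a simple daily summary.

    Each event is assumed to look like
    {"type": "keystroke"} or {"type": "app_focus", "app": "editor", "minutes": 50}.
    """
    keystrokes = 0
    minutes_by_app: Counter = Counter()
    for event in events:
        if event["type"] == "keystroke":
            keystrokes += 1
        elif event["type"] == "app_focus":
            minutes_by_app[event["app"]] += event["minutes"]
    return {
        "total_keystrokes": keystrokes,
        "active_minutes": sum(minutes_by_app.values()),
        "minutes_by_app": dict(minutes_by_app),
    }

# Example with fabricated events, for illustration only
sample = [
    {"type": "keystroke"},
    {"type": "keystroke"},
    {"type": "app_focus", "app": "editor", "minutes": 50},
    {"type": "app_focus", "app": "browser", "minutes": 20},
]
print(daily_activity_summary(sample))
```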
AI can also enable the company to fulfill its responsibilities as an employer. One area of particular interest relates to occupational risk preventionFootnote 24. The use of AI can generally improve occupational health and safety monitoring, minimize exposure to some risk factors, and provide early warning of potential stress or fatigueFootnote 25. At the same time, legal and ethical issues may arise that need to be properly analyzed (EU-OSHA, 2021). The EU’s AI Regulation prohibits systems that infer emotions in the workplace, except for medical or safety reasons. Recital 18 excludes from this prohibition the monitoring of physical states, such as pain or fatigue, including systems used to detect the fatigue levels of professional pilots or drivers in order to prevent accidents. This allows the use of image analysis in the framework of workers’ health and safety policies, as well as for the general physical and mental well-being of the workforce.
There are challenges in remote work, such as physical isolation, and technology can contribute to promoting emotional well-being of workersFootnote 26. AI can not only identify or even anticipate this psychosocial risk factor but also generate interfaces that may help to reduce this feeling by simulating human interaction during certain tasks. It can also be useful to ensure full compliance with the right to digital disconnection by automatically disabling any communication mechanismFootnote 27. However, specific risks related to this technology have been identified, such as technostress and anxiety linked to over-supervision. These risks are also present in remote work, potentially at a higher level.
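As a sketch only, assuming an agreed disconnection window and a narrow safety exception (both hypothetical), automatic disconnection could be implemented as a gate on work notifications:

```python
from datetime import datetime, time

# Illustrative working hours taken from a (hypothetical) remote-work agreement
WORKING_HOURS = (time(9, 0), time(18, 0))

def may_deliver_notification(now: datetime, urgent_safety_alert: bool = False) -> bool:
    """Hold back ordinary work communications outside the agreed working hours.

    Only safety-critical alerts pass through outside that window or on weekends,
    so that the right to digital disconnection is respected by default.
    """
    if urgent_safety_alert:
        return True
    is_weekday = now.weekday() < 5
    within_hours = WORKING_HOURS[0] <= now.time() <= WORKING_HOURS[1]
    return is_weekday and within_hours

# Example: a routine message sent at 22:30 on a Tuesday is deferred.
print(may_deliver_notification(datetime(2024, 5, 14, 22, 30)))  # False
```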
Consideration should also be given to the potential of AI to enable workers’ representatives, the labor inspectorate, and other authorities to perform their functions: to monitor compliance with labor regulations in the case of remote workers and to detect possible fraud, tasks made more difficult by geographical dispersion. Some national experiences offer good examples that can be implemented in remote workFootnote 28.
A final impact that should be considered concerns regulatory instruments. As noted above, we are witnessing the emergence of an algorithmic law, with specific rules for the employment sphere. The use of AI systems in remote work implies their consideration in parallel to digital law, which is regulating this form of employment. The normative contents and logics of both legal bodies do not coincide, so it is important to analyze the normative consequences of this legal duality.
10.5 New Rules for a New Remote Work
When remote work is facilitated, managed, or controlled by an AI system, it involves a combination of digital law and algorithmic law mandates, as both sectors of the legal system are applicable. In this section, we will analyze the legal consequences of this regulatory duality, based on a comparative study of the regulations governing this form of employment alongside the rules applicable to AI in the European Union. The idea is to explore the outcome of the combination of both normative areas.
It is important to note that legal systems are already beginning to respond to this convergence. Thus, recent legislation on remote work is increasingly addressing the issue of AI. The Sectoral Agreement on Digitalisation, signed on October 6, 2022, by the European Social Dialogue Committee for Central Government Administrations, encompasses provisions on various matters, including teleworking and AI. In Spain, Article 17.1 of Act No. 10/2021 on remote work contemplates the use of automated devices for monitoring the provision of labor services.
However, as will be discussed, the two areas usually fail to converge in the same regulatory body, maintaining regulations with different logics that interplay and must be considered jointly. Thus, while in telework, as a form of digital employment, the consideration of the digital instrument and the data it handles is crucial, in the regulation of AI systems the prevailing logic is that of potential risk to be mitigated. Telework regulation has in many cases developed outside the framework of the regulatory bodies governing subordinate employment, and has a different profile due to the technological impact, which is visible in some specific aspects, such as the recognition of a broader right to be informed and trained about this essential element of this type of employment. The AI Regulation, for its part, is technical and complex, with efficient and didactic rules for the various actors it addresses. Although it does not consider the remote worker in particular, some of the principles on which it is based (transparency, human control, risk mitigation) are very useful in teleworking.
There is thus a kind of transition between normative paradigms because of the impact of digital technology and AI systems, which is particularly evident in remote work involving AI. On the one hand, from the analogical labor law to the digital labor law, as an expression of the centripetal forces leading to “the de-contractualisation of labour protection” – the expulsion of some types of workers from the common body of law to create new ones (López López, Reference López López and López2014). And on the other hand, from digital labor law to algorithmic labor law, as an expression of a process of “decoding” (Rodríguez-Piñero Royo, Reference Rodríguez-Piñero Royo2024), understood as a trend to relocate labor standards outside their specific regulatory corpus.
As a preliminary step, two determining factors must be identified that condition the application of the AI Regulation. The first involves the inclusion of the systems to be used within one of the four categories outlined by the AI Regulation, as this will determine the legal obligations imposed on the companies employing them. According to the recitals of the EU Regulation itself, AI systems used in workforce management should be classified as high-risk, given their potential impact on workers’ rights. Article 6 and Annex III support this categorization. This includes intelligent tools used for decision-making affecting the terms of employment relationships, promotion, and termination of employment contracts, task allocation based on individual behavior, personal traits or characteristics, and the monitoring or evaluation of individuals in employment relationships. AI systems used to monitor employee performance and behavior warrant this classification, as they may also infringe upon fundamental rights to data protection and privacy.
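To make the classification exercise concrete, the following sketch maps some employment-related intended purposes onto the Regulation’s risk categories; the mapping reflects this chapter’s reading of Article 6, Annex III, and the prohibition on emotion inference, and is an interpretation rather than a restatement of the legal text:

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk (transparency obligations)"
    MINIMAL_RISK = "minimal risk"

# Illustrative mapping of intended purposes in workforce management
EMPLOYMENT_USE_CASES = {
    "emotion_inference_at_work": RiskCategory.PROHIBITED,        # save for medical or safety reasons
    "recruitment_screening": RiskCategory.HIGH_RISK,
    "promotion_or_termination_decisions": RiskCategory.HIGH_RISK,
    "task_allocation_by_behavior_or_traits": RiskCategory.HIGH_RISK,
    "performance_and_behavior_monitoring": RiskCategory.HIGH_RISK,
    "chatbot_for_internal_it_support": RiskCategory.LIMITED_RISK,
    "spell_checking_assistant": RiskCategory.MINIMAL_RISK,
}

def classify(intended_purpose: str) -> RiskCategory:
    """Return the risk category assumed here for a given intended purpose."""
    return EMPLOYMENT_USE_CASES.get(intended_purpose, RiskCategory.MINIMAL_RISK)

print(classify("performance_and_behavior_monitoring").value)  # high-risk
```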
A second issue to be considered is the concept of “publicly accessible space,” defined as “any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of potential capacity restrictions” (Article 3). This notion applies irrespective of whether the space is privately or publicly owned or the activity for which the space may be used. Recital 19 clarifies that company and factory premises, as well as offices and workplaces, are not publicly accessible if they are intended to be accessed only by relevant employees and service providers. The challenge we face is that remote work, by definition, is performed outside the employer’s premises. Nevertheless, since remote work is typically conducted at the workers’ private residences, it is reasonable to exclude these working places from this classification and the associated obligations.
The primary obligation imposed upon employers is to implement appropriate technical and organizational measures to ensure the use of such systems in accordance with the accompanying instructions for use (Article 26.1 EU Regulation).
One of the areas to be explored is worker training, where the juxtaposition of regulatory areas results in the aggregation of differentiated training according to the objective pursued. The Framework Agreement on telework seeks to ensure that teleworkers have access to training, career development opportunities, and performance assessment mechanisms on the same terms as those working on-site. To that end, the employer is required to provide the necessary mechanisms, and to address training in the proper use of the equipment and the features of teleworking. This emphasis on ensuring a level playing field irrespective of the place of service provision is found in the implementation carried out in different countriesFootnote 29. The aim is to ensure that teleworkers are not disadvantaged by teleworking and to recognize their right to participate in ongoing training programs to improve their skills and abilities, in particular to enable them to make appropriate use of the technological tools required for their work.
The AI Act contains some provisions on training for all workers regardless of their place of work. For instance, related to the requirement for transparency and human control of AI systems used in the workplace, it is argued that employees should be informed about their use and adequately trained to interact with them safely and effectively. Specific training is required for workers interacting with high-risk AI systems (e.g., for performance evaluation or recruitment); the aim is to ensure that they have a clear understanding of the risks and any necessary risk mitigation measures. Companies should also provide training to employees on cybersecurity and data protection to ensure compliance with the relevant regulations.
Secondly, it is interesting to explore the information to be provided to workers which, as a consequence of the intersection of different rules, involves extending the catalogue of aspects that need to be known by workers, in particular on working conditions (tools provided, prevention of occupational hazards, automated activity management systems), privacy and data protection, or working tools (risks of the AI systems to be used).
In general terms, the Directive on Transparent and Predictable Working Conditions in the European Union mandates that employers provide workers with clear and comprehensive information regarding their working conditionsFootnote 30. This includes details such as the identity of the parties involved, the workplace location, the job description, the starting date, and, where applicable, the duration of the contract. Building on this foundation, the Framework Agreement outlines several specific aspects that are crucial for this type of employment. Additionally, concerning data protection and worker privacy, employers are required to inform employees about any potential restrictions on the use of equipment and the penalties for noncompliance. The use of digital equipment in teleworking makes it particularly relevant to consider data protection regulations for workers. This brings into play the provisions of the General Data Protection Regulation (GDPR)Footnote 31, which also include requirements for companies to provide individual and direct information on different aspects (Article 13)Footnote 32.
The specific items of information contained in the Framework Agreement are common to the implementing rules of European countries, which also usually require information on the security measures to be followed to protect sensitive information.
As noted above, the remote provision of work involves significant challenges for the protection of the occupational health of teleworkers. It is therefore also common for information to be provided to teleworkers to include how to maintain a safe working environment, with particular emphasis on ergonomic issues. It is also common to require that the terms and conditions for the development of telework must be set out in written form, including who is responsible for providing and maintaining the equipment for teleworking and the costs involvedFootnote 33.
The AI Regulation imposes information obligations on employers to inform workers using or exposed to high-risk AI systems in the work environment. As an expression of the principle of transparency, they should be informed about the use of AI systems in the company and their interaction with them (their purpose, capabilities, technical characteristics, etc.) and share the results of impact assessments aimed at identifying and mitigating the risks associated with them. In addition, they should also receive information on the human supervision mechanisms in place. Information should also be provided about the handling of their data processed by AI systems and about their rights in relation to the use of AI, particularly the right to an explanation of automated decisions (as a precondition for being able to exercise the right to appeal against such decisions).
Accordingly, the information and participation rights of workers’ representatives are being extended, while at the same time the technical complexity of digital tools and AI systems places greater demands on themFootnote 34. Algorithmic labor law seeks to complement the provisions on information and consultation laid down in other rules. The Framework Agreement states that workers’ representatives must be informed and consulted on the implementation of telework in the company (working conditions, data protection, health and safety, etc.), and must receive all the data necessary to assess the impact of telework on working conditions and on employees’ health and safety.
The AI Regulation also includes specific provisions to ensure that workers’ representatives are adequately informed about the use of AI in the workplace, and the exposure of workers to high-risk AI systems, as another expression of the general principle of transparency that underpins the regulation. Thus, the company must inform workers’ representatives about the use of AI systems that may affect working conditions, which are used for employment-related decision-making and performance evaluation, including the algorithms used and the data processedFootnote 35; consultation and participation rights in the implementation of AI systems are also recognized, including the possibility to discuss and negotiate the conditions under which they will be used, with particular emphasis on high-risk AI systems, and human supervisionFootnote 36.
Transparency is a vital mechanism for safeguarding rights in AI contexts. The AI Regulation imposes transparency obligations on both providers and deployers, including employers who use AI systems to manage their remote workforce. Article 13 of the EU Regulation specifies that high-risk AI systems must be designed and developed to ensure their operation is sufficiently transparent, enabling deployers to interpret the system’s output and use it appropriatelyFootnote 37. In this context, employers must consider this aspect when acquiring management or intelligent control software for remote workers. They should be fully informed about the software’s capabilities and limitations. Furthermore, it is essential to use these systems in a manner that allows for appropriate traceability and explainability, ensuring that individuals are aware when they are communicating or interacting with an AI system.
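As a purely illustrative sketch, a deployer might support such traceability and explainability with an append-only log of AI-assisted decisions, recording the system’s output, a plain-language explanation available to the worker, and the person exercising human oversight; all names and fields below are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class DecisionRecord:
    """One traceable record of an AI-assisted decision concerning a remote worker."""
    system_name: str
    worker_id: str
    output: str            # the score or recommendation produced by the system
    explanation: str       # plain-language reasons made available to the worker
    human_reviewer: str    # person exercising oversight over the output
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class DecisionLog:
    """Append-only log allowing decisions to be traced and explained later."""

    def __init__(self) -> None:
        self._records: List[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def history_for(self, worker_id: str) -> List[DecisionRecord]:
        return [r for r in self._records if r.worker_id == worker_id]

# Example: logging one automated task-reassignment recommendation
log = DecisionLog()
log.record(DecisionRecord(
    system_name="workload-planner",
    worker_id="W-042",
    output="reassign task T-17",
    explanation="Current task queue exceeds the agreed weekly workload threshold.",
    human_reviewer="team.lead@example.org",
))
print(len(log.history_for("W-042")))  # 1
```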
Specific obligations arise in certain cases. According to Article 50, if an employer deploys an emotion recognition system or a biometric categorization system, they must inform the individuals affected by the operation of the system. Furthermore, the employer is required to process personal data in compliance with applicable regulations.
A key aspect of algorithmic law is the right to AI literacy, which ensures that individuals have access to the necessary knowledge to make informed decisions regarding these systems. Remote workers affected by such technologies are certainly entitled to this right. The AI Regulation suggests that the widespread implementation of AI literacy measures could significantly enhance working conditions.
Article 4 of the AI Regulation places a responsibility on employers, as deployers of such systems, to ensure that their staff possesses a sufficient level of AI literacy. This entails providing the necessary knowledge to understand how AI-assisted decisions will affect them. This obligation extends beyond the traditional duty of information provision within the employment relationship. Employers must consider their employees’ technical knowledge, experience, education, and training, as well as the context in which the AI system will be utilized and the individuals or groups it will impact.
Additionally, employers should ensure that those tasked with implementing the system’s instructions and overseeing its use have the requisite competence, particularly a suitable level of AI literacy, training, and authority to effectively carry out these responsibilities.
Human oversight is a specific legal instrument within the AI Law framework, functioning both during the design phase and the utilization of high-risk systems. These systems must be designed and developed to allow natural persons to oversee their operation, ensuring they are used as intended and that their impacts are managed throughout the system’s lifecycle (Article 14.1). To achieve this, appropriate human oversight measures should be identified by the system provider before the system is placed on the market or put into service.
According to Article 26, employers utilizing these systems must assign oversight responsibilities to individuals who possess the necessary competence, training, and authority, along with adequate support. This includes ensuring a sufficient level of AI literacy (Recital 93). In certain instances, the system should incorporate mechanisms to guide and inform the responsible person, enabling them to make informed decisions about when and how to intervene to prevent negative consequences or risks, or to halt the system if it fails to perform as intended (Recital 73).
Human oversight aims to prevent or minimize risks to health, safety, or fundamental rights that may arise when a high-risk AI system is used as intended or under conditions of reasonably foreseeable misuse (Article 14.2).
The system of sanctions applied to employees also requires significant reform. While collective agreements and internal codes of conduct already outline misconduct for remote workers – particularly regarding equipment use and communication duties – these measures should be updated to address potential misuse of AI systems provided to employees. This is especially relevant in cases where such programs are used without the employer’s knowledge or authorization, a practice humorously dubbed “Bring Your Own AI.”
It is important to recognize that algorithmic law is supported by a substantial sanctioning framework to ensure effective enforcement of its mandates. Consequently, noncompliance by employees can create obligations for companies.
Simultaneously, existing general sanctions in labor law and specific legislation on remote work will need to be reviewed to encompass all potential breaches of employer mandates under AI regulations when using intelligent tools and programs to manage remote workers.
Moreover, algorithms have significant potential to influence disciplinary procedures by identifying possible employee breaches through behavioral analysis, pattern recognition, and comparisons among workers. While this offers advantages in terms of efficiency and objectivity in oversight, it also raises concerns about algorithmic discrimination, erroneous conclusions, and over-surveillance. For employers, utilizing these intelligent tools to detect potential breaches is already a reality that aids in directing the attention of labor inspection. Therefore, it is logical to design systems that identify and prevent noncompliance with duties related to remote workers.
As a result, both legislation and, more importantly, collective agreements will need to evolve to identify all potential noncompliances and ensure a sanctioning system that is appropriate, proportional, and fair (OECD, 2023, 221).
10.6 Final Remarks
Remote work is a form of employment that has reached an advanced and mature stage in its evolution, with a relevant presence in the labor market and a comprehensive regulatory framework, including third-generation laws in some cases. AI systems can and likely will be extensively utilized in telework environments, as this form of employment is highly susceptible to their impact due to its heavy reliance on technology.
AI systems can be extremely beneficial for remote work, helping to address structural challenges such as control, management, workflows, and psychosocial risk factors. They can enhance the monitoring of legal compliance by unions and labor administrations, making them a valuable tool for law enforcement. Additionally, technologies like digital platforms can expand telework opportunities and increase the number of tasks that can be performed remotely.
However, this progress comes with a price. Several risks for workers have been identified, such as algorithmic discrimination, excessive surveillance, technostress, and heavy workloads. Remote work exacerbates these risks compared to on-site work.
The widespread use of AI tools has prompted regulatory responses at all levels, leading to the emergence of a new branch of legal order known as “AI Law” or “Algorithmic Law.” This includes significant mandates on their implementation by employers to manage employment relations. When remote work is facilitated, managed, or controlled by an AI system, both digital law and algorithmic law apply, resulting in an accumulation of rules and mandates for employers.
The analysis of the intersection between digital law and algorithmic law, arising from the use of AI systems in remote work, reveals that this accumulation generally does not pose significant problems. Both regulations include similar instruments, such as training and information dissemination. However, AI regulation extends further, introducing new requirements like transparency, human oversight, and digital literacy. This can be attributed to their common origin and shared objectives of protecting individuals. Nevertheless, increased coordination will undoubtedly be necessary.