
Generative AI in public administration in light of the regulatory awakening in the US and EU

Published online by Cambridge University Press:  06 January 2025

Sophie Weerts*
Affiliation:
Swiss Graduate School of Public Administration, University of Lausanne, Lausanne, Switzerland

Abstract

This paper explores the regulatory awakening regarding generative AI (GenAI) in the United States and European Union (EU) institutions following the release of ChatGPT. Based on a thematic analysis of regulatory documents, it investigates how governments have approached the deployment and use of this emerging technology within their classic government activities. The analysis shows several layers of regulatory approaches, ranging from command-and-control to an experimental approach, combined with risk- and management-based approaches. It also reveals different perspectives. The EU institutions have notably adopted more restrictive guidelines on the use of publicly available Large Language Models (LLMs) - a type of GenAI that is trained on vast amounts of text data to understand, generate and respond in human-like language. This approach reflects greater caution about data security and confidentiality and the risks of foreign interference. However, the American and EU documents share a common concern about the risk of reinforcing discrimination and the protection of human rights. Interestingly, considering the administrative environment, neither the administrative activities in which GenAI may be used nor the key legal principles embedded in the rule of law are explicitly used to guide administrations in their development and use of GenAI. In this context, the paper calls for future research that could contribute to the renewal of administrative law theory in the context of the digital transformation of public administration.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

In January 2024, the European Commission (EC) stated the following: “The advent of generative AI marks a transformative shift: unprecedented possibilities for supporting our staff, reducing administrative burden, and enhancing quality and impact of the Commission’s work” (European Commission, 2024, p. 2). The Court of Justice of the European Union (CJEU) also emphasised that, in the future, all staff members could benefit from an AI-powered virtual assistant, AI avatars could be used to deliver internal training and produce didactic materials, and the case management system (SIGA) could be complemented with capabilities for generating classification indicators and case correlations (Court of Justice of the European Union, 2024). Generative AI (GenAI) appears to offer a new lever for re-enchanting public administration, with the potential to contribute to a new turn in the project of “reinventing government” through technology (Engstrom & Ho, 2020).Footnote 1 It should change workflows by eliminating repetitive tasks, reducing the risk of human error, saving time in processing files, improving service to users and supporting workers (Baily et al., 2024; World Economic Forum, 2024). This optimistic view of the technology’s potential echoes global consultancy firms’ vision of the need to modernise the functioning of governments through innovative artefacts (World Government Summit and Accenture, 2024; McKinsey & Company, 2024).

Promises regarding the improvement or transformation of governments through technological advancements are not new in public administration practice and literature. Since the 1990s, the deployment of the internet and information technology (IT) has improved access to and use of government services (Margetts, 1995). It contributed to reframing bureaucracy as “e-government” or “smart government,” both considered the technical face of open governmentFootnote 2 (Kankanhalli et al., 2019; Mettler, 2019; Valli Buttow & Weerts, 2022). More recently, the deployment of algorithms and big data in decision-making has opened new – and worrisome – avenues in the digital transformation of public administration (Dunleavy & Margetts, 2023; Yeung, 2022). In this context, GenAI could be the next transformative layer.

So far, use cases of GenAI in public administration have been scarcely documented, and the regulation of these uses is still emerging. Where such use is documented, it seems to be implemented in a rather disorganised fashion (Bright et al., 2024). The public administration and administrative law literature on this issue is also in its infancy (Cantens, 2024; Pasquale & Malgieri, 2024). At this early stage, research work must therefore be exploratory. For this reason, this paper aims to address the following question: How do governments regulate the use of GenAI in the pursuit of government’s classic functions and activities? To respond to this question, we investigate relevant regulatory and policy documents adopted at the administrative, policy and political levels in the aftermath of the public launch of ChatGPT (in 2023 and 2024). We focus on the United States (US) and the European Union (EU) institutions, two key players in AI technology and regulation that share a common understanding of public administration (the rule of law) (see Bell, 2019; Napolitano, 2020). We identified a few regulators inside the US and the EU that adopted, early on, a regulatory framework addressing the use of GenAI in the execution of public administration tasks.Footnote 3 Many public authorities have yet to adopt such internal regulatory measures, and it remains unclear whether such rules will become widespread across public administrations.

To address the proposed research question, we collected data from the target institutions’ official websites. Some documents were also obtained through our right of access to public documents, as for the European Union institutions’ (EUIs’) guidelines. We build our analysis on learnings from regulation studies that have highlighted the evolution of “command-and-control” regulatory approaches towards risk prevention and anticipation (Black, 2012; Black & Douglas Murray, 2019). We add the experimental approach, considering the deployment of regulatory sandboxes for regulating technological advancements (Ranchordás, 2021; Ranchordás & Vinci, 2024), and the management-based approach, which has been emphasised in other fields such as environmental protection and food safety (Coglianese & Lazer, 2003; Coglianese & Starobin, 2020). To better understand the risks and liabilities involved in the use of GenAI by public administration, we developed various scenarios in which the technology can be deployed, allowing the entanglements each scenario entails to be analysed. By doing so, we hope to bring to light the nuances that should be considered in the various contexts of technology use.

Considering the ongoing regulatory trajectory of GenAI in public administration, this paper does not offer an exhaustive analysis or a definitive framework of the regulatory response from public authorities. However, it sheds light on the prevalent regulatory approaches, their rationale and the issues to which this burgeoning regulatory phenomenon is supposed to respond regarding the use of GenAI tools in public administration. It can already contribute to several discussions. It offers a first comparison of points of view concerning the regulation of GenAI within North American and EU administrations. It contributes to regulatory studies of technology by empirically illustrating the complexity of the regulatory responses to this emergent technology. On a practical level, it should inform public authorities that have not yet determined how GenAI can be used within their services.

The following section provides some background on public administration, administrative law and technology (Section 2). Next, we map the US and EU regulatory instruments adopted in the context of the deployment of GenAI, distinguishing between command-and-control, preventive (or risk-based) and experimental measures (Section 3). This mapping then enables us to identify not only the legal issues established by authorities but also the intentions pursued in terms of deployment, as well as the values and principles promoted by the administrative and legislative authorities (Section 4). To conclude this explorative study, we formulate some potential research avenues (Section 5).

2. Public administration, administrative law and technology

Before exploring the regulatory responses put in place for GenAI in public administration and the resulting key legal issues and principles, this section briefly sheds light on two aspects: first, the classic or traditional activities for a public administration (Section 2.1); and, second, the current and potential use of GenAI for pursuing administrative activities and their deployment scenarios (Section 2.2).

2.1 Administrative activities

Public administrations pursue action in the various fields of government (e.g., policy and law-making, social benefits, education, justice, law enforcement, tax, transport, energy and border controls). To act in these areas, public administrations must have legal authority, and their action must respect the legal framework. These requirements are components of the legality principle, which is consubstantial with the rule of law. The deployment of human resources (civil servants) and of financial and material resources (e.g., buildings, computers, internet access) is thus governed by law, and among the means of action also figure legal instruments, which take either unilateral (e.g., regulations, decrees, decisions) or bilateral (contractual) forms.

Public administration activities can fit into three categories: (quasi-) legislative law-making, (quasi-) judicial decision-making and a final group that we will call “non-legally binding activities.”Footnote 4 Figure 1 illustrates the three categories of activities. In legislative or quasi-legislative law-making, the administration contributes to the rule-making process through preparatory work, delegated rulemaking and implementation. In this regard, the output of public administration is the preparation of statutes and their implementation through delegated binding acts. Legal rules govern all these activities. For example, American and EU rule- and policymaking processes require public consultation (for legislative acts) or notice-and-comment procedures (for agency regulations) as a preparatory step of the legislative process. They also establish, with varying degrees of obligation, the adoption of an evidence-based approach, encouraging or imposing ex ante and ex post legislative evaluations.

Figure 1. Activities of the public administration.

In the quasi-judicial function, public administration exercises the adjudication function of a department or agency. This type of activity has developed in the wake of social benefits and the regulation of economic activities. In this framework, the output of public administration is the adoption of individual decisions, such as granting subsidies, authorising the opening of regulated activities or placing products on the market. Decision-making activities have a binding effect on others and are also governed by legal rules to prevent arbitrariness and ensure accountability in decision-making. European and US public administrations are bound by the principles of the right to an effective remedy and a fair trial (e.g., Art. 47 of the EU Charter of Fundamental Rights and the US due process principle; on this subject, see Citron, 2007; Finck, 2020).

The last group of activities covers an array of factual situations and actions. From this perspective, they bring together acts and activities that are not legally binding for people outside the administration. This category covers purely material activities, such as building a road, watering a park, publishing press releases and information campaigns or using software. It includes various governance frameworks and documents produced in the context of pursuing administrative actions such as policies, plans, strategies and other guidelines. It also concerns preparatory, interpretative, confirmatory and informative measures (e.g., when a civil servant answers a citizen’s question). Some of these actions have an internal administrative effect on civil servants, agencies and authorities (e.g., a plan or a strategy). From an external point of view, they do not have a binding effect on people outside of the administration. They remain, in principle, outside the scope of judicial review of public administration.Footnote 5

Technology deployment and use can be categorised as falling into the “non-binding acts” group, but technological tools can also be viewed as a means of implementing each of the situations described in the three categories. They can then have legal and ethical effects depending on the context of their deployment. Indeed, public administrations and agents are bound by administrative law principles such as legality, transparency, good administration, accountability, prohibition of arbitrariness, equality, good faith and proportionality. They are also constrained by all other legal rules that apply to their activities, such as official secrecy, public procurement law, copyright law and data-protection law. Historically, IT in public administration was first concerned with deploying computers, connecting services to the internet, putting webpages in place and, finally, connecting with people. Its deployment was thus an organisational and internal administrative issue, echoing the dimension that has shaped the division between public administration studies and administrative law (Metzger, 2014). However, the widespread use of advanced technologies has raised the question of how to reconcile some of the technology’s features with administrative law principles. The following section introduces the various scenarios in which the use of GenAI could raise legal (and ethical) issues.

2.2 Use cases and scenarios of GenAI

Because IT is first a question of internal organisation, GenAI systems – used as tools or services – can be viewed as technical means with the potential to contribute to or carry out diverse administrative activities. GenAI systems can help civil servants complete their various tasks. Regarding how the technological apparatus is developed and integrated into public administration, numerous possibilities are open to governments (in-house development and implementation, outsourcing or mixed solutions). These possibilities entail differences in how obligations and liabilities are shared and attributed. In the following paragraphs, we present a number of real-life initiatives that illustrate the various possibilities open to governments. We then organise these possibilities into three scenarios that shed light on the different assignments of obligations and responsibilities.

Some of the already documented or envisioned uses of GenAI comprise the following. The state of California asked technology companies to propose GenAI tools that could help reduce traffic jams and make roads safer for pedestrians, cyclists and scooter riders (Sowell, 2024). Spain and Denmark are considering using GenAI in their policies of promoting less-used languages (IBM Newsroom, 2024). The EC has developed a translation tool known as “eTranslation.” Additionally, the Commission and the Court intend to deploy many AI solutions. The EC projects the use of AI tools to enhance “document summarisation capabilities, streamlining the preparation of briefings and responses to questions,” to introduce “a conversational platform that supports non-classified human-like dialogues” and to provide “services to leverage the vast data, information, and knowledge base that the administration has across various business areas” (European Commission, 2024, p. 7). The EC is also exploring GenAI and LLM technologies to help case handlers deal with complaints in shorter time spans by providing “smart search,” or semantic search capabilities, and “smart drafting,” allowing for the reuse of past replies to similar complaints (European Commission, 2024, p. 7). The CJEU hopes that all staff members can benefit from an AI-powered virtual assistant, that AI avatars will be used to deliver internal training and produce didactic materials, and that SIGA will be able to generate classification indicators and case correlations (Court of Justice of the European Union, 2024). All of these projects show that GenAI tools will permeate all types of administrative activities, from supporting staff in preparing binding and non-binding acts to engaging with the public.

Deploying GenAI to pursue administrative action can follow three scenarios. The first scenario occurs when the administration develops its own GenAI system. For example, the IT unit of the Directorate-General for Translation of the EC developed LLM-based tools such as eSummary and eBriefing.Footnote 6 eSummary is built on an open-source LLM accessible on HuggingFace. eBriefing uses commercial LLMs to increase the robustness of the tool. Such GenAI systems can also take the form of chatbots: the French AI chatbot “Albert” supports civil servants in their daily tasks (Koller, 2024), the Portuguese Sigma chatbot responds to citizensFootnote 7 and, soon, the AI chatbot “Aristote” should answer French students’ questions. Some tools are only for internal use (eSummary, eBriefing and Albert), whereas others are accessible to citizens (e.g., Sigma). A second scenario occurs when the public authority purchases access to an LLM-based tool from a third party that fine-tunes the AI system based on the needs and data of the user – the administration. For example, in the US, the Defense Department tested a GenAI tool (“Acqbot”), developed by a private company, to help agencies write contracts and speed up the federal acquisition process.Footnote 8 The last scenario occurs when, in the scope of their administrative tasks, agents use a “public” GenAI system (i.e., a solution accessible to everyone either free of charge or for a fee). Such a situation arises when using tools such as ChatGPT or Bard. Figure 2 illustrates the various scenarios.

Figure 2. Scenarios.

Each scenario depends on political choices. From this perspective, Scenario I requires human, financial and computing resources. It requires a model, meaning a neural network architecture – for instance, transformers, auto-encoders, generative adversarial networks or diffusion models – that has learned patterns from a large set of data.Footnote 9 Alternatively, it may use an open-source model that is fine-tuned on internally collected and pre-processed training data, drawing on internal technical expertise. Such a scenario implies financial investment and a long-run perspective on technology and public administration. It is also strongly aligned with plans for increased digital sovereignty of public administration, a key challenge in cybersecurity and industrial policy for governments. Scenario II follows the commercial logic of the software world, choosing between “off-the-shelf” solutions – readily available products designed for a more extensive user base – and tailored ones. These solutions mainly rely on private actors and are market oriented. They involve outsourcing human and technical requirements and imply limited control of the administration over its tool, which will be regulated by the service provider’s contractual provisions and conditions. Scenario III echoes the data platform model already established by social media companies, where services are freely accessible in exchange for users’ data. It does not require specific financial, human or IT resources, but the protection of confidentiality, personal data or copyright can be fragile. In terms of cybersecurity, every scenario carries a different distribution of risks and perspectives on dangerous “backdoors” for unauthorised actors, and usage comes without guarantees. Additionally, each scenario entails specific risks concerning the training data (internal and external) that may negatively impact the model’s performance.

To sum up, from an administrative law perspective, deploying GenAI in public administration can be considered an aspect of the non-legally binding measures that an administration can put in place. Nonetheless, GenAI can be used by public administration for different purposes and contribute to law-making, decision-making, and policy adoption and implementation activities. GenAI tools are, therefore, likely to have a legal effect on people outside the administration. Hence, their use may require regulatory or even legislative measures to ensure that, with such technological means, the public administration remains faithful to administrative law principles such as legality, transparency, good administration, accountability, prohibition of arbitrariness, equality, proportionality and good faith. Some of these principles also bind administrations in their internal activities. Additionally, legal rules such as official secrecy, copyright law and data protection can be relevant in given contexts, constraining public agents and public administrations. In the following section, we analyse how GenAI has been regulated so far in US and European institutions.

3. Exploring the regulatory measures for governing GenAI

In this section, we analyse the US’s and EU’s regulatory measures adopted in the aftermath of the ChatGPT release in November 2022. From a legal perspective, authorities have responded using the various types of regulatory instruments available; this ensemble of instruments is often referred to as a “regulatory toolbox.” This section focuses on these regulatory documents, understood as all kinds of documents requiring administrations and agents to comply with rules and procedures regarding the implementation of such technology in public administrations (Fisher & Shapiro, 2020). These regulatory documents form a disparate assemblage of measures. In an attempt to systematise them, we propose to apply to their study some of the categories informed by regulatory studies: command-and-control, preventive (or risk-based), experimental and management-based. Considering that the American states were the first to regulate the deployment and use of GenAI systems in their administrations, we start with the US (Section 3.1). Then we explore the EU’s responses (Section 3.2). Figure 3 illustrates the different regulatory approaches in the regulatory toolbox.

Figure 3. Regulatory toolbox.

3.1 In the United States of America

In the US, both levels of government adopted regulatory measures presenting a continuum of responses, from prescriptive to experimental solutions. Because the American states were the first to address the question of deploying GenAI in state departments and agencies, we start by analysing their executive orders (EOs) (Section 3.1.1) and then consider the federal response (Section 3.1.2).

3.1.1 Regulating GenAI at the state level

The scope of analysis is limited to the first eight states that regulated the deployment of AI by administrative bodies with EOs: Kansas (July 25), Wisconsin (August 23), California (September 6), Oklahoma (September 25), Virginia (September 29), Pennsylvania (September 29), New Jersey (October 10) and Oregon (November 28) (Do, 2023). Not all EOs specifically mention GenAI or provide the same degree of precision regarding the regulatory measures put in place. For this reason, we concentrate on Kansas, California, Pennsylvania, Oklahoma and New Jersey.

First, the EOs include a series of measures that can be classed as command-and-control. In this regard, only the Kansas and Pennsylvania EOs contain direct injunctions regarding the use of GenAI. The Kansas policy lays down rules establishing situations in which the use of AI must be limited; it states that AI output “shall not be assumed to be truthful, credible or accurate, be used to issue official statements (i.e., policy, legislation or regulations), not be treated as the sole source of reference, not solely be relied upon for making final decisions, not be used to impersonate individuals and organisations.” Providing restricted information or copyrighted and proprietary materials is also prohibited when interacting with GenAI under the Kansas policy (State of Kansas, 2023, 9.2.3 and 9.2.5). Contractors must disclose their utilisation or integration of GenAI in their programmes (State of Kansas, 2023, 9.2.7). They cannot use restricted information or other confidential data in GenAI queries for building and training proprietary GenAI programmes unless explicitly approved by the agency head in consultation with the chief information security officer (State of Kansas, 2023, 9.2.8).

Other measures are oriented towards prevention and echo a risk-based approach. Such a preventive approach requires, for example, conducting a preliminary assessment and having responses generated from GenAI outputs reviewed by a “knowledgeable human” (State of Kansas, 2023, 9.2.1). When using GenAI tools, agency external-facing services or dataset inputs or outputs shall disclose the use of AI and what bias testing was done, if any (State of Pennsylvania, 2023). A few measures express an experimental approach. In this regard, some EOs encourage state agencies to pilot GenAI projects and explore new ways of utilising technology to improve services and administration (State of California, 2023, Sections 3f, 3g; State of Pennsylvania, 2023, Section 4).

Beyond these regulatory measures, the EOs share a common idea around the need to set up one or several governance bodies for monitoring AI systems and GenAI. They put in place governance bodies (such as AI task forces or GenAI governance boards), regulatory bodies (AI councils) or mandated state agencies (State of New Jersey, 2023).Footnote 10 These bodies play different roles in implementing measures to ensure information, training, planning and evaluation. In this context, some bodies are required to gather information about emerging technologies, assess their impacts on vulnerable communities or collect salient cases.Footnote 11 Some EOs also require the governance bodies to update knowledge among civil servants (State of California, 2023, Section 5; State of Oklahoma, 2023, Section 7a; State of Oregon, 2023; State of Pennsylvania, 2023, 4.a.6). They are charged with drafting new guidelines regarding public procurement and the use of GenAI tools where the EO did not itself include such rules (State of Oklahoma, 2023). Some EOs also require the administration to adopt a comprehensive approach to GenAI, from design, development and procurement to deployment. All of these elements echo another category of regulation: management-based regulation (Coglianese & Starobin, 2020).

3.1.2 In the federal government

On 30 October 2023, President Biden signed EO 14110, regulating the use of AI systems and GenAI in federal departments and agencies (Exec. Order No. 14110, 2023).Footnote 12 The EO is intended to bring uniformity to administrative practices, following scattered and contradictory responses from US federal agencies. For example, due to privacy and security concerns, the Environmental Protection Agency did not permit its employees to use GenAI tools for official purposes (Borgardus, 2023), whereas the Defense Department was testing a GenAI tool to help agencies write contracts and speed up the federal acquisition process. The presidential EO adds a new regulatory layer to a range of legislative and executive documents on AI, such as the Blueprint for an AI Bill of Rights and the NIST “AI Risk Management Framework.”Footnote 13 Moreover, two internal documents extend the EO on the issue of the deployment and use of GenAI in federal departments and agencies: the Office of Personnel Management guidance on “Responsible Use of Generative Artificial Intelligence for the Federal Workforce” (Office of Personnel Management, n.d.) and the Memorandum Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence of the Office of Management and Budget (OMB) (Office of Management and Budget, 2024). In the federal EO, we also find the four regulatory approaches – command-and-control, preventive, experimental and management-based – that we identified at the state level.

The federal EO poses no general ban or block on using GenAI in federal departments and agencies (Section 10, f, i). However, the Office of Personnel Management (OPM) offers general guidance on using GenAI in the logic of command-and-control. It includes a few principles, such as “do not input non-public information into public GenAI interfaces”; “Do not leave GenAI technology to operate autonomously without human accountability and control”; “Review all AI-generated materials, including sources cited, to check that they are valid, accurate, complete, and safe”; “Do not tamper with or attempt to turn off guardrails against unsafe, offensive, and misleading inputs or outputs of GenAI systems”; and “follow your agency’s policy on when and how to disclose uses of GenAI” (Office of Personnel Management, n.d.).

More broadly, the federal EO requires that federal agencies adopt a risk-based approach when deploying AI systems if they impact the rights and safety of the public. It contains a series of measures regarding GenAI and the “dual-use foundation model,” which must be developed by the agencies as part of their regulatory activity. For example, it requires that the director of the OMB provide guidance that “shall specify, to the extent appropriate and consistent with applicable law, (…) external testing for AI, including red-teaming for GenAI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency” (Section 10, b, viii [A]). It shall also include testing and safeguarding against discriminatory, misleading, inflammatory, unsafe, deceptive, unlawful or harmful outputs for GenAI (Section 10, b, viii [B]) and the steps to watermark or label output from GenAI (Section 10, b, viii [C]). Moreover, the OMB’s Memorandum provides a list of presumed risks where AI can be safety-impacting, such as controlling or meaningfully influencing the outcomes of a series of activities (e.g., dam management, emergency services, electrical grids, transportation, industrial emissions, environmental impact control processes) (Appendix I, 1), or where AI can impact rights by controlling or meaningfully influencing the outcomes of activities and decisions regarding individuals (e.g., free speech, law enforcement, immigration and asylum, measuring emotions, decisions regarding medical devices, medical diagnostic tools, loan-allocation processes, government benefits, decisions about child welfare or child custody) (Appendix I, 2). GenAI can be deployed in some of these fields of activity. In such cases, agencies will have to carry out an impact assessment of the AI, test the performance of the AI in a real-world context and carry out an independent evaluation covering the two previous elements (Section 5). For each aspect, the memorandum enumerates a long list of measures.
For example, the impact assessment will cover the objectives and expected benefits, the potential risks and the stakeholders who will be most affected. In this respect, the memorandum also requires that agencies not use AI if the benefits do not significantly outweigh the risks (Section 5, c, iv, A, 1). The impact assessment must also include an evaluation of the quality and adequacy of the data used for the AI’s design, development, training, testing and operation (Section 5, c, iv, A, 3). In the case of GenAI developed by private actors, agencies may find themselves in a situation where they do not have access to the data. In this case, they must obtain sufficient descriptive information from the supplier about the provenance and quality of the data.

The EO also encourages the adoption of provisions that embrace the experimental approach to regulation. For example, agencies are encouraged to provide access to secure and reliable GenAI capabilities for purposes of experimentation “that carry a low risk of impacting Americans’ rights” (Section 10.1, f [i]). It also authorises “AI red-teaming,” described as a “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI” (Section 3, d).

Finally, adopting an innovation-oriented mindset, the EO encourages agencies to increase their capacity to successfully and responsibly adopt AI, including GenAI, into their operations. To achieve this goal, agencies must develop an enterprise AI strategy (Office of Management and Budget, 2024, Section 1). These last provisions illustrate that the management-based approach also infuses the regulation of GenAI.

3.2 EU institutions

On the other side of the Atlantic, EUIs also support the use of GenAI in public administration. EUIs published internal strategies to advance AI development and deployment, and the European legislator adopted the AI Act, which contains provisions that apply to actors throughout the AI supply chain, whether private or public. The European Data Protection Supervisor also indicated that there is no obstacle in principle to developing, deploying and using GenAI systems in the provision of public services, provided that the EUIs’ rules allow it and that all applicable legal requirements are met, specifically considering the special responsibility of the public sector to ensure full respect for the fundamental rights and freedoms of individuals when making use of new technologies (European Data Protection Supervisor, 2024). Our investigation is limited to the EC, the CJEU and the Artificial Intelligence Act (AI Act) for reasons of data availability and because the selected institutions share the same perspective.Footnote 14 For their analysis, we apply the same analytical grid distinguishing between command-and-control, risk-based, experimental and management-based approaches.

From a chronological perspective, internal guidelines regarding public GenAI tools, such as ChatGPT, were the first documents adopted by the European bodies (European Commission, 2023).Footnote 15 Formally, these guidelines are not legally binding documents. However, they sometimes refer to legal obligations and convey injunctions to staff members in a logic of command-and-control. In this respect, the EC and CJEU guidelines are closely similar.Footnote 16 The EC guidelines were first drafted in 2023 by the Information Management Steering Board. The document is designed to help staff assess the risks and limitations of such tools. It also clearly indicates that these risks and limitations are not necessarily relevant for internally developed GenAI tools (and, therefore, would not be relevant for Scenario I); for those, the assessment is regulated by the “corporate governance for IT systems.” The EC guidelines document is described as a “living document” that will need to be updated as needed, taking the AIA into account. It states five rules, which are especially relevant when the use of GenAI entails a Scenario III context. First, “staff must never share any information that is not already in the public domain, nor personal data, with an online available generative AI model,” considering that “staff are bound by art. 17 of the Staff Regulations, which forbids the unauthorised disclosure of any information received in the line of duty unless it is already public.” Rule 2 requires that “(s)taff should always critically assess any response produced by an online available generative AI model for potential bias and factually inaccurate information” (key issues for public administration services). Rule 3 is that “(s)taff should always critically assess whether the outputs of an online generative AI model are not violating intellectual property rights, in particular copyright of third parties.” Rule 4 states that “(s)taff shall never directly replicate the output of a generative AI model in public documents, such as the creation of Commission texts, notably legally binding documents.” Finally, “(s)taff should never rely on online available generative AI models for critical and time-sensitive processes” (European Commission, 2023). Similarly, the CJEU sets out five principles regarding the use of non-approved online available GenAI tools (Scenario III) (Court of Justice of the European Union, n.d.).Footnote 17 One principle differs from the EC guidelines: it recommends that “staff members discuss and agree with their colleagues on how to ensure human quality control to the information generated via AI tools. They should ensure, through human review, [the quality] of all outputs based on AI inputs, after careful reflection on the need to use generative AI” (Court of Justice of the European Union, n.d., pp. 3–4).

Moreover, in 2024, the two European institutions adopted their own AI strategies, which play an internal organisational role and guide administrative action. These documents echo the preventive or risk-based approach and the management-based approach. They put in place an overarching framework for action inside each institution and specific plans to achieve the desired outcomes. In this regard, both strategies strengthen IT governance regarding AI, which will play a decisive role in the assessment, implementation and evaluation of each AI and GenAI system. For example, the European Commission Information Technology and Cybersecurity Board (ITCB) is in charge of conducting the preliminary assessment of AI initiatives’ compliance with ethical, legal and regulatory requirements (European Commission, 2024, p. 5).Footnote 18 The CJEU added two new bodies to the two already responsible for architecture and data governance. The first takes the form of an initiative whose mission is to identify potential areas in which AI tools can bring benefits and to set up pilot projects/prototypes to test those benefits (European Commission, 2024, p. 5).Footnote 19 The second structure is a committee that drafts policies and guidelines.Footnote 20 The strategies also identify potential new use cases in which GenAI can play a determinant role, such as “[supporting] the legislative process, policy monitoring and responses to parliamentary questions” by carrying out impact assessments of major legal proposals, searching and analysing legislation, assessing the impact of new legislation on existing European and national legislation, supporting legislative (thematic) negotiation or “[supporting] the drafting of non-sensitive content for briefing, reports, and other documents” (European Commission, 2024, p. 9).

Beyond these administrative documents, the European legislator adds a new layer of obligations through the AIA (Regulation (EU) 2024/1689). It includes a complex set of compliance obligations that follow the preventive regulatory approach and will bind public administrations in the EU. In this respect, the scope of obligations will vary depending on the purpose (high-risk or not) and on whether the public administration is a provider or only a deployer of the GenAI system, underlining the importance of determining the scenarios in which the technology is developed and deployed.Footnote 21

In the case of developing a GenAI model to carry out an activity that is not categorised as high-risk, the administration acts as a provider and has to fulfil a series of obligations. For example, when IT services develop an in-house general-purpose AI system (Scenario I), the administration “shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated,” except if the systems perform “an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof” (art. 50 [2], AI Act). Except under certain circumstances, and unless the interaction with the machine is obvious, administrations will also have to ensure that citizens who interact with the GenAI system are informed in a clear and distinguishable manner that they are interacting with a machine (art. 50 [1] and [3], AI Act). It is important to underline that compliance with this obligation must be assessed by reference not only to the average person but also to persons belonging to vulnerable groups due to their age or disability (Recital 132). As providers, administrations will have additional transparency obligations, namely making publicly available a sufficiently detailed summary of the content used to train the AI model, including the main data collections or sets (art. 53 [1] AI Act). If administrations want to use a tailored GenAI system or integrate a GenAI model interface into their own AI system, they will also need to show diligence in obtaining access to technical documentation and to a comprehensive summary of the datasets, except if the AI model is under a free and open-access licence (art. 53 [2], AI Act). The exception is justified because “open-source licences should be considered to ensure high levels of transparency and openness if their parameters” are made publicly available (Recital 102).
Administrations can also be in the position of deployers when using a GenAI system. In this case, they must disclose that the text has been artificially generated or manipulated (art. 50 [4] AI Act). However, such an obligation does not apply where the use is authorised by law for criminal prosecution, where a human reviews the generated content or exercises editorial control, or where a natural or legal person is responsible for the content’s publication.
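To make the marking duty more concrete, the art. 50(2) requirement that outputs be “marked in a machine-readable format and detectable as artificially generated” can be sketched as follows. This is a minimal, hypothetical illustration only: the AI Act does not prescribe any particular format, real deployments would likely rely on an emerging provenance standard (such as C2PA content credentials) rather than an ad hoc JSON wrapper, and the function and field names here (`mark_output`, `is_ai_generated`, `provenance`) are the author’s illustrative assumptions.

```python
import json
from datetime import datetime, timezone

def mark_output(text: str, model_id: str) -> str:
    """Wrap generated text in a machine-readable provenance record.

    Hypothetical sketch of an art. 50(2)-style disclosure marker; the
    AI Act does not mandate this (or any specific) format.
    """
    record = {
        "content": text,
        "provenance": {
            "artificially_generated": True,   # the disclosure flag itself
            "generated_by": model_id,         # which system produced the text
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

def is_ai_generated(marked: str) -> bool:
    """Detect the disclosure flag automatically, without human inspection."""
    try:
        return json.loads(marked)["provenance"]["artificially_generated"] is True
    except (json.JSONDecodeError, KeyError, TypeError):
        # Anything that does not carry the record is treated as unmarked.
        return False
```

The point of the sketch is that “machine-readable” marking must survive round-tripping: any downstream system (including one run by another administration) can call a detector such as `is_ai_generated` on the output and recover the disclosure without human review.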

However, it cannot be excluded that GenAI systems could be used to make decisions affecting terms of employment or access to essential public services and benefits, or to contribute to the administration of justice, for example by assisting a public authority in interpreting facts and the law. Such purposes fall into the high-risk category established in Annex III, and in this circumstance the compliance obligations are more difficult to disentangle. Two options can be identified. In the first, a human agent remains in the decision loop or in an editorial role; the GenAI system can then be considered not to play a determinant role, and there is no need to comply with the specific obligations attached to the high-risk deployment domain (Recital 53). The second option assumes that GenAI’s impact on decision-making is not incidental or that the decision-making process does not ensure a decisive role for human control.Footnote 22 The degree of autonomy of the human-in-the-loop with regard to the technological aid should be the subject of a prior, documented assessment (Recital 53). If the intervention of the AI system is not considered merely incidental, the administration acting as a provider will have to comply with a series of obligations, such as implementing a risk management system for monitoring and mitigating the risks to which GenAI can contribute, notably regarding fundamental rights (art. 9, AI Act), putting in place safeguards regarding data preparation and data governance (e.g., bias-detection processes) (art. 10, AI Act), keeping up-to-date technical documentation (art. 11, AI Act) and ensuring the accuracy, robustness and cybersecurity of the tool (art. 15, AI Act).

Additionally, the analysed documents show that the EU and its institutions want to promote AI innovation through an experimental approach. The CJEU and EC strategies mention a two-stage approach. The first stage is explorative, meaning that the institutions deploy and test AI solutions in their services. The second is the “industrialisation phase,” triggered when results are positive (European Commission, 2024, p. 9). This is the case with the GPT Lab at the DGT and the GPT@JRC and AI sandbox at the JRC (European Commission, 2024, p. 9). Other initiatives could be developed through the AI@EC network. The CJEU document indicates that the last three years served as an experimentation phase, with 30 projects, two-thirds of which proved viable. The institution relied on an AI+ Network and implemented an Innovation Lab where it meets industry. However, while the ITCB conducts a regulatory assessment at the implementation stage, test protocols are still awaited. The experimental approach is also embedded in the AI Act, which authorises experimentation through regulatory sandboxes (art. 57 (1) AI Act) and real-world testing (art. 60 AI Act). For the EUIs, regulatory sandboxes will be conducted by and under the supervision of the European Data Protection Supervisor (art. 57 (3) AIA).

The EU regulatory documents follow the same regulatory path as those in the US, combining command-and-control aspects with risk-based, experimental and management-based approaches. In the next section, we discuss their convergences and nuances.

4. Convergences and nuances in the regulatory awakening

Before highlighting the convergences and nuances among the regulatory approaches deployed in the US and in the EUIs, it must be underlined that the regulatory documents express a clear and shared belief that GenAI is a supportive and necessary tool for public administration staff in the AI era. This supportive dimension is illustrated by the fact that no absolute prohibition has been proclaimed, and where severe limitations have been imposed, they relate to specific cases (e.g., the use of freely accessible GenAI solutions, as in Scenario III). Furthermore, in some cases, the expectations raised by this technology appear extremely high, echoing the somewhat naïve belief that technology will be able to solve all the problems and criticisms of bureaucracy. For example, in the CJEU case, GenAI is supposed to enhance the effectiveness of administrative and judicial processes, contribute to the ideal of justice by fostering the quality and consistency of judicial decisions, and increase access and transparency for EU citizens (notably people with disabilities, who could benefit from chatbots, virtual assistants and AI avatars supporting their quest for information). Such a vision seems to forget that technology is always rooted in a social, economic and political context and that simply deciding to implement or use it is not enough to make an organisation’s recurring problems evaporate.

The risk-based approach is the means to pave the way for the deployment of GenAI. The regulation of GenAI use by public administrations takes the form of a series of compliance measures and establishes a governance structure dedicated to AI, which is called upon to play a key role in meeting compliance obligations. There is also a shared commitment that the technology and its uses comply with human rights. The EC will refrain from using AI systems incompatible with European values or threatening security, safety, health and fundamental rights. The US federal EO announces that all existing AI systems should be assessed and dismantled if incompatible with civil rights. The use of GenAI tools needs to be human-centred and equitable, tested for and protected against bias so that it does not favour or disadvantage any demographic group. Agencies should ensure that they do not unlawfully discriminate against or disparately impact individuals or communities based on ancestry, disability, age, sex, national origin, familial status, race, gender, religious creed, colour, sexual orientation, gender identity or gender expression. On this point, the US NIST framework could serve as a landmark regulatory tool.

The preventive and management-based approaches do not exclude certain explicit and implicit constraints in the logic of the command-and-control regulatory approach. In this regard, GenAI raises a problem of secrecy in the conduct of public affairs and the duty of secrecy of public agents, especially in Scenarios II and III. It also exposes civil servants to the risk of breaching the law (and violating the principle of legality). All input data – including prompts – are exposed to disclosure risk. Input data can cover all types of data in the possession of the public administration (e.g., personal and other proprietary data, closed and open government data), and data reuse can be subject to legal requirements (such as personal consent or a legal basis). Civil servants could commit professional misconduct and be liable to disciplinary or criminal action if protected data are entered into the model. A second critical legal issue is discrimination. Public administration is required to respect the principle of equality and non-discrimination among individuals. Here, again, the large datasets on which GenAI systems are built make the task of combating bias particularly sensitive. Moreover, whatever the contractual arrangements regarding data, there is a risk of bias in the output, jeopardising the principles of impartiality and non-discrimination. Bias and discriminatory behaviour are intrinsic characteristics of GenAI models; complete protection against this problem, for instance through mechanisms that flag problematic categorisations, therefore remains a significant technical challenge. Moreover, the possibilities of legal responses to discriminatory algorithmic behaviour are scarce due to the difficulties of detecting and acting against it (Beaulieu & Leonelli, 2022; Pasquale & Malgieri, 2024).

The experimental approach has also been provided for in some regulations, but it remains in the shadow of the others. The EU strategies put in place AI governance for experimenting with in-house solutions (Scenario I), and the guidelines indicate strong caution regarding online available tools (Scenario III). The AI Act also favours open-access models, which benefit from fewer compliance obligations. On the other side, the US documents emphasise the need to update the public procurement process, opening up opportunities to adopt Scenario II. For example, states’ EOs mention the importance of reviewing regulations on public procurement (State of California, 2023, Section 3; State of Pennsylvania, 2023, Section 4). The federal documents also emphasise the need to adapt the regulatory framework for acquiring solutions developed by private players. Federal agencies are encouraged to submit proposals to the Technology Modernization Fund to fund projects, particularly on GenAI, to support mission execution (Section 10 [g] of the EO). They are encouraged to negotiate “appropriate terms of service with vendors” (Section 10.1, f [i] of the EO). The General Services Administration, in coordination with other departments, must develop a resource guide to take actions consistent with applicable laws to facilitate access to federal government-wide acquisition solutions for specific types of AI services and products, including GenAI solutions (Section 10.1, h of the EO). In addition, the US administration has to develop and publish “a framework for prioritising critical and emerging technology offerings (…), starting with generative AI offerings that have the primary focus of providing large language model-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators” (Section 10.1, f (ii) of the EO).
This also points to the different economic approaches to administrative transformation through the deployment of GenAI. The EU and US perspectives reflect different economic logics, which feed into their respective industrial policies regarding technology. Indeed, whereas the technology-oriented industrial policy of the EU is increasingly concerned with sovereignty and has relied heavily on publicly funded research and development, the US focuses on public–private partnerships, relying on a more dynamic, technology-focused private sector (Edler, 2024). Furthermore, the European approach can also be questioned in light of the current trend towards big data and the computational resources LLMs require. In this sense, it seems unrealistic to imagine that governments will not rely on outsourcing for GenAI models, despite the cautionary approach adopted by the internal guidelines regarding freely available online GenAI.

The analysis also reveals that regulators are sensitive to the risks of personal data and copyright breaches and the exposure of confidential information. Regulatory measures also emphasise the importance of transparency concerning technical documentation and traceability obligations. However, such transparency does not meet the legal understanding of transparency, which is a key value from a rule-of-law perspective. In administrative law, transparency might be understood as a minimal openness process, including access to documents and the publication of official measures (Hofmann, 2014). It is the basis for a whole series of individual rights guaranteeing political participation and protection against arbitrariness (the right to an effective remedy and a fair trial). In this regard, transparency is intrinsically related to accountability, and it has to be respected regardless of the means (Finck, 2020). The documentation-and-explanation approach does not resolve the technology’s opacity because the generated content cannot be wholly explained on a purely technical level.Footnote 23 The issue already arose with algorithmic tools, and alternative options have been debated: fully disclosing training data and source code, ensuring the explainability of the system through procedural guarantees, providing counterfactual explanations, or limiting the machine’s autonomy and placing it under the decision-making power of the human.

However, other important administrative law principles have been overlooked. For example, the principle of legal certainty requires administrations to act consistently, enabling individuals to regulate their conduct according to the law. It thus protects individuals from contradictory and inconsistent decisions and guarantees that demands will be dealt with within a reasonable time. The potential shortcomings of AI models due to the data and the design of the algorithms are generally considered an issue in terms of reliability and certainty (Floridi, 2023). Hallucination and output incoherence might create tension with the principles of legal certainty and the protection of legitimate expectations, both related to the principle of legality. The principles of proportionality and precaution concerning environmental impact, although transversal principles in European law, do not feature among the points to be discussed before deploying solutions operating with GenAI. On this point, we might wonder whether the regulatory measures ultimately reflect a discourse and an eagerness that are not always justified, to the point of missing some essential issues. Such a movement stands in clear opposition to one of the first lessons of the better regulation movement: making decisions based on facts rather than emotions.

Overall, the analyses of the whole ensemble of documents coming from various legal traditions converge on what is regarded as the most salient human rights issues arising from the use of GenAI: the aggravating effect it could have on discriminatory behaviour and arbitrariness, and the risks it represents for the protection of privacy and protected information. On these points, all institutions seem well aware of the risks and warn their staff about them. What remains to be seen is how this awareness will translate into effective prevention, compliance and enforcement of these regulatory frameworks. In this regard, what might play a significant role in the real-life effects of these regulations is whether or not they are backed by legally binding provisions equipped with enforcement measures. Here, the institutional frame of the EU may have a relative advantage, as it is backed by the recently adopted AI Act combined with the General Data Protection Regulation. However, the implementation of the AI Act is neither easy nor simple, so it remains to be seen whether the public administrations across the ocean will, in the near future, walk parallel or divergent paths.

5. Conclusions

This paper offers a preliminary analysis of the regulatory awakening regarding GenAI in public administration. The analysis was inspired by regulatory categories developed in regulation studies. It maps a set of measures echoing these different regulatory approaches to ensure trustworthiness around AI technology. This analytical grid helps highlight some nuances, marked in terms of openness to private technology players. On this point, the European institutions indicate a desire to develop in-house solutions with a view to digital sovereignty. The Americans, who benefit from a national network of technology companies, do not share the same concern, as underlined by the importance given to updating public procurement documents and procedures. Furthermore, beyond systematising the various measures anchored in the regulatory documents, the paper shows the key issues for public authorities regarding the use of GenAI. These are the points that have been most discussed in the press and by experts. However, other key legal issues are largely left in the shadows by regulators, such as how GenAI can operate according to the legal principles that should govern any administration. The documents also largely ignore the environmental and societal impact of GenAI.

This preliminary analysis does not claim to be exhaustive; it is exploratory. In this regard, it helps to formulate paths for future research. Moreover, the state of development of GenAI limited the scope of our research to a regulatory perspective. For legal research, case law will come later, allowing legal dogmatic analysis. In the meantime, sociolegal studies should be carried out to understand, for example, how civil servants use the technology and integrate the principles of good public administration and the other legal issues already pointed out in this paper into their use of it. Moreover, GenAI tools are human-enhancing technologies; they are not purely technical but sociotechnical. From this perspective, questions should arise about the trade-offs between expertise and technological solutions and the effects of broad use on collective life and on the administrative legal principles developed to guarantee accountability and prevent arbitrary action by the public administration. Furthermore, from a regulation studies perspective, the experimental approach could be the next generation of regulatory approaches. All these research perspectives could then contribute to revisiting the theory of administrative law in the context of the digital transformation of state activities.

Acknowledgements

The author would like to thank Professor Martin Ebers and Professor Cristina Poncibo for inviting her to submit an updated version of her contribution to the CUP Research Handbook on Generative AI and the Law.

Funding statement

The author received no financial support for the research, authorship and/or publication of this research.

Competing interests

The author declared no potential conflicts of interest with respect to the research, authorship and/or publication.

Footnotes

1 The “reinventing government” project was driven by the US President’s Office and supported by private companies such as IBM (Engstrom & Ho, 2020).

2 The idea of open government can be understood as the set of transformations in the governmental apparatus that embrace transparency, participation and accountability. Technological innovations would mediate this transformation. For more on Open Government as an international movement, see Piotrowski, Berliner and Ingrams (2022).

3 The early regulators were identified thanks to the work carried out by journalists and non-profit organisations. In this respect, we are grateful for the monitoring work carried out by the Euractiv journal (Bertuzzi, 2023) and the Future of Privacy Forum (Do, 2023).

4 This distinction attempts to group the different activities of the administration, inspired by Alexandre Flueckiger’s work on the different instruments for pursuing public tasks ((Re)faire la loi. Traité de légistique à l’ère du droit souple, Staempfli Editions, 2019). The first is the classic distinction in administrative law between the creation of legislative acts and acts of individual scope (decisions) – activities which, depending on the legal system, may be exercised by parliament and the government, but also by bodies to which a delegation of power has been entrusted. To this distinction, we add – and extend – the category of non-legal acts, which includes purely material acts and acts of informational scope. In this latter category, we also include acts of a political and organisational nature, as well as those with a legal effect that varies depending on who is reading them.

5 However, non-binding acts are sometimes reviewed by courts. In this regard, the CJEU upholds actions for annulment against non-binding measures if these measures have an obligatory legal effect on the interests of a private person or a legal effect vis-à-vis third parties. See: C-294/83, Les Verts v. Parliament [1986] ECLI:EU:C:1986:166.

9 However, this scenario could be more complex. The administration can contract out its IT services to an external service provider, which would then be responsible for the technical development of generative AI. In that case, however, there will be other important legal issues to settle in the contract concerning intellectual property rights and IT security, thus weakening the guarantee of digital sovereignty.

10 New Jersey’s EO charged the Office of Information Technology with developing a policy to govern and facilitate the use of AI by executive branch departments and agencies and with evaluating tools and strategies to improve government services through AI (10.c). The Office of Innovation shall develop a training program for staff members (10.d).

11 The New Jersey EO charged its AI task force with studying emerging AI technologies in order to issue findings on their potential impacts on society and to offer recommendations identifying appropriate government actions to encourage the ethical and responsible use of AI technologies (1).

12 For security reasons, the EO does not apply to the Government Accountability Office, the Federal Election Commission, the governments of the District of Columbia and of the territories and possessions of the United States and their various subdivisions, or government-owned contractor-operated facilities, including laboratories engaged in national defence research and production activities (44 U.S.C. 350(1)). In this framework, the scope of application covers executive departments, public corporations and other establishments in the executive branch of the government, and the Executive Office of the President. It also includes independent regulatory agencies such as the Federal Trade Commission and the Consumer Product Safety Commission (44 U.S.C. 350(5)). AI in national security is governed by Section 10(i) of EO 13960.

13 AI was on the political agenda at the end of the 2010s and was defined in Section 238(g) of the John McCain National Defense Authorization Act for fiscal year 2019. The US Congress put in place a National AI Advisory Committee. The Office of Management and Budget (OMB) was required to issue guidance to federal agencies and to create an AI Center of Excellence within the General Services Administration (AI in Government Act, 2020). Congress also adopted the AI Training Act (Artificial Intelligence Training for the Acquisition Workforce, PL 117-207) to help train federal employees on AI’s capabilities and risks, and the Advancing American AI Act (James M. Inhofe National Defense Authorization Act for fiscal year 2023, PL 117-263, Title LXXII, subtitle B) to encourage agency AI-related programs and initiatives that enhance US competitiveness in terms of innovation and entrepreneurship. The first steps were dedicated to building the general governance of AI with the National AI Advisory Committee and the Center of Excellence, and focused on the government rather than on private companies, with the adoption of the AI in Government Act. Two executive orders had already been adopted – Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence” (2019), and EO 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” (2020), translating the ethical principles of the OECD recommendation – but they were only partially implemented. The latter was updated by the NIST AI 100-1 for generative AI and by the secure software development framework for generative AI and for dual-use foundation models.

14 At the time of writing, Parliament’s AI strategy was not yet public. On 16 April 2024, the DG for Innovation and Technological Support (ITEC) of the European Parliament (EP) adopted the guidelines “Use of publicly available Artificial Intelligence Tools for Parliament Staff” (European Parliament 2024). We obtained access to this document through a request for access to public documents. Given this limited information, we mention the European Parliament only in footnotes. We did not include the first EDPS orientations for ensuring data protection compliance when using generative AI systems, published in June 2024, which focus on data protection: https://www.edps.europa.eu/system/files/2024-06/24-06-03_genai_orientations_en.pdf (last consulted 28 June 2024).

15 We obtained access to this document through a request for access to public documents.

16 The European Parliament guidelines set out four principles: (1) non-disclosure and personal data protection, which echoes rule and principle 1 of the EC and CJEU documents; (2) content responsibility, which is similar to rule 4 of the EC; (3) transparency and compliance; and (4) autonomy and business continuity, which is close to rule 5 of the EC. The transparency and compliance principle is the most distinctive compared with the other two sets of guidelines. It means that Parliament staff must always properly reference sources when making substantial use of generative AI tools, using the disclaimer “AI-assisted.” Substantial use is defined as “interpreting data analysis, generating draft legislation, developing hypotheses, etc.” It does not cover the use of generative AI as a basic author support tool.

17 The qualification of “non-approved online available generative AI tools” implies that an internal body of the ECJ can approve such tools and lift the injunctions.

18 The work of the ITCB has already resulted in the publication of several reports that aim to develop a framework of good cybersecurity practices for AI. On this subject, see: https://www.enisa.europa.eu/topics/iot-and-smart-infrastructures/artificial_intelligence

19 At the EC level, the AI@EC network is in charge of identifying AI opportunities and use cases alongside their potential risks, facilitating the exploration of new AI projects and services, and building and sharing knowledge in the AI area. At the ECJ, the “AI + Network” has the mission “to detect the areas in which AI tools will bring benefits to current activities.” It will also be “in charge of the prototypes and/or pilots designated to test the envisaged capabilities and to assess the benefits of their realization” (Court of Justice of the European Union, 2024, p. 20).

20 At the EC, the Interservice Steering Group on AI (ISG-AI) will coordinate and implement AI initiatives and oversee the AI@EC network’s activities. It will also set policies and frameworks and draft operational guidelines for using AI in the Commission (European Commission, 2024, p. 5). At the ECJ, the AI management board will be in charge of ensuring that the acquisition or creation of any AI tool respects ethical principles and fundamental rights. It will issue an ethical and fundamental rights charter that will serve as the basis for assessing any decision to acquire or create AI tools. It will also define “red lines” in case of a high level of risk in the adoption of AI in certain business areas or with certain AI tools (Court of Justice of the European Union, 2024, p. 20).

21 We did not include the hypothesis of a general-purpose AI model with systemic risk, considering it highly unlikely that a European public administration would develop such a high-performance model.

22 Such a decision-making process could then also raise an issue under Article 22 GDPR.

23 In this sense, it is interesting to mention Italian case law, which already recognises that, when confronting automated decision-making, the principle of transparency should ensure full knowledge of the existence and of the substance of the algorithms employed in the case. For more on this issue, see: Galetta and Pinotti (2023). Automation and algorithmic decision-making systems in the Italian public administration. CERIDAP, 2023(1), 13–23.

References

Baily, M. N., Brynjolfsson, E., & Korinek, A. (2024). Machines of mind: The case for an AI-powered productivity boom. https://www.brookings.edu/articles/machines-of-mind-the-case-for-an-ai-powered-productivity-boom/
Beaulieu, A., & Leonelli, S. (2022). Data and society: A critical introduction. Sage Publications.
Bell, J. S. (2019). Comparative administrative law. In Reimann, M. & Zimmermann, R. (Eds.), The Oxford handbook of comparative law (pp. 1251–1275). Oxford University Press.
Black, J. (2012). Paradoxes and failures: ‘New governance’ techniques and the financial crisis. Modern Law Review, 75(6), 1037–1063. https://doi.org/10.1111/j.1468-2230.2012.00936.x
Black, J., & Douglas Murray, A. (2019). Regulating AI and machine learning: Setting the regulatory agenda. European Journal of Law and Technology, 10(3). https://ejlt.org/index.php/ejlt/article/view/722/980
Bright, J., Enock, F. E., Esnaashari, S., Francis, J., Hashem, Y., & Morgan, D. (2024). Generative AI is already widespread in the public sector. https://arxiv.org/abs/2401.01291
Cantens, T. (2024). How will the state think with ChatGPT? The challenges of generative artificial intelligence for public administrations. AI and Society, 1–12. https://doi.org/10.1007/s00146-023-01840-9
Citron, D. K. (2007). Technological due process. Washington University Law Review, 85, 1249–1313. https://journals.library.wustl.edu/lawreview/article/id/6697/
Coglianese, C., & Lazer, D. (2003). Management-based regulation: Prescribing private management to achieve public goals. Law & Society Review, 37(4), 691–730. https://doi.org/10.1046/j.0023-9216.2003.03703001.x
Coglianese, C., & Starobin, S. (2020). Management-based regulation. In Richards, K. R. & Van Zeben, J. (Eds.), Policy instruments in environmental law. Edward Elgar.
Court of Justice of the European Union. (2024). Artificial intelligence strategy. Directorate-General Information. https://ieu-monitoring.com/editorial/artificial-intelligence-the-strategy-of-the-eu-court-of-justice/443217?utm_source=ieu-portal
Court of Justice of the European Union. (n.d.). Staff guidelines on the use of non-approved online available generative Artificial Intelligence (AI) tools. Document obtained through a request based on the right of access to documents. Request n. 0020/2023D from 02.12.2023.
Do, B. (2023). A blueprint for the future: White House and states issue guidelines on AI and generative AI. https://fpf.org/blog/a-blueprint-for-the-future-white-house-and-states-issue-guidelines-on-ai-and-generative-ai/
Dunleavy, P., & Margetts, H. (2023). Data science, artificial intelligence and the third wave of digital era governance. Public Policy and Administration. https://doi.org/10.1177/09520767231198737
Edler, J. (2024). Technology sovereignty of the EU: Needs, concepts, pitfalls and ways forward. https://doi.org/10.24406/publica-3394
Engstrom, D. F., & Ho, D. E. (2020). Algorithmic accountability in the administrative state. Yale Journal on Regulation, 37. http://hdl.handle.net/20.500.13051/8311
European Commission. (2023). Guidelines for staff on the use of online available generative artificial intelligence tools. Document obtained through a request based on the right of access to documents. Request n. 2024/1925 from 08/04/2024.
European Commission. (2024). Communication to the Commission, Artificial intelligence in the European Commission (AI@EC): A strategic vision to foster the development and use of lawful, safe and trustworthy artificial intelligence systems in the European Commission. C(2024) 380 final. Brussels. https://commission.europa.eu/publications/artificial-intelligence-european-commission-aiec-communication_en
European Data Protection Supervisor. (2024). The first EDPS orientations for ensuring data protection compliance when using generative AI systems. https://www.edps.europa.eu/data-protection/our-work/publications/guidelines/2024-06-03-first-edps-orientations-euis-using-generative-ai_en
European Parliament. (2024). Guidelines: Use of publicly available Artificial Intelligence tools for Parliament staff. Document obtained through a request based on the right of access to documents.
Exec. Order No. 14110, 88 Fed. Reg. 75191 (2023). Safe, secure, and trustworthy development and use of artificial intelligence. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
Finck, M. (2020). Automated decision-making and administrative law. In Cane, P., Hofmann, H. C. H., Ip, E. C., & Lindseth, P. L. (Eds.), The Oxford handbook of comparative administrative law. Oxford University Press. Max Planck Institute for Innovation & Competition Research Paper No. 19-10. https://ssrn.com/abstract=3433684
Fisher, E., & Shapiro, S. A. (2020). Administrative competence: Reimagining administrative law. Cambridge University Press.
Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. Philosophy and Technology, 36(1). https://doi.org/10.1007/s13347-023-00621-y
Galetta, D. U., & Pinotti, G. (2023). Automation and algorithmic decision-making systems in the Italian public administration. CERIDAP, 2023(1), 13–23.
Hofmann, H. C. (2014). General principles of EU law and EU administrative law. In Peers, S. & Barnard, C. (Eds.), European Union Law (2nd ed., pp. 198–226). Oxford University Press.
H.R. 2575, 116th Congress (2019–2020): AI in Government Act of 2020. (2020, September 15). https://www.congress.gov/bill/116th-congress/house-bill/2575
IBM Newsroom. (2024). IBM and the Government of Spain collaborate to advance national AI strategy and build the world’s leading Spanish language AI models. https://newsroom.ibm.com/2024-04-05-IBM-and-The-Government-of-Spain-Collaborate-to-Advance-National-AI-Strategy-and-Build-the-Worlds-Leading-Spanish-Language-AI-Models
Kankanhalli, A., Charalabidis, Y., & Mellouli, S. (2019). IoT and AI for smart government: A research agenda. Government Information Quarterly, 36(2), 304–309. https://doi.org/10.1016/j.giq.2019.02.003
Koller, R. (2024). Comment a été développée Albert, l’IA générative de l’Etat français [How Albert, the French State’s generative AI, was developed]. ICT Journal. https://www.ictjournal.ch/news/2024-04-26/comment-a-ete-developpee-albert-lia-generative-de-letat-francais
Margetts, H. (1995). The automated state. Public Policy and Administration, 10(2), 88–103.
McKinsey & Company. (2024). Deploying generative AI in US state governments: Pilot, scale, adopt. https://www.mckinsey.com/industries/public-sector/our-insights/deploying-generative-ai-in-us-state-governments-pilot-scale-adopt
Meinhardt, C., Lawrence, C. M., Gailmard, L. A., Zhang, D., Bommasani, R., Kosoglu, P. H., Russel, W., & Ho, D. E. (2023). By the numbers: Tracking the AI executive order. https://hai.stanford.edu/news/numbers-tracking-ai-executive-order
Mettler, T. (2019). The road to digital and smart government in Switzerland. In Ladner, A., Soguel, N., Emery, Y., Weerts, S., & Nahrath, S. (Eds.), Swiss public administration: Making the state work successfully (pp. 175–186). Palgrave Macmillan.
Metzger, G. E. (2014). Administrative law, public administration, and the Administrative Conference of the United States. The George Washington Law Review, 83, 1517–1539.
Napolitano, G. (2020). The rule of law. In Cane, P., Hofmann, H. C. H., Ip, E. C., & Lindseth, P. L. (Eds.), The Oxford handbook of comparative administrative law. Oxford University Press.
Office of Management and Budget. (2024). Memorandum for the heads of executive departments and agencies, M-24-10, advancing governance, innovation, and risk management for agency use of artificial intelligence. https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
Office of Personnel Management. (n.d.). Responsible use of generative artificial intelligence for the federal workforce. https://www.opm.gov/data/resources/ai-guidance/
Pasquale, F., & Malgieri, G. (2024). Generative AI, explainability, and score-based natural language processing in benefits administration. Journal of Cross-Disciplinary Research in Computational Law, 2(2). https://journalcrcl.org/crcl/article/view/59
Piotrowski, S. J., Berliner, D., & Ingrams, A. (2022). The power of partnership in open government: Reconsidering multistakeholder governance reform. MIT Press.
Ranchordas, S. (2021). Experimental regulations and regulatory sandboxes: Law without order? SSRN Journal. https://doi.org/10.2139/ssrn.3934075
Ranchordas, S., & Vinci, V. (2024). Regulatory sandboxes and innovation-friendly regulation: Between collaboration and capture. SSRN Journal. https://doi.org/10.2139/ssrn.4696442
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), text with EEA relevance.
Sowell, S. F. (2024). California seeking generative AI ideas to help lighten traffic jams. StateScoop. https://statescoop.com/california-traffic-jams-generative-ai/
State of New Jersey. (2023). Executive Order No. 346. https://nj.gov/infobank/eo/056murphy/pdf/EO-346.pdf
State of Oklahoma. (2023). Executive Order 2023-24, 25 September. https://www.sos.ok.gov/documents/executive/2084.pdf (last consulted 21 May)
State of Oregon. (2023). Executive Order No. 23-26: Establishing a state government artificial intelligence advisory council. https://www.oregon.gov/gov/eo/eo-23-26.pdf
State of Pennsylvania. (2023). Expanding and governing the use of generative artificial intelligence technologies within the Commonwealth of Pennsylvania. https://www.oa.pa.gov/Policies/eo/Documents/2023-19.pdf
Valli Buttow, C., & Weerts, S. (2022). Open government data: The OECD’s Swiss Army knife in the transformation of government. Policy & Internet, 14(1), 219–234. https://doi.org/10.1002/poi3.275
World Economic Forum. (2024). Jobs of tomorrow: Large language models and jobs. White Paper. https://www.weforum.org/publications/jobs-of-tomorrow-large-language-models-and-jobs/
World Government Summit and Accenture. (2024). Generative AI & government: How can government agencies responsibly navigate the AI landscape to implement high impact generative solutions? https://www.worldgovernmentsummit.org/observer/reports/2024/detail/generative-ai-government-how-can-government (last consulted 16 July)
Yeung, K. (2022). The new public analytics as an emerging paradigm in public sector administration. Tilburg Law Review, 27(2), 1–32. https://doi.org/10.5334/tilr.303
Figure 1. Activities of the public administration.

Figure 2. Scenarios.

Figure 3. Regulatory toolbox.