
Regulatory sandboxes for AI in the majority world: A learning-centric approach to legal adaptation

Published online by Cambridge University Press:  10 December 2025

Armando Guio Español*
Affiliation:
Berkman Klein Center for Internet & Society, Harvard University, Cambridge, MA, USA
Pascal D. Koenig
Affiliation:
Department of Political Science and Public Administration, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
*
Corresponding author: Pascal D. Koenig; Email: p.d.koenig@vu.nl

Abstract

Regulatory sandboxes for Artificial Intelligence (AI) are designed to address challenges of rapid technological change. AI innovations create an acute need for learning about what regulation is suitable for enabling innovation while dealing with technological risks. This article argues that regulatory sandboxes should be analyzed primarily as mechanisms for enhancing policymakers’ understanding of technologies such as AI, rather than solely as spaces for experimentation that promote innovation. It discusses the role of regulatory sandboxes in facilitating policy learning that can complement the long-term learning processes of the traditional policy cycle. Six case studies serve to illustrate sandbox elements for enabling collaborative experiential learning in contexts in which the absence of AI regulation makes accelerated policy learning particularly valuable. Looking at the design and governance of regulatory sandboxes from Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand, learning elements related to the technology and consequences for closing legal lags emerge as critical components.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

A fast pace of technological innovation can reduce legal certainty and create legal lags as existing rules no longer suit the state of technological possibilities and risks. In recent years, advances in Artificial Intelligence (AI) in particular have created major challenges for policymakers and regulators who seek to balance promoting innovation in AI with guaranteeing adequate safety from risks linked to AI uses. This article explores the potential of regulatory sandboxes as a tool for experimental regulation to address the challenges posed by advances in AI. It examines their role in facilitating policy learning, complementing the learning processes that are inherent in traditional policymaking. Although scholarly and practitioner accounts point to several possible objectives of regulatory sandboxes (e.g., Allen, 2019; BMWi, 2019; Buocz, Pfotenhauer & Eisenberger, 2023; Ranchordás, 2021a, 2021b), promoting innovation is commonly the first objective and features as the key standard of success of sandboxes (Ruschemeier, 2025, p. 320). Regulatory sandboxes count as an ‘innovation-friendly’ form of regulation (Ranchordás & Vinci, 2024). In the European Union (EU) AI Act, they also feature under a chapter called ‘Measures in Support of Innovation’. However, this article emphasizes the importance of viewing regulatory sandboxes primarily as a tool for enhancing the design and implementation of AI governance measures based on learning processes within a sandbox.

It is this procedural dimension of sandboxes that is essential for closing legal lags – whereas innovation is a highly contingent substantive outcome. Promoting learning processes through regulatory sandboxes is also particularly important for the governance of AI systems. AI is a moving target that creates considerable regulatory uncertainty and makes future-oriented regulation acutely relevant and valuable (Ruschemeier, 2025, pp. 318–19). A central promise of sandboxes is that they enable regulators to better deal with the challenges of rapid technological change. They can help to govern emerging technologies by making regulation more agile and adaptive in response to conditions marked by volatility, uncertainty, complexity, and ambiguity (see, e.g., Armstrong, Gorst & Rae, 2019; Marjosola, 2021; Tan & Taeihagh, 2021). With this contribution to adaptive regulation and anticipatory governance (Guston, 2014), regulatory sandboxes can provide answers to two important challenges that arise with advances in AI. The uncertainty surrounding AI innovations underscores the importance of (a) proactive learning and the rapid generation of evidence, while (b) information asymmetries heighten the need for robust collaboration among stakeholders.

First, regarding the relevance of proactive learning processes, AI introduces uncertainty about the most appropriate regulatory approach: it is a general-purpose technology with potential benefits and risks across various domains, and its capabilities and associated risks continue to evolve (Taeihagh, 2025). Most existing laws were not designed to accommodate the novel solutions enabled by AI systems. For instance, healthcare laws that mandate direct patient contact for diagnosis may inadvertently hinder the adoption of AI-driven diagnostic tools, as these would automatically fall under regulations governing telediagnosis (Buocz et al., 2023, p. 374). As a result, existing regulations may impede beneficial and desirable applications of AI. At the same time, unanticipated behaviours of AI systems, such as deception by large language models (Park, Goldstein, O’Gara, Chen & Hendrycks, 2024), highlight the need for rigorous testing to identify regulatory gaps and ensure appropriate safeguards. A second challenge to which regulatory sandboxes potentially respond is that AI-based systems are often opaque and create significant information asymmetries between their developers and other stakeholders, including consumers and policymakers. To address this asymmetry, an effective AI governance framework needs to include mechanisms that enhance the collective understanding of AI across its various manifestations and contexts of application (Gasser & Almeida, 2017).

Regulatory sandboxes can play a crucial role in addressing the two described challenges by institutionalizing the testing of both emerging technologies and regulatory frameworks (Guio, 2024) while fostering collaboration among diverse stakeholders (Fahy, 2022; Ranchordás & Vinci, 2024). As Sunstein (2022) suggests, this approach is particularly valuable when regulating highly complex and technical domains such as AI. Consequently, regulatory sandboxes are a valuable tool for regulators, especially in jurisdictions with limited access to scientific expertise and for more advanced AI systems. Emphasizing the dimension of learning, we argue that regulatory sandboxes for AI can serve as a mechanism to complement the inherently slow learning processes within regulatory bodies and the traditional policy cycle. This can occur in two ideal-typical ways. First, regulatory sandboxes can operate within the framework of already-existing AI-specific regulations. This approach aligns with the EU model of AI governance, where sandboxes provide a controlled space for testing innovation, particularly of high-risk AI systems, within an otherwise stringent regulatory environment.

Second, and the focus of this article, is the use of regulatory sandboxes in the absence of comprehensive AI regulation. When no overarching AI-specific rules are in place, sandboxes become particularly valuable as tools for evidence gathering and iterative learning. By enabling experimentation before formal regulatory decisions are made, they help inform ex ante analysis and guide the development of future regulatory frameworks (Sunstein, 2022). Prioritizing this perspective leads to a sandbox design that emphasizes the learning process over innovation. The present article thus also speaks to the widely debated and criticized relationship between regulatory sandboxes and innovation (e.g., Peirce, 2018). To explore this second type of regulatory sandbox, we examine several cases from non-EU countries. Specifically, we analyze the design and governance of six regulatory sandboxes in Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand, evaluating key elements that facilitate learning about AI technologies and their regulatory implications. These cases demonstrate how sandboxes can be structured to reduce uncertainty and mitigate information asymmetries in technical knowledge. Moreover, they illustrate how regulatory learning has become a priority for authorities in various jurisdictions, not only as a means of refining policy responses but also as a strategy for enhancing institutional knowledge and regulatory capacity.

The global perspective adopted below complements the current focus on EU regulation in the literature. This focus is understandable given that the EU was the first to adopt a comprehensive and binding regulation specifically for AI, one that also includes provisions for regulatory sandboxes. Yet the EU AI regulation has also been criticized on various grounds – such as its emphasis on industry self-certification, weak requirements for the transparency of AI, and the governance structure it establishes (e.g., Almada & Petit, 2025; Busuioc, Curtin & Almada, 2023; Wörsdörfer, 2024) – with criticisms extending to the way in which the AI Act governs regulatory sandboxes (Buocz et al., 2023; Truby, Brown, Ibrahim & Parellada, 2022). The analysis below can help to broaden current debates by pointing to approaches toward the experimental regulation of AI beyond the EU.

The article is structured as follows. Section 2 reviews the role of regulatory sandboxes in promoting innovation before developing a perspective that centres on policy learning as an objective of these sandboxes. Section 3 describes the selection of cases used to illustrate the role of learning in regulatory sandbox design and governance and analyzes the cases based on extracted key themes. Section 4 concludes with a summary and an outlook.

2. Regulatory sandboxes: from promoting innovation to accelerating learning

2.1. Regulatory sandboxes as innovation incubators

We follow Johnson (2022, p. 4) in understanding regulatory sandboxes as ‘a distinct type of regulatory instrument within the larger categories of flexible, experimentalist, and anticipatory modes of governance’. They are characterized by criteria for entry, restricted discretion by the regulator, rules for risk management, an orientation towards certain substantive success criteria, and a structured exchange between regulators and the objects of regulation (Johnson, 2022, pp. 5–7). Regulatory sandboxes are an increasingly widespread form of regulatory experimentation that governments employ for the temporary live testing of technological innovations under increased oversight.

This form of experimental regulation has gained considerable momentum in less than a decade. After the UK’s Financial Conduct Authority established the first regulatory sandbox for financial technologies (fintech) in 2015, many regulators around the globe have embraced this instrument to test technological innovation and gain evidence that can inform policymaking. Whereas a World Bank report from 2020 covered experiences from 73 fintech sandboxes in 57 countries (World Bank, 2020), a study by Markellos et al. (2024) mapped 199 cases across 92 countries, covering various sectors such as telecommunications, energy, and AI. These numbers are likely to increase further, especially given the growing interest in regulatory sandboxes for AI.

Major possible beneficial impacts of these sandboxes are promoting innovation, greater legal certainty for businesses, and increased knowledge for policymakers, possibly leading to adapted regulation (Ranchordás, 2021b). Policymakers place strong hopes in regulatory sandboxes in view of their potential to ‘improve the regulatory governance of innovation and accelerate the deployment of innovative solutions’ (Ranchordás & Vinci, 2024, p. 2). It is notable that references to the objectives of regulatory sandboxes commonly mention innovation and access to markets first (e.g., Buocz et al., 2023; OECD, 2023; Ranchordás, 2021b), whereas learning and producing insights seem to be subordinate aspects (see also Ruschemeier, 2025). This is also palpable in statements that frame learning within a regulatory sandbox as instrumental for fostering innovation. As Nabil (2024, p. 244) notes: ‘At their best, regulatory sandboxes can promote technological innovation by attracting innovative companies and helping policymakers design an evidence-based, iterative approach to regulating emerging technologies’. In this quote, producing evidence is not so much a goal in itself as a means of supporting innovation.

A focus on the objective of promoting innovation and business creation is also recognizable in empirical research on the effects of regulatory sandboxes. The jury is still out, however, on whether sandboxes actually realize this objective, with the available evidence still scarce and mixed. An evaluation of the UK fintech sandbox finds that it increased the capital raised by participants by 15% and increased the probability of raising capital by 50% (Cornelli et al., 2024). Another study of the same regulatory sandbox, however, concludes that this regulatory instrument negatively affected the financial performance of digital banks due to higher compliance and efficiency costs (Washington, Rehman & Lee, 2022). Hellmann et al. (2024) revisited the earlier analysis of the UK fintech sandbox and showed that, when using statistical estimation techniques that account for sandboxes having heterogeneous effects on acquired funding, the overall positive relationship is attributable to a single sandbox. In their model specifying heterogeneous treatment effects, the overall contribution of regulatory sandboxes to raising capital becomes statistically insignificant. At the same time, the authors do point to indirect effects of a sandbox on the broader fintech ecosystem, as entry into a sandbox seems to increase the funds raised by other high-growth companies operating in the same industry sector (Hellmann et al., 2024).

Mixed evidence also emerges from studies that have looked at more than one country. A study comparing the fintech sandboxes in the UK and Singapore yields no clear evidence that they raised more capital in the first two years (Chen, 2020). A cross-country analysis of 18 countries finds that regulatory sandboxes have led to an increased influx of venture capital into the fintech venture ecosystem (Goo & Heo, 2020). The authors arrived at this finding by matching nine countries that introduced a regulatory sandbox with nine countries without one, based on similar National Competitiveness Index scores. Evidence on regulatory sandboxes is furthermore limited because it largely comes from the realm of fintech. An important exception is the study by Beckstedde et al. (2023), which analyzes the experiences of regulatory sandboxes in the energy sectors of eight European countries. The authors conclude that the regulatory derogations used in the studied sandboxes ‘validate the idea of using regulatory sandboxes to promote innovation’. Based on their findings, they further argue that the regulatory scope of sandboxes should be as open as possible in order to best realize the goal of promoting innovation.

Overall, it appears that promoting innovation and new business models is the ultimate standard by which the success of sandboxes is judged. If they fail to foster innovation, they do not appear to merit the effort. However, promoting innovation is a difficult standard since innovation is ‘an elusive concept which is hard to define, measure, and thus regulate’ (Ranchordás & Vinci, 2024, p. 15). Given that it is uncertain how to reliably support innovation, it is unclear how a regulatory sandbox can contribute to it in a targeted, predictable fashion. At the same time, designing a sandbox to foster innovation may invite concerns about regulators being involved in the innovation process themselves. A notable example is how certain authorities in the United States have interpreted this methodology. In 2018, Securities and Exchange Commissioner Hester Peirce expressed concerns regarding the design and implementation of regulatory sandboxes, particularly the idea of regulatory authorities actively participating in the innovation process. According to Peirce, sandboxes risk becoming spaces where regulators exert influence over the design of a technology, thereby limiting innovation and restricting the ability to propose new models (Peirce, 2018). A related critical assessment of regulatory sandboxes comes from Germany’s Federal Financial Supervisory Authority (BaFin), which has argued against the use of sandboxes because their goal of promoting innovation would not fall under its mandate and would not require the authority’s involvement (Ruschemeier, 2025, p. 328).

These perspectives, however, present sandboxes as creating a space in which the design of a technology and the innovation process itself are debated. In the case of AI, for instance, this interpretation assumes that regulatory authorities would play a direct role in shaping the operation or training of these models. This is hardly a suitable mandate for regulators. Indeed, regulatory authorities do not possess the expertise and technical capabilities needed to engage in such interactions; in fact, regulators are struggling to keep pace with the rapid evolution of technology. Rather than seeking to shape innovation directly, many authorities view regulatory sandboxes as mechanisms for learning – allowing them to better understand emerging technologies and their implications for regulatory frameworks.

2.2. Regulatory sandboxes as a tool to accelerate learning within the policy cycle

The design and governance of a sandbox cannot guarantee that it leads to more innovation. A regulatory sandbox can, however, be systematically created and operated in a way that contributes to learning processes which help to adapt to changing conditions, specifically those brought about by technological developments. Regulatory sandboxes are a form of adaptive or agile regulation that is attuned to conditions of ongoing and rapid technological change. Such agile regulation is proactive in anticipating possible consequences, responsive in updating rules quickly and dynamically, and based on the systematic use of evidence, which enables adaptivity and learning (Allen, 2019; Guston, 2014; Howlett, Capano & Ramesh, 2018; Ranchordás, 2021a, 2021b). In this way, sandboxes can help with evaluating the suitability of legal frameworks, save time by advancing learning processes, and inform policymaking (BMWi, 2019). Emphasizing this adaptive dimension of regulatory sandboxes means shifting the focus from promoting innovation to fostering learning processes through sandboxes.

Empirical evidence on the impacts of regulatory sandboxes in terms of the learning they engender is still very scarce, but available case studies indicate that these regulatory tools increase knowledge and capacities. Based on interviews with regulators involved in sandboxes in five jurisdictions, Alaassar et al. (2020) conclude that the interaction between regulators and regulatees improves their understanding of the constraints of existing regulation, technological risks, and regulatees’ need for support. Macrae and Ansell (2024) further conclude, from their ethnographic case study of a UK regulatory sandbox for AI applications in medical diagnostic services, that the sandbox establishes a generative space for collaboration, learning, and innovation. The authors stress that the regulatory sandbox created an adaptive space for exploring and learning and that the commitment to learning was rooted in an awareness of organizational ignorance and a need to close regulatory knowledge gaps (Macrae & Ansell, 2024, pp. 20–21).

The flexible, dynamic, and open approach of a sandbox is diametrically opposed to the idea that policymakers and regulators are generally in a position to set rules suitable for governing technological innovations. Rather, regulatory sandboxes as a form of agile regulation are an expression of ‘regulatory humility’ as described by Dunlop and Radaelli (2016), i.e., the recognition of the contingency of outcomes following from regulatory choices and a reflection on the limits of policymakers’ and regulators’ control. As Haeffele and Hobson (2019, p. xiii) note, policies can fall short of their intended effects due to unacknowledged contingency and complexity of problems, making humility a healthy and prudent approach to regulation. This also means that regulators and policymakers must themselves embrace the role of learners to make the experimenting mode of regulatory sandboxes work – a role that can be unfamiliar and a source of discomfort among regulators (Macrae & Ansell, 2024, p. 21).

Unlike promoting innovation, the emphasis on learning responds directly to conditions of complexity, uncertainty, and volatility, as such conditions necessitate quicker learning and a broader exploration of options. Technological innovations with major transformative impacts on society can significantly contribute to volatile conditions marked by increased uncertainty. These conditions present special challenges for regulators, as they will lack knowledge regarding how innovations will affect markets and society. Technological innovations may also span different areas and blur the lines between markets and sectors (e.g., the automotive industry and mobility services), and the high pace of change may quickly render extant regulatory definitions and categories obsolete. The governance of emerging technologies thus calls for responding more flexibly to the regulatory problems these technologies may pose (Stilgoe, Owen & Macnaghten, 2013). To develop sound policies, as Sunstein (2022) notes, regulators must go beyond relying solely on expert judgment and instead prioritize systematic learning and real-time evidence gathering. The objective should be to assess the real-world impact of policies in advance, rather than retrospectively, allowing for more adaptive and informed decision-making (Sunstein, 2022).

These considerations apply particularly to the regulation of AI systems. AI is a general-purpose technology that can lead to new opportunities and risks in manifold areas and across different sectors (Ruschemeier, 2023). The complexity and opacity of AI applications lead to information asymmetries, with regulators struggling to understand the implications of AI uses. Innovations in AI have also occurred at a fast pace in recent years, creating uncertainty about the suitability of regulation and possible legal gaps. They thus create an acute need for learning and for gaining a better understanding of whether and how certain rules may or may not be suitable – as these rules can hinder socially beneficial innovations or leave important novel sources of risk unaddressed.

Two characteristics of regulatory sandboxes are particularly important for facilitating the kind of learning that can help to close legal gaps in AI regulation. First, by establishing collaboration between stakeholders, they can contribute to reducing information asymmetries. Close interactions between private sector actors and regulators can play an important role in meeting regulators’ information needs, reducing knowledge asymmetries, and fostering the necessary understanding of technologies (Fahy, 2022; Parenti, 2020). As Sunstein (2022) highlights, when regulators face significant gaps in information – often referred to as ‘the knowledge problem’ – they can benefit from tools that harness expertise from the private sector. A well-designed regulatory system should aim to bridge this gap, including through mechanisms of advance testing that generate insights before formal regulatory decisions are made.

Some scholars have been vocal about the potential use of sandboxes as instruments of regulatory capture (e.g., Ranchordás & Vinci, 2024). However, the status quo is already one of concentrated technical knowledge that puts regulators at a disadvantage and may create an environment that AI developers actively seek. Specifically, under circumstances that can be described as too complex to regulate (Neuman, 2023), the result may well be limited regulatory oversight due to the inherent complexity of the technology and the knowledge asymmetries between innovators and regulators. Furthermore, the concrete governance of a regulatory sandbox is crucial for preventing undue industry influence. Openness and the active engagement of various stakeholders, for instance, can reduce the risk of regulatory capture (Parenti, 2020). Ultimately, to achieve their intended impact, regulatory sandboxes must be designed and implemented effectively, ensuring that they facilitate meaningful learning and robust collaboration. Their design determines the extent to which they can mitigate risks, including consumer harms, regulatory capture, and market distortions, ultimately shaping the effectiveness of regulatory adaptation.

A second characteristic of regulatory sandboxes that is crucial for promoting learning is that they can involve the proactive testing and trialling not only of technology but also of the rules for regulating it. Again, the extent to which a sandbox is geared towards such experimentation is a question of its concrete design and governance (Bijkerk, 2021). By directly supervising the trialling of technology in a controlled setting, regulators can gain the hands-on experience and knowledge necessary to evaluate whether certain rules are suitable for governing AI applications. Within a sandbox, it becomes possible to test technology and also to experiment with and adapt rules, gaining insights about their suitability. In this regard, regulatory sandboxes have strong parallels to the long-standing practice of clinical trials (Buocz et al., 2023) and to the real-world laboratories that have been used in research stressing experimentation, evaluation, and learning (Bergmann et al., 2021).

The characteristics described above distinguish regulatory sandboxes from traditional regulation following the policy cycle. The long-term horizon of learning in the policy cycle and the idea of at least somewhat rational and linear policy change are at odds with the open learning process of regulatory sandboxes, in which regulatory frameworks are tested and experimented with. Traditional regulation also tends to lag behind technological developments as it is based on the given state of knowledge, and regulation is updated only after waiting to see what the impacts of technologies are. As Ringe and Ruof (2020, p. 617) note, ‘[i]n regular circumstances, the regulator or legislature needs to assess risks on the market. By the time the findings of this process reach the legislative stage, the stakes are usually high, or risks are already on the verge of materializing’.

Regulation along the lines of the common policy cycle thus easily fails to accommodate the pace of technological change in certain instances. While policy learning does occur via the policy cycle, it may simply be too slow. Regulatory sandboxes establish a learning process within a very different setting and timeframe. Although they do not conform to the policy cycle, they can very much complement it by accelerating the learning processes within it. Regulatory sandboxes essentially advance the process of arriving at new relevant insights, rather than waiting for impacts to show and taking them as an occasion to update existing rules or create new ones. Regulatory sandboxes can thus operate as policy learning accelerators that form an integral part of the policy cycle.

When gaining insights about the suitability of regulation, an important factor concerning AI applications is that there will commonly not exist any binding rules specifically for regulating AI. To date, only a few jurisdictions, such as the EU or China, have adopted binding legislation concerning the development, deployment, and use of AI systems. Depending on the circumstances, regulatory sandboxes for AI can complement the policy cycle in two different ways.

First, regulatory sandboxes can be used to complement existing binding rules concerning AI. In this case, they are an instrument that introduces flexibility and adaptivity to compensate for the rigidity of extant binding regulation. This scenario corresponds to the European Union’s approach to adopting regulatory sandboxes for AI. Article 57 of the EU AI Act specifies what kind of learning is envisaged with these sandboxes. The learning objective is evident in one of the intended main outcomes of these sandboxes: the production of results reports that present the key learning outcomes from each initiative. Notably, paragraph 9 of the same Article 57 states that learning is intended not only for entrepreneurs or participating tech companies but also for regulatory authorities. This is emphasized by the statement that one of the objectives of sandboxes created under this article is ‘evidence-based regulatory learning’ (own emphasis).

The European Directorate-General for Research and Innovation defines regulatory learning as the collection and use of any evidence or knowledge relevant to current or future regulatory policy, generated in the process of experimenting with an innovative solution (Molinari, Rachinger, Kordel, Hennen, & Van Roy, 2022). The collection of this type of evidence has been made explicit in the process of designing sandboxes in Europe, such as in the cases of Spain and Germany. The AI sandbox project led by the Secretariat of State for Digitalization and Artificial Intelligence in Spain has emphasized the importance of practical learning so that authorities can support the development of standards, guidance, and tools at both the national and European levels (SEDIA, 2025). As a regulatory instrument aligned with the Spanish digital strategy, España Digital 2025, the AI sandbox is a standardized framework and controlled testing environment ‘designed to support innovation while ensuring compliance with the AI Act, particularly focusing on high-risk AI systems’ (Lumsa, 2025).

In Germany, efforts have focused on the development of experimentation clauses that facilitate learning through sandbox environments. The objective is to balance the ability to experiment with compliance with existing legal frameworks, thus achieving regulatory flexibility that makes sandboxes functional and efficient (Otter, 2024). The German government has made the development of experimentation clauses a priority within its sandbox policy, emphasizing their role in enabling legislators to gain early insights into innovations, assess their real-world impacts, and identify appropriate legal frameworks. These insights, in turn, support the evolution of broader regulatory systems (BMWK, 2024). Notably, German authorities stress that experimentation clauses serve a dual purpose: (i) facilitating regulatory experimentation and (ii) promoting regulatory learning (BMWK, 2024, p. 11).

Under the EU regulatory framework, it is thus possible to achieve flexibility that sustains regulatory learning. However, this flexibility and learning occur against the backdrop of a comprehensive EU regulation of AI, whose most extensive requirements apply to AI systems in the high-risk class, for which reducing uncertainty is especially important. Furthermore, the possibilities of experimenting within a regulatory sandbox are limited by the fact that the AI Act does not specify possibilities of exemptions from applicable legislation (Buocz et al., 2023) and that sandbox participants remain fully subject to liability (Truby et al., 2022). In the EU regime of regulatory sandboxes for AI, these are thus mainly a way of testing how existing rules fare with new technological developments and innovations, particularly in the domain of high-risk applications.

A different, second constellation exists where regulatory sandboxes are established in the absence of any binding regulation specifically for AI. The sandboxes then serve the purpose of producing evidence that can be used to address the lack of a legal framework for AI governance. They speed up and advance the process of understanding the impacts of the technology and possible rules for governing it before governments and regulators take action to adopt binding rules. The role of learning is thus different from the first scenario described above. It is less constrained by existing regulation and can embrace greater openness and flexibility in the process of obtaining insights that can inform regulatory action. The need for learning is also arguably more acute given that regulation is lacking. One thus has to look beyond the EU to find examples of this second case. In the next section, we therefore examine several cases of regulatory sandboxes from different continents and assess them with regard to their role in engineering policy learning.

3. Case studies

3.1. Case selection

To shed light on how regulatory sandboxes for AI are designed to promote policy learning in contexts where AI regulation is lacking, we draw on case studies from the so-called majority world, i.e., outside the EU, North America, and Australia. Countries in that part of the world have lower technological capabilities and regulatory capacities. They also lack the market power of the EU, which can compel foreign businesses to play by domestic rules. These conditions make them more sensitive to adopting regulation that could be overly comprehensive and strict and thus stifle innovation and deter investment. Selecting relevant cases is not a straightforward task, as regulatory sandboxes can be a form of window-dressing to signal innovativeness. We have therefore chosen to focus on examples from diverse countries that meet the following three criteria. First, the cases reflect established commitments to, or implementations of, regulatory sandboxes as documented in official reports, public policies, or international organization publications. Second, they have been explicitly proposed as regulatory sandboxes for AI. Third, they offer insights into the key motivations behind their creation and the objectives pursued by the authorities and organizations promoting them. Based on these considerations, we have selected the cases of regulatory sandboxes for AI shown in Table 1.

Table 1. Selected cases of regulatory sandboxes for AI

The regulatory sandboxes planned or implemented in Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand fulfil the criteria outlined above. They are a selection of diverse cases from countries with different legal systems and innovation capabilities. In the analysis below, these cases illustrate to what extent learning has become a priority for authorities worldwide in the development of regulatory sandboxes. The analysis is based on a reading of the publicly accessible documentation of the selected regulatory sandboxes (for an overview of the sources used in the analysis, see Appendix A1). This material was analyzed with regard to the conceptualization and practical application of learning, specifically how the various sandboxes institutionalize the testing of technologies and regulatory rules and establish collaboration between stakeholders. Based on a careful reading of the available material, we identify larger themes and point to commonalities and differences among the examined cases. In this way, the analysis offers insights into the regulatory adaptability achieved through sandbox frameworks across different legal, institutional, and cultural contexts.

3.2. Analysis

Mutual learning as a core objective. One of the fundamental expectations tied to the examined regulatory sandboxes is mutual learning. While authorities aim to gain insights from these experimental frameworks, they do not position them solely as learning opportunities for regulators. Instead, selected participants are also expected to benefit from the process. In certain jurisdictions, such as Colombia and Mauritius, authorities explicitly recognize the need to deepen their understanding of emerging technologies by engaging directly with industry actors and observing their operational dynamics. In Thailand, the establishment of testing parameters is envisioned as a collaborative process, albeit guided by an initial proposal from the regulatory authority. In this way, rather than merely fostering innovation, these cases suggest that the authorities seek to understand the types of technologies being developed in their countries and generate insights that complement their AI regulatory initiatives.

Among the jurisdictions emphasizing learning, Mauritius stands out for its explicit adoption of a ‘learning by doing’ approach. This strategy enables regulatory authorities to acquire a deeper understanding of advanced AI systems and their implications for supervisory responsibilities. By actively engaging with AI-driven technologies in real-world settings, regulators in Mauritius aim to enhance their capacity to oversee and shape the development of these systems effectively. Cases such as those of Colombia and Brazil furthermore clearly illustrate that learning is not a one-way process. The participants developing these innovations can also learn from regulators. In these instances, it becomes evident that some participants may lack awareness regarding the application of regulations in the development or deployment of these technologies. Regulatory authorities, therefore, seek to impart this knowledge and share insights to ensure compliance.

The case of Colombia in particular emphasizes the contribution of mutual learning to facilitating the responsible use of AI. The Colombian AI sandbox was initiated to demonstrate how privacy-by-design principles could enable novel AI solutions while adhering to the country’s data protection regulations. In this setting, mutual learning is again a shared interest. Supervisory agencies can gain a deeper understanding of innovation, while technology developers can engage more effectively with regulators and benefit from their capacity for regulatory innovation. In this way, two kinds of innovation, technological and regulatory, are shared and learned.

This emphasis on mutual learning is very much in line with conclusions that Ringe and Ruof (2018) derived from their analysis of the EU regulatory framework governing so-called robo-advisors, automated financial advisory services operating without human intervention. The authors recommend the use of a ‘guided sandbox’ to foster mutual learning between firms and regulators, thereby also reducing regulatory uncertainty for participants. Overall, learning processes, and particularly mutual learning, are a central part of the design of the examined regulatory sandboxes. Furthermore, these sandboxes facilitate the learning process based on rules that allow for at least a certain degree of flexibility and experimentation within the setting of a regulatory sandbox.

Legal exemptions and flexibility provided to promote learning. Among the selected cases, it is particularly interesting to observe how regulatory authorities seek to establish mechanisms for regulatory flexibility and to gather valuable insights from these spaces. Across the different cases identified, there is a clear intent to provide legal exemptions that facilitate the ability of both entities and innovators to deploy technology and experiment with new compliance mechanisms. In this regard, three key types of mechanisms emerge: (i) flexible legal interpretations; (ii) regulatory flexibility granted through legal authorization; and (iii) legal flexibility conferred by authorities with the discretion to do so – with the latter corresponding most closely to the EU’s approach to regulatory sandboxes, which does not specify exemption mechanisms and gives considerable legal flexibility to member state authorities in designing their sandboxes.

Notably, there is no single model for implementing mechanisms that introduce flexibility. Authorities often seek input from participants to determine the most suitable approach, as seen in cases such as Thailand and Mauritius. In contrast, countries like Colombia and Brazil have leveraged their ability to interpret existing legal frameworks to test different regulatory possibilities, thereby facilitating the process without requiring a pre-existing rule that explicitly authorizes such experimentation. In Mexico and Rwanda, in turn, the expectation is that the government will create dedicated spaces and define the norms or guidelines under which experimentation can take place. Undoubtedly, some authorities feel they have the necessary capabilities to manage the liabilities they may face from administering and overseeing a regulatory sandbox, while others believe they require greater legal support or prefer to delegate this responsibility to the participants.

The role of reporting mechanisms in knowledge transmission and standards setting. A crucial element in facilitating knowledge transfer within regulatory sandboxes is the systematic reporting of results and insights gathered throughout the process. In Thailand, the AI sandbox initiative places significant emphasis on structured reporting mechanisms, specifying the type of information that regulatory authorities expect from participants. Similarly, Brazil’s regulatory sandbox framework integrates continuous engagement through working groups, fostering a dynamic exchange of ideas between regulators and industry stakeholders. The overarching goal of these reporting mechanisms is to generate knowledge that extends beyond the immediate sandbox participants. Regulatory authorities seek to consolidate best practices, inform future policy developments, and refine regulatory frameworks. For instance, in Rwanda, authorities anticipate that lessons derived from the sandbox experience will directly contribute to the smoother implementation of AI governance mechanisms.

Colombia’s regulatory sandbox has played a pivotal role in advancing the country’s data protection landscape. Notably, it facilitated the issuance of Colombia’s first official guidelines on data protection and AI (External Circular 002, 2024), reinforcing the principle of privacy by design. This regulatory initiative mandates enhanced anonymization techniques and requires detailed documentation of AI system design processes. By embedding these safeguards within the sandbox framework, Colombia’s approach underscores the potential of regulatory sandboxes as instruments for policy innovation, aligning with the country’s statutory framework while fostering responsible AI governance.

Collaboration in effective learning, exit reports, and the requirement of trust. Regulatory sandboxes in jurisdictions such as Mexico and Rwanda prioritize public-private collaboration, fostering new modes of engagement between regulators and industry participants. This approach aligns with contemporary regulatory scholarship, which emphasizes that modern regulatory frameworks increasingly depend on reputation, information-sharing, and sustained interaction with market actors (Fahy, 2022; Ranchordás & Vinci, 2024). Some regulatory sandboxes, such as the one proposed in Mauritius, explicitly articulate their goal as creating ‘an ideal space to encourage the development and testing of innovative solutions or other systems and collaboration between the public and private sector’ (MPSAR, 2021).

This type of collaboration is also evident in the case of Mexico. The British Embassy in Mexico, in partnership with the Mexican Science and Innovation Network, organized a series of roundtable discussions to share experiences and lessons learned on AI. These discussions aimed to provide an updated overview of the state of AI in Mexico and to foster dialogue among legislators, regulators, and key stakeholders from both countries (la Peña Sissi et al., 2024). This effort led to the development of a regulatory sandbox report, which forms part of a project funded by the Technology Collaboration and Standards Fund of the United Kingdom’s Foreign, Commonwealth & Development Office (FCDO). Through promoting technology standards and international collaboration, particularly in developing countries, this UK fund supports initiatives that leverage technology for development, including projects focused on AI and cybersecurity (UK Government, 2023). FCDO aims to safeguard UK and partner interests in critical technologies, ensure supply chain resilience, address emerging threats, and help position the UK as a global science and technology superpower (Foreign, Commonwealth & Development Office (FCDO), 2025). The effectiveness of these regulatory experiments, therefore, hinges on fostering trust among stakeholders – ensuring that industry actors view regulatory engagement not merely as a compliance obligation but as an opportunity for co-regulatory learning and substantive collaboration.

A critical factor in the success of regulatory sandboxes is access to information, as their effectiveness depends on the availability of relevant data within each experimental setting. Recognizing this requirement, regulatory authorities have established frameworks to collect and present information systematically, often requiring participating entities to submit data that contributes to broader regulatory learning. This necessity is underscored by Brazil’s data protection authority, which has emphasized the importance of fostering algorithmic transparency. Brazil’s General Data Protection Law further reinforces the principle of transparency, mandating that AI systems’ inner workings – including algorithms and data processing techniques – be made visible and explainable.

At the same time, the Brazilian authority acknowledges significant constraints, particularly those related to the protection of intellectual property and trade secrets, which often prevent entities from disclosing critical information. In response, regulatory sandboxes aim to establish mechanisms that facilitate data access while balancing legal and proprietary concerns (ANPD, 2023). Yet, meaningful information exchange within these frameworks ultimately depends on a foundation of trust and mutually agreed-upon principles. Without such trust, the most valuable insights may remain inaccessible, thereby significantly limiting the impact of these experimental environments.

Across multiple jurisdictions, regulatory sandboxes increasingly serve as a mechanism for acquiring AI-related knowledge. In the cases examined, particularly in their initial phases, the sandbox design appears to prioritize this learning function. Only once authorities gain meaningful insights do they consider expanding regulatory experimentation. Furthermore, there is a growing emphasis on producing comprehensive reports that systematically document the learning experiences of regulatory sandboxes. Nearly all of the identified sandboxes emphasize the development of a final exit report, white paper, or feedback report to document the information-sharing process and key findings. The reports proposed by authorities in some of these cases serve a dual function: they not only provide critical insights for regulatory bodies but also contribute to broader policy discussions on AI governance, standard-setting, and supervisory mechanisms. Importantly, these reports extend beyond internal regulatory use, offering valuable resources for external stakeholders – particularly when sandboxes are embedded within national AI or innovation strategies.

However, despite the growing emphasis on documentation and transparency, no final exit reports from these sandboxes have been identified. This may be due to their early stage of implementation or delays in report preparation. Nevertheless, there is evidence of regulatory guidance and decisions emerging from these environments. For instance, in Colombia and Mauritius, authorities have issued guidelines on the responsible use of AI and its adoption in strategic sectors, such as Mauritius’ Chatbot for the Human Resource Management Manual (Circular Letter 20 of 2021). This indicates that regulatory sandboxes are producing tangible outcomes, even if these insights are not always formally documented in comprehensive reports. Constraints related to time and resources likely pose challenges to systematically recording all learnings through official reports and related initiatives.

In sum, based on the analysis of the cases described in Table 1, we have identified elements that highlight learning as the focus of these proposals, rather than the participation of regulators in an innovation process. The envisaged learning processes are also tightly linked to a close collaboration between regulators and stakeholders. The account of regulatory sandboxes described above thus overall suggests that many of the tools and methodologies proposed for the design of these sandboxes will gain from delving deeper into the regulatory learning process rather than focusing exclusively on how they can support innovation.

4. Conclusion

This article aims to support the development of a more robust line of inquiry into the learning processes that take place within regulatory sandboxes and into what regulators themselves learn. This means moving beyond the traditional view that sees these spaces merely as arenas for technology exploration and, above all, as a means of promoting innovation. We have argued above that for regulatory sandboxes to contribute to closing legal lags and to add value over other regulatory instruments, their capability to foster learning processes should be seen as the key criterion for the design and operation of sandboxes. Whether sandboxes advance the process of gaining insights that inform regulation on emerging technologies is also important in terms of the cost-effectiveness of this regulatory instrument. There are significant costs associated with regulatory sandboxes (Appaya & Jeník, 2019), while alternatives also exist, like innovation hubs, which require less effort. The scope of regulatory sandboxes may also be too small to truly promote innovation. Yet, they can still institutionalize focused learning processes that can very much inform regulatory action – which could easily merit the effort of a sandbox.

However, to realize such learning, regulatory sandboxes must be designed to enable the testing of both technologies and regulatory rules, and they must facilitate effective collaboration that serves to reduce information asymmetries for regulators. Importantly, the faster learning that regulatory sandboxes realize can complement the learning within the traditional policy cycle that takes place over the long term. Taking AI regulation as a particularly important regulatory domain in this regard, we have argued that learning through sandboxes is especially relevant where no comprehensive AI regulation exists yet and where policymakers and regulators must be especially careful in drafting rules that will influence the development of and investment in AI and the overall competitiveness of their countries. Starting from these considerations, we have examined six AI sandbox initiatives from majority world countries: Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand. Through this research, we also seek to give greater visibility to the regulatory sandboxes on AI emerging in the majority world. Studying these cases can complement knowledge about the EU efforts to establish regulatory sandboxes, which take place against a very different backdrop, with an existing comprehensive regulatory framework for AI already in place.

The analysis of the documentation of planned and implemented regulatory sandboxes for AI in the six selected cases points to the centrality of these sandboxes for establishing learning about both technology and the adequacy of policy frameworks. Major themes emerging from these sandboxes are a focus on mutual learning based on close collaboration, flexibility through legal exemptions and other means, reporting mechanisms for advancing knowledge, and effective collaboration, for which trust is arguably of fundamental importance. The examined cases overall point to the importance of creating methodologies for reliably producing policy-relevant insights through regulatory sandboxes, i.e., through fostering collaborative experiential learning, establishing mechanisms of information transmission, and producing reports that facilitate knowledge dissemination. The importance of international collaboration is clearly reflected in initiatives like Mexico’s regulatory sandbox, which has significantly benefited from the support of the United Kingdom’s technology cooperation resources. Similarly, regional efforts such as the EU–LAC Digital Alliance illustrate broader partnerships in this area. As part of this alliance, stakeholders have identified potential avenues for collaboration between the European Union and Latin America and the Caribbean in the field of artificial intelligence. In terms of regulation, this includes providing technical assistance, exchanging best practices in the implementation of AI sandboxes, and leveraging lessons learned to foster innovative regulatory approaches (Economic Commission for Latin America and the Caribbean (ECLAC), 2024).

To orient collaboration between stakeholders toward reducing information asymmetries, regulatory sandboxes need to create suitable incentives for participants and an environment that fosters trust – while avoiding that private actors exert disproportionate influence on regulatory frameworks. The design of regulatory sandboxes must also enable regulators to gain an understanding not only of the technology, but also the suitability of different regulatory rules. Overall, the design of sandboxes should focus on deepening the learning experiences, ensuring access to critical information, and even creating incentives that balance transparency with confidentiality protection.

It should be noted that the examined regulatory sandboxes may not yet function as true experimental settings – perhaps rather as experiential learning initiatives. Regulatory experimentation involves testing different regulatory measures and comparing outcomes. At present, the AI sandbox experiences we have examined do not demonstrate the integration of such comparative methodologies. While sandboxes can undoubtedly be valuable spaces, regulatory authorities may not yet perceive experimentation as their primary need, and it remains unclear whether such an approach is feasible or desirable within current sandbox designs.

Finally, the analysis above also contributes to the broader agenda of regulatory sandbox harmonization. This process has already begun at the national level in some countries in which sandboxes exist across various sectors such as health, finance, telecommunications, and data protection, among others. In countries like the United Kingdom, efforts are underway to create more harmonized approaches by developing experimentation projects focused on a single product that involve multiple authorities (Digital Regulation Cooperation Forum (DRCF), 2024). Other countries, such as Germany, are considering the adoption of general sandbox legislation (Reallabore-Gesetz) that provides a common framework for sandbox design and participation (National Academy of Science and Engineering, 2023). Colombia, in turn, is proposing the establishment of a national sandbox committee to coordinate and align such initiatives in different sectors (Guío, 2022).

The question now is whether harmonization efforts will also evolve into an international initiative to develop common standards for regulatory sandboxes. A notable step in this direction is the Council of Europe Treaty No. 225 (Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law), which explicitly encourages signatories to develop ‘controlled environments’ for AI experimentation in Article 13 (Safe Innovation) (Council of Europe, 2024). Efforts toward alignment are also emerging in sectors with extensive experience in regulatory sandboxes, such as finance (Financial Conduct Authority (FCA), 2023) and telecommunications (Faye, 2023). In this latter area, the International Telecommunication Union (ITU) has explored the possibility of a global telecommunications sandbox, building on precedents set by the sandboxes of ICT regulators in Colombia, Mexico, France, Thailand, and Saudi Arabia, among others.

A set of minimal standards for regulatory sandboxes could help to integrate results of regulatory learning – including about the suitability of the design of the sandboxes themselves. An important question is, however, how far such standards can go. Despite the use of similar terminology, regulatory sandboxes in majority world countries primarily serve as mechanisms for increasing institutional knowledge and understanding AI technologies and their implications. While some jurisdictions such as the European Union and the United States may share these objectives, their sandbox initiatives are often more explicitly aimed at accelerating innovation while having to ensure compliance with very different extant regulations. Efforts to harmonize and coordinate regulatory sandboxes must, therefore, also take important differences into account, which may mean that AI regulation necessarily remains fragmented and regionally distinct. In any case, the path toward harmonization requires a broader base of national experiences and the development of stronger capacities for regulatory experimentation in AI governance. Further progress toward the actual implementation of AI regulatory sandboxes is needed, as many of the announced initiatives have yet to be operationalized, including in some European countries. Advancing sandbox implementation efforts is essential to gaining a clearer understanding of what these experimental spaces entail in practice and where meaningful points of convergence may exist. Ultimately, regulatory sandboxes tend to reflect the unique needs and conditions of the contexts in which they are developed.

Acknowledgements

We thank the two anonymous reviewers for their thoughtful comments on the manuscript.

Funding statement

There is no funding to declare.

Competing interests

The authors declare that there is no conflict of interest.

Appendix A1. Material used for the analysis

Armando Guio Español is the Executive Director of the Global Network of Internet & Society Centers and is an affiliated researcher at the Berkman Klein Center at Harvard University and the TUM School of Governance, Munich. He led Colombia’s AI Strategy and has advised governments, international organizations, and institutions such as the OECD, the World Bank, and UNESCO.

Pascal D. Koenig is Assistant Professor for Artificial Intelligence and Governance at Vrije Universiteit Amsterdam. His research deals with the consequences of digital technologies, particularly the use of data and artificial intelligence, for democratic governance. Recent work has appeared in Big Data & Society, Government Information Quarterly, Public Administration, and Public Management Review.

References

Alaassar, A., Mention, A.-L., & Aas, T. H. (2020). Exploring how social interactions influence regulators and innovators: The case of regulatory sandboxes. Technological Forecasting and Social Change, 160, 1–16. doi:10.1016/j.techfore.2020.120257
Allen, H. (2019). Regulatory sandboxes. The George Washington Law Review, 87(3), 579–645.
Almada, M., & Petit, N. (2025). The EU AI Act: Between the rock of product safety and the hard place of fundamental rights. Common Market Law Review, 62(1), 85–120. doi:10.54648/COLA2025004
Appaya, S., & Jeník, I. (2019). Running a sandbox may cost over $1M, survey shows. Washington, D.C.: CGAP. https://www.cgap.org/blog/running-sandbox-may-cost-over-1m-survey-shows. Accessed 21 June 2025.
Armstrong, H., Gorst, C., & Rae, J. (2019). Renewing regulation: ‘Anticipatory regulation’ in an age of disruption. London: Nesta.
Beckstedde, E., Correa Ramírez, M., Cossent, R., Vanschoenwinkel, J., & Meeus, L. (2023). Regulatory sandboxes: Do they speed up innovation in energy? Energy Policy, 180, 1–13. doi:10.1016/j.enpol.2023.113656
Bergmann, M., Schäpke, N., Marg, O., Stelzer, F., Lang, D. J., Bossert, M., Gantert, M., Häußler, E., Marquardt, E., Piontek, F. M., Potthast, T., Rhodius, R., Rudolph, M., Ruddat, M., Seebacher, A., & Sußmann, N. (2021). Transdisciplinary sustainability research in real-world labs: Success factors and methods for change. Sustainability Science, 16(2), 541–564. doi:10.1007/s11625-020-00886-8
Bijkerk, W. (2021). Regulatory sandboxes, innovation hubs, and other regulatory innovation tools in Latin America and the Caribbean. Washington, D.C.: Inter-American Development Bank. doi:10.18235/0003196
BMWi. (2019). Freiräume für Innovationen: Das Handbuch für Reallabore [Freedom for innovation: The handbook for real-world laboratories]. Berlin: Bundesministerium für Wirtschaft und Energie.
Buocz, T., Pfotenhauer, S., & Eisenberger, I. (2023). Regulatory sandboxes in the AI Act: Reconciling innovation and safety? Law, Innovation and Technology, 15(2), 357–389. doi:10.1080/17579961.2023.2245678
Busuioc, M., Curtin, D., & Almada, M. (2023). Reclaiming transparency: Contesting the logics of secrecy within the AI Act. European Law Open, 2(1), 79–105. doi:10.1017/elo.2022.47
Chen, C. (2020). Rethinking the regulatory sandbox for financial innovation: An assessment of the UK and Singapore. In Fenwick, M., Van Uytsel, S., & Ying, B. (Eds.), Regulating FinTech in Asia: Global contexts, local perspectives (pp. 11–30). Singapore: Springer. doi:10.1007/978-981-15-5819-1_2
Cornelli, G., Doerr, S., Gambacorta, L., & Merrouche, O. (2024). Regulatory sandboxes and fintech funding: Evidence from the UK. Review of Finance, 28(1), 203–233. doi:10.1093/rof/rfad017
Council of Europe. (2024). Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Vilnius: Council of Europe. https://rm.coe.int/1680afae3c. Accessed 21 June 2025.
Digital Regulation Cooperation Forum (DRCF). (2024). AI and Digital Hub. London: DRCF. https://www.drcf.org.uk/ai-and-digital-hub. Accessed 21 June 2025.
Dunlop, C. A., & Radaelli, C. M. (2016). Teaching regulatory humility: Experimenting with student practitioners. Politics, 36(1), 79–94. doi:10.1111/1467-9256.12075
Economic Commission for Latin America and the Caribbean (ECLAC). (2024). Artificial intelligence in Latin America and the Caribbean: Navigating strategic challenges and opportunities – Session two: Digital alliance EU-LAC, building a digital future together. Santiago de Chile: ECLAC. https://www.cepal.org/sites/default/files/events/files/ia_session_da_prep_v13.pdf. Accessed 21 June 2025.
Electronic Transactions Development Agency (ETDA). (2023). Announcement of the NBTC regarding the Artificial Intelligence Innovation Testing Center (AI Sandbox). Bangkok: Electronic Transactions Development Agency. https://www.law.go.th/listeningDetail?survey_id=MTc1NERHQV9MQVdfRlJPTlRFTkQ=. Accessed 21 June 2025.
Fahy, L. A. (2022). Fostering regulator–innovator collaboration at the frontline: A case study of the UK’s regulatory sandbox for fintech. Law & Policy, 44(2), 162–184. doi:10.1111/lapo.12184
Faye, T. (2023). A case for ICT regulatory sandbox. Geneva: International Telecommunication Union (ITU), World Bank. https://digitalregulation.org/a-case-for-ict-regulatory-sandbox/. Accessed 21 June 2025.
Federal Ministry for Economic Affairs and Climate Action (BMWK). (2024). Guide: New flexibility for innovation – Real-world laboratories in the regulatory framework. Berlin: BMWK.
Financial Conduct Authority (FCA). (2023). Global Financial Innovation Network. London: FCA. https://www.fca.org.uk/firms/innovation/global-financial-innovation-network. Accessed 21 June 2025.
Foreign, Commonwealth & Development Office (FCDO). (2025). London: UK Government. https://www.gov.uk/government/organisations/foreign-commonwealth-development-office/about/research. Accessed 21 June 2025.
Gasser, U., & Almeida, V. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. doi:10.1109/MIC.2017.4180835
Goo, J. J., & Heo, J.-Y. (2020). The impact of the regulatory sandbox on the fintech industry, with a discussion on the relation between regulatory sandboxes and open innovation. Journal of Open Innovation, 6(43), 1–18. doi:10.3390/joitmc6020043
Guio, A. (2024). Regulatory sandboxes in developing economies: An innovative governance approach. Santiago de Chile; Eschborn: ECLAC; GIZ.
Guio, A. (2022). Hoja de Ruta: Comité de Sandboxes Regulatorios y Mecanismos Exploratorios [Roadmap: Committee on regulatory sandboxes and exploratory mechanisms]. Bogotá: Corporación Andina de Fomento (CAF).
Guston, D. H. (2014). Understanding ‘anticipatory governance’. Social Studies of Science, 44(2), 218–242. doi:10.1177/0306312713508669
Haeffele, S., & Hobson, A. (Eds.). (2019). Introduction. In The need for humility in policymaking: Lessons from regulatory policy (pp. xi–xvii). London: Rowman & Littlefield International.
Hellmann, T. F., Montag, A., & Vulkan, N. (2024). The impact of the regulatory sandbox on the FinTech industry. SSRN Electronic Journal. doi:10.2139/ssrn.4187295
Howlett, M., Capano, G., & Ramesh, M. (2018). Designing for robustness: Surprise, agility and improvisation in policy design. Policy and Society, 37(4), 405–421. doi:10.1080/14494035.2018.1504488
Johnson, W. G. (2022). Caught in quicksand? Compliance and legitimacy challenges in using regulatory sandboxes to manage emerging technologies. Regulation & Governance, 17(3), 709–725. doi:10.1111/rego.12487
la Peña Sissi, D., Ernesto, I., & Cristina, S. (2024). Panorama de la IA en México: Hacia la gobernanza de la IA y la relevancia del Sandbox de IA [Overview of AI in Mexico: Toward AI governance and the relevance of the AI sandbox]. Mexico City: Academia Mexicana de Ciberseguridad y Derecho Digital. https://www.amcid.org/page/sandboxregulatoriomexico. Accessed 21 June 2025.
LUMSA. (2025). Spain: Sandbox IA. Rome: LUMSA Università. https://betteregulation.lumsa.it/spain-sandbox-ia. Accessed 21 June 2025.
Macrae, C., & Ansell, C. K. (2024). Generative spaces: Collaboration, learning and innovation in a regulatory sandbox. SSRN Electronic Journal. doi:10.2139/ssrn.4825907. Accessed 21 June 2025.
Marjosola, H. (2021). The problem of regulatory arbitrage: A transaction cost economics perspective. Regulation & Governance, 15(2), 388–407. doi:10.1111/rego.12287
Markellos, R. N., Ennis, S. F., Enstone, B., Manos, A., Pazaitis, D., & Psychoyios, D. (2024). Worldwide adoption of regulatory sandboxes: Drivers, constraints and policies. SSRN Electronic Journal. doi:10.2139/ssrn.4764911
Ministry of ICT and Innovation. (2023). The National AI Policy: To leverage AI to power economic growth, improve quality of life and position Rwanda as a global innovator for responsible and inclusive AI. Rwanda: Ministry of ICT and Innovation.
Molinari, F., Rachinger, S., Kordel, L., Hennen, L., & Van Roy, V. (2022). Exploratory sandboxes: A new approach to test and learn in policy-making. Luxembourg: Publications Office of the European Union.
Nabil, R. (2024). Artificial intelligence regulatory sandboxes. Journal of Law, Economics & Policy, 19(2), 295–348.
National Academy of Science and Engineering. (2023). Reallabore-Gesetz (legislation for regulatory sandboxes): New opportunities for international competitiveness and faster innovations. Munich: acatech.
Neuman, W. R. (2023). Congress can’t regulate AI. It’s too complex. Boston: The Boston Globe. https://www.bostonglobe.com/2023/09/15/opinion/congress-artificial-intelligence-regulation/. Accessed 21 June 2025.
OECD. (2023). Regulatory sandboxes in Artificial Intelligence. Paris: Organisation for Economic Co-operation and Development.
Otter, K. (2024). Experimentierklauseln, Reallabore und Verhältnismäßigkeit [Experimentation clauses, real-world laboratories, and proportionality]. Die Öffentliche Verwaltung, 8, 309–316.
Parenti, R. (2020). Regulatory sandboxes and innovation hubs for FinTech. Strasbourg: European Parliament.
Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 1–16. doi:10.1016/j.patter.2024.100988
Peirce, H. (2018). Beaches and Bitcoin: Remarks before the Medici Conference. Washington, D.C.: U.S. Securities and Exchange Commission. https://www.sec.gov/newsroom/speeches-statements/speech-peirce-050218. Accessed 21 June 2025.
MPSAR. (2021). Sandbox framework for adoption of innovative technologies in the public sector. Port Louis: Ministry of Public Service and Administrative Reforms. https://civilservice.govmu.org/Documents/Circulars%202021/Booklet%20Sandbox%20framework.pdf. Accessed 21 June 2025.
Ranchordás, S. (2021a). Experimental regulations for AI: Sandboxes for morals and mores. Morals + Machines, 1, 86–100. doi:10.5771/2747-5182-2021-1-86
Ranchordás, S. (2021b). Experimental regulations and regulatory sandboxes – law without order? Law and Methods, 12, 1–23.
Ranchordás, S., & Vinci, V. (2024). Regulatory sandboxes and innovation-friendly regulation: Between collaboration and capture. SSRN Electronic Journal. doi:10.2139/ssrn.4696442
Ringe, W.-G., & Ruof, C. (2018). A regulatory sandbox for robo advice. SSRN. https://ssrn.com/abstract=3188828. Accessed 21 June 2025. doi:10.2139/ssrn.3188828
Ringe, W.-G., & Ruof, C. (2020). Regulating Fintech in the EU: The case for a guided sandbox. European Journal of Risk Regulation, 11(3), 604–629. doi:10.1017/err.2020.8
Ruschemeier, H. (2023). AI as a challenge for legal regulation – The scope of application of the artificial intelligence act proposal. ERA Forum, 23(3), 361–376. doi:10.1007/s12027-022-00725-6
Ruschemeier, H. (2025). Thinking outside the box? Regulatory sandboxes as a tool for AI regulation. In Steffen, B. (Ed.), Bridging the gap between AI and reality (pp. 318–332). Cham: Springer Nature Switzerland. doi:10.1007/978-3-031-73741-1_20
SEDIA. (2025). Sandbox IA. Madrid: Secretaría de Estado de Digitalización e Inteligencia Artificial (SEDIA). https://avance.digital.gob.es/sandbox-IA/Paginas/preguntas-frecuentes-Sandbox-IA.aspx. Accessed 21 June 2025.
SIC. (2021). Sandbox on privacy by design and by default in Artificial Intelligence projects. Bogotá: SIC Colombia Publications. https://www.sic.gov.co/sites/default/files/files/2021/150421%20Sandbox%20on%20privacy%20by%20design%20and%20by%20default%20in%20AI%20projects.pdf. Accessed 21 June 2025.
Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. doi:10.1016/j.respol.2013.05.008
Sunstein, C. R. (2022). ‘We test’: An imagined regulatory future. Harvard Public Law Working Paper No. 22-19. doi:10.2139/ssrn.4112291
Taeihagh, A. (2025). Governance of generative AI. Policy and Society, Online First, 1–22. doi:10.1093/polsoc/puaf001
Tan, S. Y., & Taeihagh, A. (2021). Adaptive governance of autonomous vehicles: Accelerating the adoption of disruptive technologies in Singapore. Government Information Quarterly, 38(2), 1–15. doi:10.1016/j.giq.2020.101546
Truby, J., Brown, R. D., Ibrahim, I. A., & Parellada, O. C. (2022). A sandbox approach to regulating high-risk artificial intelligence applications. European Journal of Risk Regulation, 13(2), 270–294. doi:10.1017/err.2021.52
UK Government. (2023). The UK’s International Technology Strategy. London: HM Government. https://www.gov.uk/government/publications/uk-international-technology-strategy/the-uks-international-technology-strategy. Accessed 21 June 2025.
Washington, P. B., Rehman, S. U., & Lee, E. (2022). Nexus between regulatory sandbox and performance of digital banks – A study on UK digital banks. Journal of Risk and Financial Management, 15(12), 610–627. doi:10.3390/jrfm15120610
World Bank. (2020). Global experiences from regulatory sandboxes. Washington, D.C.: World Bank.
Wörsdörfer, M. (2024). Biden’s Executive Order on AI and the E.U.’s AI Act: A comparative computer-ethical analysis. Philosophy and Technology, 37(3), 1–27. doi:10.1007/s13347-024-00765-5
Table 1. Selected cases of regulatory sandboxes for AI