This special issue is the final publication of my project “Trial and Error: Experimental Regulation of Digital Technologies within the multi-layered regulatory system”, funded by the Daimler and Benz Stiftung. I cannot imagine a better forum than the Cambridge Forum on AI: Law and Governance, which gave me the opportunity to include fresh, interdisciplinary, and innovative perspectives on this topic. The aim of this issue is to provide a first overview and collection of different perspectives on the emerging field of regulatory sandboxes in the context of AI regulation, including international case studies as well as the new provisions under the EU AI Act. This special issue combines research articles, international case studies, and policy papers.
1. Prelude: experimental regulation and digital technologies
The regulation of digital technologies, especially “AI,” has been one of the most debated topics in law in recent years. The dynamics of digital transformation, especially the development of AI, seem to be at odds with the inherently reactive concept of legal regulation, which can often respond to developments only after the fact. Digital technology develops at a rapid pace, while law is slower, because democratic lawmaking requires compromise and constitutional procedure. The latter is a good thing and should not be changed or abandoned, but an approach that strengthens the anticipation of future developments can only enhance democratic lawmaking and its execution.
In addition to irreversibility, certain digital technologies are characterised by a qualitative leap with no analogous equivalent in the physical world. One example is the quantitative data analysis performed by algorithmic decision-making systems: without the appropriate digital tools, the amount of information would simply be unmanageable for humans. Forecasting decisions are not foreign to the law; the precautionary principle, which has its origins in environmental law, is a legal instrument for responding to future developments that cannot be anticipated in detail.
In the field of digital and AI regulation, the regulatory knowledge gap is widening – authorities and legislators cannot predict the future, yet laws must be able to respond to future developments – due to uncertainties about the impact of a technology characterised by considerable complexity and opacity. The relevant questions are no longer just about the regulation of technology, but about the technology of regulation. Finding the right path is complex: abandoning regulation by law is not an option under the rule of law, but rewriting entire codifications in anticipatory obedience to developments that have not yet materialised is rarely fruitful. A more flexible yet effective, experimental form of regulation is needed (Ruschemeier, 2025).
Regulatory sandboxes are not a new research topic, either in academia in general or from the perspective of law (Alaassar, Mention & Aas, 2020; Allen, 2019; Buckley, Arner, Veidt & Zetzsche, 2020; Buocz, Pfotenhauer & Eisenberger, 2023; Chen, 2020; Everhart, 2020; Gerlach & Rugilo, 2019; Goo & Heo, 2020; Hapsari, Maroni, Satria & Ariyani, 2019; Heldeweg, 2015; Jenik & Lauer, 2017; Makarov & Davydova, 2021; Nabil, 2024; Ranchordas, 2021a, 2021b; Ruschemeier, 2025). Sandboxes in legal-regulatory contexts are a tool of experimental regulation, a concept that promises more flexibility and adaptiveness in a world that is becoming more complex by the day, thereby challenging a regulatory approach of law that relies on a defined regulatory object, identifiable addressees, and established criteria of prognosis. Experimental regulation, as an umbrella term, combines empirical findings with legal requirements in a targeted manner to promote innovation on the one hand and to generate expert knowledge on new technologies on the other. Experimentation itself is deeply rooted in science, not only in fields that rely on scientific experiments to identify and prove facts, but also in the areas of law and policy.
There are three starting points for approaching experimental regulation from a legal point of view:
First, the recognition that regulatory authorities and legislative entities structurally lack knowledge about the subject of regulation and its future development when it comes to emerging technologies. This is not a problem per se; different areas of law operate on prognosis, such as environmental law, security law, and effectively every area of law that aims at prevention. The precautionary principle is deeply rooted in European and many other national laws. That is why experimental regulation is particularly valuable in a rule-of-law sense when it not only promotes supposed innovations but also gives at least equal weight to regulatory learning. Ideally, experimental regulation can lead to better knowledge, better data, and thus a better basis for decision-making on the part of both the executive and legislative branches regarding the object of regulation, the actors involved, and any regulatory gaps or needs for adjustment.
Second, the idea of trial and error is nevertheless not something that the law usually envisages. In constitutional democracies, the rule of law, legal certainty, and clarity are important constitutional requirements. Legislators must weigh their regulatory requirements proportionately, and those subject to regulation must be able to foresee what the law requires of them, with no arbitrary exceptions to applicable law. The principle of equal treatment prohibits the unilateral privileging of certain actors.
Third, experimental methods are extra-juridical in nature, starting with the term “sandbox,” which stems from computer science. Regulatory sandboxes are meant to create empirical knowledge, which is perceived differently in various areas of law. Therefore, the successful implementation of regulatory sandboxes requires at least interdisciplinary openness, if not a fully interdisciplinary approach to the issue. In AI regulation, particularly in the AI Act, it is evident that this approach is lacking in many areas, as the focus has been solely on technical or product safety aspects (Ruschemeier & Bareis, 2025).
2. The new legal landscape: regulatory sandboxes under the AI Act
This project was driven by a new perspective — the aim of using the sandbox concept as part of experimental regulation in AI governance. The AI Act has made regulatory sandboxes for AI systems mandatory at the Member State level, opening a whole new field of comparative research opportunities. This special issue aims to serve as a starting point by combining current state-of-the-art analyses of regulatory sandboxes under the new regulatory regime of the AI Act, laying the groundwork for the empirical and legal developments that will evolve once the provisions come into force.
With the provisions of Articles 57 et seq. of the AI Act, the regulator seeks to strike a balance between responsibility for innovation and openness to innovation. The legal provisions on regulatory sandboxes can be found in Chapter 6 of the AI Act, on measures to promote innovation. Further explicit objectives are set out in Article 57(9): improving legal certainty, promoting the exchange of best practices, fostering innovation, competition, and the development of an AI ecosystem, contributing to evidence-based regulatory learning, and facilitating and accelerating the access of AI systems to the Union market. During the legislative process, the focus seemed to shift towards speeding up market access and fostering the nebulous goal of innovation: the AI Act now establishes the opportunity to test high-risk AI systems under real-world conditions outside of regulatory sandboxes (Article 60).
Only time will tell whether this will result in a decline in participation in regulatory sandboxes, which require cooperation with the supervisory authority. A considerable number of issues are yet to be addressed in the AI Act and will require subsequent clarification through the Commission’s delegated acts.
The financial implications of regulatory sandboxes, including administrative and resource costs, are justified only if the objective of effective regulation is taken seriously, with the overarching goal of promoting the common good within the framework of the rule of law and democratic principles. Not all proposed advancements in AI contribute to these objectives in equal measure. Consequently, the balance of interests in the area of sandboxes should not be oriented exclusively towards accelerating market entry; rather, the opportunity should be seized to adequately assess risks to fundamental rights and collective interests, and to operationalise regulatory learning. It is therefore vital that fundamental rights (including data protection) and consumer protection are placed at the centre of attention and not casually dispensed with in the context of sandboxes. In terms of market dynamics, the use of sandboxes by major technology companies should be prohibited, and constrained for other well-established entities that possess significant resources. If executed effectively, regulatory sandboxes have the potential to facilitate the market mechanisms that have so far been impeded by the pre-eminence of a small number of global players. From a legal perspective, regulatory sandboxes represent a fascinating example of the balance between legal certainty and innovation, legal compliance and exceptions, and the relationship between regulated entities and the competent authorities. All of these issues necessitate constant recalibration.
3. Making regulatory sandboxes work
Regulatory sandboxes must be attractive to participants if they are to attain the goals that the AI Act envisages. Trust, cooperation, and goodwill are required from both sides (regulators and participants), and these are factors that cannot be enforced by law. While the Commission has yet to issue standards and guidelines, the biggest challenge for Member States is to get sandboxes up and running by August 2026.
Nathan Genicot and Thiago Guimaraes Moraes focus on the AI Act’s requirements for regulatory sandboxes, their interaction with other experimentation tools, and the difference between sandboxes and real-life testing. Their analysis finds that sandboxes have become a popular instrument within the European legal landscape, mentioned in different regulations, but that the AI Act establishes the most ambitious sandbox framework by making them obligatory in all Member States. Moreover, the authors argue that sandboxes and testing under real-world conditions are two different instruments under the AI Act. After explaining the different concepts and implementations of regulatory sandboxes, e.g., with or without the removal of legal constraints, the competent authorities, and the relationship to conformity assessments, the article analyses the differences between sandboxes and real-life testing in detail. As testing under real-world conditions applies to systems that have yet to be placed on the market, the provisions offer a new form of regulatory flexibility, given that such systems are only subject to the provisions on prohibited AI practices under Article 5 AI Act. The requirements for sandboxes and for testing under real-world conditions outside sandboxes partially overlap. It thus remains unclear whether the provisions apply only to high-risk AI systems, a question that requires adept navigation of the risks of regulatory capture and the requirements for transparency. Furthermore, the authors observe that testing under real-world conditions does not foster regulatory learning; instead, it serves more as a last step before deploying systems on the market. The authors argue convincingly that obtaining informed consent and involving data protection authorities should become best-practice standards for sandboxes and real-life testing settings whenever personal data is processed.
Deirdre Ahern explores the pressing question of how to operationalise regulatory sandboxes under the AI Act, with a focus on capacity, coordination, and attractiveness to providers. Ahern argues that a decentralised regulatory architecture for sandboxes needs EU leadership and poses hard questions regarding specific design choices. After summarising the development of sandboxes in the fintech sector, including the discussions and controversies surrounding them, Ahern points out the different approach to regulatory sandboxes under the AI Act and analyses the interplay of high-risk system provider obligations with regulatory sandboxes. The article highlights the complexity and challenges of setting up regulatory sandboxes for Member States, especially in terms of identifying the risks to fundamental rights, the relaxation of existing rules, and the hurdles for data use. Participants can derive potential compliance benefits from sandbox exit reports, which must be taken into account by the market surveillance authorities, but these do not extend to a presumption of conformity. The AI Act sandbox model does, however, potentially offer the advantage of enhancing the relationship between regulators and participants (or potential participants) in the sandbox. Ahern’s analysis shows that regulatory sandboxes will most likely contribute indirectly, in the form of assistance with navigating regulatory interfaces. The author also predicts that regulatory sandboxes will be extremely resource-intensive for regulators due to the complexities of supervision. Additionally, uneven and varied approaches to regulatory sandboxes taken by different Member States could undermine the harmonisation goal sought by the AI Act and, ultimately, the rule of law. Trust and cooperation are strictly necessary for regulatory sandboxes to succeed. The article maps out which Member States have already established a sandbox (Spain) and which have made implementation decisions, noting that most have not yet publicly announced any sandbox implementation plans. What is needed, writes Ahern, is the implementation of guidelines for sandboxes, appropriate coordination between different Member States, and guidance for innovators navigating competing supports.
Michael Kolain investigates the possibility of joint regulatory sandboxes in law enforcement. The contribution focuses on the conditions for testing under real-world conditions under the AI Act, discussing a case study from Germany. Kolain highlights how law enforcement’s desire for data-intensive technologies, especially AI, conflicts with the prevention of surveillance and the protection of fundamental rights. Regulatory sandboxes could be a tool to manage this tension by balancing these conflicting interests. This is particularly relevant in light of the fundamental-rights-oriented risk classification in Article 6(2) in connection with Annex III AI Act, which encompasses different areas of law enforcement. The author also discusses real-world testing in sandboxes within the context of legal-political debates around digital sovereignty and the dependence on larger firms and AI systems from outside the European Union. Even if the main aim of regulatory sandboxes under the AI Act is not directed at testing AI systems for law enforcement specifically, the author argues that sandboxes could bring law enforcement authorities and European SMEs together to facilitate regulatory learning and, further, to inform the drafting of legislative proposals. The article develops the idea of a joint regulatory sandbox in the field of law enforcement to bring together competences that are divided in federal states like Germany. Additionally, Article 60 could enable law enforcement agencies to test new technology under real-world conditions, combining police databases and volunteers, with a focus on privacy-enhancing technologies instead of mass-surveillance tools.
Antonella Zarra investigates regulatory sandboxes under the AI Act from a law and economics perspective, raising the question of the extent to which sandboxes can correct market and government failures and enhance regulatory efficiency. Zarra identifies four aspects of market failure in the context of AI, addressing information asymmetries, imperfect competition, and positive and negative externalities, as well as aspects of government failure in the regulation of AI. Drawing on the Collingridge dilemma, the author explains the tensions arising from the different paces of development of AI technologies and law, before analysing the lessons learned from regulatory sandboxes in the fintech sector. The article sees challenges and risks in the regulatory sandbox model under the AI Act, but points out that it may facilitate good AI governance by addressing the market and regulatory failures listed earlier. To enhance market access, regulatory learning, benefits for the societal understanding of AI, and other positive developments, sandboxes must avoid regulatory capture, the undervaluation of AI risks, and the risk of becoming “innovation theatre.”
Erik Longo, Filippo Bagni, and Fabio Seferi analyse policy challenges and strategic lines in the AI regulatory sandbox ecosystem. Following an examination of policy interests and their interaction with regulatory sandboxes, the authors turn their attention to the possibilities of a cross-border sandbox under the AI Act. They argue that a cross-border joint regulatory sandbox would streamline access and allow for a comprehensive framework instead of separate national schemes. Additionally, whereas Member State authorities may face capacity issues in establishing and running sandboxes, a joint effort would allow them to pool their resources and expertise. Joint cross-border sandboxes could also minimise the risk of forum shopping and facilitate the involvement of relevant actors via a single entry point. The authors then focus on the implementation of cybersecurity guidelines within regulatory sandboxes and the interplay with the Cyber Resilience Act, and make a strong case for a joint regulatory sandbox to implement cybersecurity requirements.
4. Case studies and interdisciplinary perspectives
Helpful insights can be drawn from various international and interdisciplinary case studies of regulatory sandboxes. Established sandboxes in the field of data protection law, for example, show how important the unwritten factors of the self-image of the competent authority, the communication climate, and public reception are for successful sandbox design. Experiences from countries that have not yet implemented comprehensive AI regulation illustrate the opportunities, but also the risks, to which the AI Act is not immune.
In their case study, Lucas Costa dos Anjos and Pablo Leurquin analyse the Brazilian central bank’s practice of implementing financial innovation through regulatory sandboxes. The authors contextualise the predominantly positive Brazilian literature on promoting innovation through sandboxes within the broader discussion on AI regulation. The article convincingly examines the contrast and supposed antagonism between regulation and innovation: in many cases, regulation creates the very environment in which innovations can succeed. In Brazilian law, the concept of sandboxes has been explicitly codified since 2021. The competent authority is the central bank, and products in the field of finance and payments are being tested. The authors point out that by allowing temporary exemptions from existing legal frameworks, sandboxes often prioritise market-driven solutions over structured, long-term, state-led innovation strategies. Furthermore, within the framework of sandboxes, private actors are credited with most of the initiative, which could lead supervisory authorities to weaken their own competence in these areas, thereby creating the danger of regulatory capture. The establishment and implementation of the regulatory sandbox by the central bank have proven to be complex and resource-intensive. In this context, the authors discuss the risks of privatising regulation, as economies such as Brazil’s may see a reduction in government oversight under the guise of promoting innovation. This is particularly true when supervisory authorities play a less active role in the sandboxes. The authors see this as part of a broader neoliberal regulatory trend in which states prioritise deregulation and private sector empowerment over strategic governance. In their conclusion, the authors maintain a degree of scepticism, underscoring that while the promotion of innovation holds promise, the long-term success of sandboxes is contingent upon the ability to navigate potential risks and to ensure that implementation does not compromise other public interests.
Armando Guio Español and Pascal Koenig argue that sandboxes should function as a tool for enhancing policymakers’ understanding of technological development, rather than being seen solely as a space for experiments that promote innovation. AI poses challenges to regulators and policymakers that require a proactive learning process and the balancing of information asymmetries arising from the opaque functioning of AI systems. The article focuses on the role of sandboxes in countries outside the European Union that lack comprehensive AI regulation, discussing regulatory sandboxes as innovation incubators. The authors draw on different studies to point out that the success of regulatory sandboxes has been measured primarily by their ability to promote innovation and market access, even though these benchmarks are hard to define in detail and therefore remain vague. To foster regulatory learning and better inform the policy cycle, the article points to establishing collaboration between stakeholders and to the opportunity to evaluate not only the technology but also the rules of regulation. Six case studies from Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand illustrate to what extent regulatory learning plays a role in legal regimes without comprehensive AI regulation.
Regine Paul and Heidrun Åm present the case of the “responsible AI” sandbox implemented by the Norwegian Data Protection Authority (DPA). The authors draw on an interpretive policy analysis of documents and semi-structured interviews with officials to identify role conflicts as well as the conditions and challenges the authority perceives around sandboxes. They also focus on the dialogue and cooperation between the authority and providers within the regulatory sandbox framework, and on how authorities themselves perceive and implement the concept of sandboxes. The examination of the Norwegian DPA is particularly interesting, as this authority has established the most comprehensive regulatory environment for testing AI systems in Europe so far, and it has a reputation as a powerful regulator with a strong focus on privacy and fundamental rights protection. The authors’ findings hold promising insights for the implementation of sandboxes under the AI Act, emphasising how the DPA focuses on dialogue, advice, and mutual learning within the sandbox framework, deliberately distancing itself from the classic form of action by official order. The recommendations are non-binding and solution-oriented; to date, no sandbox report has issued a clear recommendation for prohibition. Within the authority, the sandboxes are seen as an opportunity to discuss unclear issues relating to the GDPR on the basis of practical cases, with regulatory learning also playing an important role. The selection of sandbox projects is likewise based on their usefulness to other actors and the public interest, which is where the DPA’s public relations work comes in: in addition to official reports, the authority produces seminars, podcasts, and newsletters, providing a potential role model in public communication for authorities under the AI Act sandbox regime.
Amir Cahane and Michael Sierra describe Israel’s policy approach to sandboxes in their case study. The authors trace the development of Israel’s AI policy strategy with regard to sandboxes, which are particularly aimed at promoting innovation in the high-tech sector, an area of special relevance to the country. Using the example of sandboxes for autonomous driving and fintech, the authors trace the legal strategy and the institutions involved. The key point of this strategy was the legislative amendment passed in 2022, which allowed autonomous driving on a trial basis, with eligibility extended to companies registered in Israel that met certain cybersecurity requirements. The responsible ministry must publicly release information about the participants, the number of autonomous vehicles, and the roads or areas affected. As under the AI Act, liability issues remain unaffected. In the fintech sector, the model is based on the approach tested by the FCS, which involved various regulatory institutions. The authors discuss whether AI risks can be adequately addressed in the context of sector-specific sandboxes and emphasise that the socio-economic impact should be given greater consideration, along with the possibilities for public engagement and questions of data protection and privacy. The majority of the sandbox projects have been located in the public sector, with a recent focus on generative AI applications.
Christoph Bieber explores the case of the electoral sandbox in California from the perspective of political science. The author argues that modern elections can be conceived as socio-technical systems that rely in many ways on technical solutions, thus giving rise to a need for testing new technologies. The article illustrates how political actors typically deal with technological developments and challenges. Taking as its model the electoral process in Germany, which the author sees as akin to an algorithm, the article analyses the rise of electronic electoral solutions in other countries. Even though the specific technological innovations, procedures, and tools are quite different, the design of rules and the actors involved are transferable to the regulation of AI. Bieber points out that the State of California has developed a robust system for integrating new technologies that could serve as a blueprint for AI systems in managing elections. The review and testing of the certification process for voting systems by the Office of Voting Systems Technology Assessment, in particular, shows agility and flexibility in the assessment of technologies and reveals patterns similar to regulatory sandboxes. The article concludes with starting points for lawmakers in AI regulation, emphasising cross-sectoral expertise, robust institutional infrastructures, room for discussion, and the participation of different actors, as well as firm political leadership.
5. Outlook
Regulatory sandboxes are not an end in themselves and should therefore not be presented as a symbolic response to the often-demanded modernisation of regulation, but rather as proactive action on the part of regulators. It is imperative for authorities and lawmakers to look beyond the confines of sandboxes and establish mechanisms that systematically feed the findings into the subsequent regulatory process. The case study from Brazil and the various analyses show that sandboxes harbour risks of a privatisation of regulation, a withdrawal of supervision, and a loss of expertise, as well as dangers for fundamental rights and consumer protection. In fact, it was often not start-ups but well-established, large market players that were promoted. This runs counter to the objectives set out in the AI Act for regulatory sandboxes, which is why it is important to focus on actual implementation.
The case study of the Norwegian Data Protection Authority, on the other hand, which provides for no legal exceptions to requirements or fines but instead focuses on dialogue and cooperation, shows the importance of unwritten factors that cannot be defined by regulations. The Norwegian model works because of the strong position of the authority, mutual recognition, transparency, and public relations work, as well as the agency’s self-image, reflected in its selection of projects with a social benefit. The success of the sandboxes under the AI Act will therefore depend largely on the administrative, dialogue, and cooperation culture between supervisory authorities and providers in the respective Member States.
Sandboxes can only function as a tool for successful experimental regulation and, ultimately, better AI governance if they are designed to serve the objectives of regulatory learning, to close informational asymmetries, and to create knowledge about AI for lawmakers, policymakers, and society. They should function neither as a data protection discount nor as a cheap exemption from administrative fines, and they should not raise the risk of regulatory capture, risk-washing, or innovation theatre.
As mentioned at the beginning, this special issue will hopefully kick off a broad legal and interdisciplinary discussion, in dialogue between different jurisdictions as well as between practice and legal scholarship.
I would like to express my gratitude to all authors and reviewers for their invaluable contributions, as well as to Cambridge University Press for its highly beneficial support throughout the publication process.