How can existing experiences of regulatory experimentation inform AI sandbox design in Europe? This paper examines the ‘responsible AI’ sandbox of the Norwegian Data Protection Authority (DPA), a GDPR-oriented regulatory experiment established in 2020 that admits four projects per year. Through an interpretive policy analysis of documents (exit reports and workshop transcripts) and semi-structured interviews with officials, we explore how the Norwegian DPA approached its mandate of ‘helping with responsible innovation’, where it identified role conflicts, and what scope conditions and challenges it perceived around sandbox work. Sandboxing represented a ‘new way of working’ for the regulatory authority: in an idea-based intervention mode, the DPA moves from rule-based interventions as a watchdog to becoming a dialogue-oriented partner in solution-finding, a concretiser of ambiguous GDPR rules, and a keen learner from sectoral and technical experts. Critical engagement with our data suggests that sandbox design should not be reduced to technical and procedural questions. It requires regulators’ critical reflexivity about their ambivalent role and power relations in the regulatory experiment: how to strategically select relevant projects and issues, how to navigate budgetary constraints and the absence of follow-up, and how sandboxing affects more interventionist regulatory duties.