
Systemic Risk Management and the Constitutional Limits of Delegating Political Discretion: An Analysis of the DSA and the AI Act

Published online by Cambridge University Press:  12 December 2025

Andrea Palumbo*
Affiliation:
Centre for IT and IP Law (CiTiP), KU Leuven, Leuven, Belgium Imec, Leuven, Belgium

Abstract

This article examines the systemic risk management regimes introduced by the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act) through the lens of EU constitutional law. Under both frameworks, private actors assess and mitigate systemic risks to public and private interests, while the European Commission acts as the exclusive supervisory and enforcement authority. Drawing on the Meroni doctrine and the case law on Article 290 TFEU, the article argues that these regimes delegate political discretion – including, among other things, the authority to make normative decisions about contested public values. It explores the boundaries of the political discretion reserved to the legislature under primary EU law. Based on this analysis, the article demonstrates how vague legislative definitions and broad discretion in systemic risk management enable regulated entities and the European Commission to make political choices that should remain within the competence of the EU legislator. By scrutinising how systemic risk management may exceed permissible limits on the delegation of power, the article provides a framework for assessing the legality of this emerging regulatory model in EU digital legislation. It concludes with a call for empirical research and normative guidance on how systemic risk management should be conducted in practice.

Information

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

I. Introduction

A salient feature of the modern regulatory state is the reliance on risk management as a central organising principle.Footnote 1 The last decades have seen the emergence of regulatory models where policy objectives are framed in terms of risks, and the public administration has increasingly devolved responsibilities to the managerial governance of private actors.Footnote 2 In this context, risk management has been defined as a guiding principle for states to regulate commercial activities.Footnote 3 Risk-based regulation has also become a common thread running through different EU digital regulations, albeit with different manifestations,Footnote 4 through the imposition of obligations on regulated entities to manage the risks posed by their activities, such as automated personal data processing and the deployment of artificial intelligence.Footnote 5 Risk-based regulation is a symptom of a broader pattern of dealing with a “risk society” in the modern state,Footnote 6 through an approach centred on technocratic regulation that attempts to deliver more efficient, objective and fair governance.Footnote 7

Risk-based regulation in EU law can differ on multiple levels, such as how risk management obligations are formulated, the public and private interests they seek to protect, and the supervisory and enforcement architecture surrounding the performance of risk management. Among the different variants of the risk-based approach in EU digital legislation, one stands out for its peculiar features: systemic risk management in the Digital Services Act (DSA)Footnote 8 and in the Artificial Intelligence Act (AI Act).Footnote 9 With the entry into force of the DSA and the AI Act, the EU legislator has put in place a new framework to address the systemic risks posed by large online platforms and general-purpose AI models. Both regulations have introduced the new notion of systemic risk into EU digital legislation. Systemic risks can be posed only by certain intermediary services and AI models that disproportionately affect public and private interests, creating large-scale and systemic societal effects. The EU legislator thus defined a “tipping point” beyond which certain digital services and products acquire systemic relevance and warrant an ad hoc regime to mitigate their risks to society.

As is discussed in detail below, systemic risk management regimes present features that distinguish them from any other risk-based legislative framework in EU law, warranting specific considerations and raising their own questions and challenges.Footnote 10 While the reliance on risk regulation to pursue regulatory objectives is not a novelty as such, this paper builds on the observation that systemic risk management signals a shift to a new paradigm compared to other risk-based frameworks. The delegation, through the notion of systemic risk, of the responsibility to make normative choices on the interpretation and protection of vague and politically contested values has attracted the attention of legal scholarship. Existing scholarship has examined the reliance on the notion of risk to address the (political) challenges posed by online disinformationFootnote 11 and, more generally, by legal but harmful content.Footnote 12 Contributions have also discussed the consequences of framing policy and political questions in terms of (systemic) risks,Footnote 13 the implications of delegating normative choices to regulated entities in the institutional architecture of the DSA,Footnote 14 and how the blurring of the public–private divide in systemic risk management regimes challenges the rule of law.Footnote 15 The contributions mentioned above reveal a trend in existing scholarship to draw attention to, and raise questions about: (1) the peculiar features of systemic risk management as a regulatory approach, and (2) the delegation to regulated companies of responsibilities to address political and policy issues through risk management, under the direct supervision of the executive, the European Commission. These contributions look at the two points from different angles, but they are connected by a similar problem statement and by their analysis of the same regulatory model.

In the wake of such prior scholarship, this article aims to look at the systemic risk management obligations laid down in the DSA and the AI Act from a different angle. In particular, it looks at the transfer of regulatory responsibilities that they entail from the perspective of primary EU law. By doing so, it aims to fill a gap in the state of the art by providing an analysis of known problems associated with systemic risk management regimes, on the basis of an evaluative framework that is soundly grounded in selected principles and provisions of primary EU law. The article thereby aims to make three main contributions to the existing literature. First, it examines how these obligations delegate regulatory responsibilities to the private actors they are addressed to, and what role is entrusted to the European Commission as the exclusive supervisor and enforcer of systemic risk management obligations. Second, it assesses whether the imposition of the obligation to conduct systemic risk management can qualify as a transfer of the responsibility to exercise political discretion under primary EU law. Third, it evaluates whether such a transfer of responsibility, and the overall regulatory model enacted for systemic risk management, is compatible with primary EU law. In particular, it attempts to provide a comprehensive account of how the power relations and decision-making processes behind systemic risk management regimes challenge the EU institutional framework and the checks and balances intended by the Treaties for EU law-making and policymaking. The ultimate objective of this article is to provide a solid legal argument for framing the concerns voiced by the existing literature on legitimacy, democratic representation and separation of powers in systemic risk management regimes as risks of violating primary EU law.

II. The systemic risk management frameworks of the DSA and the AI Act

The DSA and the AI Act share the notion of systemic risk, and both establish ad hoc regimes for providers of the services and products that pose the highest risks to public and private interests. These regimes create a specific risk category that revolves around the adjective “systemic”: their rationale is that additional and more stringent provisions are warranted for providers of services and products that pose risks of a systemic character. It must be noted that the provisions on systemic risk management are only a part of the broader frameworks laid down in the DSA and the AI Act. They address only a specific subcategory of the services and products regulated under the two regulations. These subcategories are created on the basis of the specific risks posed to protected interests, as further explained below.

The systemic risk management framework of the DSA is laid down in its Chapter III, Section 5. It applies to a subcategory of intermediary service providers, namely very large online platformsFootnote 16 and very large online search engines (“VLOPSEs”).Footnote 17 Provided that their user base meets certain quantitative thresholds,Footnote 18 these providers are subject to additional due diligence requirements. These requirements include the obligation to assess and mitigate the systemic risks stemming from their services’ design, functionality or usage. The obligation must be complied with by conducting cycles of risk management exercises. First, Article 34 prescribes the identification and assessment of systemic risks, by carrying out assessments at least once a year. Second, based on the outcome of such assessments, Article 35 requires providers of VLOPSEs to subsequently adopt measures to mitigate the identified risks. Risk assessment and mitigation are subject to external auditingFootnote 19 and take place under the direct supervision of the European Commission,Footnote 20 in what has been described as a model of meta-regulationFootnote 21 or a polycentric regulatory model with elements of co-regulation.Footnote 22

The systemic risk management framework of the AI Act is set out in its Chapter V, Sections 1–3. It applies to providers of general-purpose AI models (“GPAIMs”)Footnote 23 with systemic risk,Footnote 24 a subcategory of GPAIMs that are designated as posing systemic risks due to their high-impact capabilities, as determined on the basis of a set of criteria laid down in the AI Act. A central concept for the classification of GPAIMs with systemic risk is thus that of “high-impact capabilities,” defined in the legislative text as “capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models.”Footnote 25 The classification as posing systemic risks depends on meeting at least one of two conditions: either having high-impact capabilities, or having equivalent capabilities or impact, based on a decision of the European Commission, ex officio or following a qualified alert from the scientific panel.Footnote 26 The obligations applicable to providers of GPAIMs with systemic risk are set out in Article 55(1)(b), which requires them, among other things, to assess and mitigate the systemic risks stemming from the development, placing on the market or use of their GPAIMs. As in the DSA, systemic risk management under the AI Act is directly supervised by the European Commission, which is conferred exclusive supervisory and enforcement powers.Footnote 27

Despite some differences, the systemic risk management frameworks of the DSA and the AI Act share common elements to the extent that they can be regarded as expressions of the same regulatory model.Footnote 28 The most relevant common features, which justify the inclusion of both frameworks within the scope of this paper, are as follows.

First, both frameworks revolve around a similar notion of systemic risk, which, despite a few differences, presents the same distinguishing features under both regulations. In particular, systemic risks arise not only in relation to qualitative and legally well-determined private interests such as fundamental rights, but also in relation to politically contentious and undetermined public interests such as public health, civic discourse, and the protection of “society as a whole.”Footnote 29 Both frameworks therefore lead to the substantiation of politically contentious and legally undefined public interests and values in the context of risk management.

Second, systemic risk management obligations only apply to services and products that are placed in the highest risk category within the risk taxonomy of both the DSA and the AI Act, in recognition of the fact that they can impact society on a large scaleFootnote 30 and affect a significant proportion of the EU’s population.Footnote 31 This scope of application relates to the systemic dimension of risks, and the thresholds for classifying a service or product as part of the highest risk category draw the line between an “ordinary” risk and a systemic risk.

Third, under both frameworks, regulated entities carry out systemic risk management under the supervision of the European Commission, with several avenues of public–private collaboration and informal dialogue (such as codes of practice), and with the participation of advisory bodies that support the European Commission in the performance of its duties. Under the DSA, once VLOPSEs have assessed and mitigated systemic risks, they are subject to independent auditing at least once a year. The European Commission has exclusive powers to supervise and enforce the provisions on systemic risk management, but the European Board for Digital Services also plays an important role by providing support to the European Commission, including in the analysis of reports and audit results, by adopting opinions on VLOPSEs,Footnote 32 and by publishing comprehensive annual reports on systemic risks and best practices for systemic risk mitigation.Footnote 33 Under the AI Act, the European Commission equally has exclusive power to supervise and enforce the provisions on systemic risk management,Footnote 34 with the advisory support of the scientific panel and of the European Artificial Intelligence Board. The scientific panel advises on whether a GPAIM presents systemic risks,Footnote 35 and more generally supports the European Commission in its supervisory and enforcement activities.Footnote 36 The European Artificial Intelligence Board advises and supports the European Commission in the implementation of, among others, the provisions on systemic risk management.Footnote 37

The three common elements highlighted above not only justify the examination of systemic risk management regimes as the enactment of a single regulatory model, but also distinguish them from other risk-based approaches in EU law. The reference to qualitative, value-laden and politically contentious interests to be protected through risk management marks a stark difference from most risk-based frameworks, such as those in the areas of financial and banking prudential supervision,Footnote 38 product safety,Footnote 39 regulation of chemical substances to protect human health and the environment,Footnote 40 biocidal products regulation, financial stability and cybersecurity.Footnote 41 While there are pieces of legislation that use the notion of risk to protect qualitative interests, such as fundamental rights,Footnote 42 systemic risk management is a rare example where undefined, politically contentious public interests also fall within the scope of protection.Footnote 43 Moreover, another unique feature is the attribution of direct and EU-wide supervisory and enforcement tasks to the European Commission, which has so far been confined to the assessment and mitigation of risks to more quantifiable values, as in the areas of competition law and financial and banking prudential supervision.

A caveat to the considerations made above is that there is still significant uncertainty about the interpretation of the provisions on systemic risk management. There is currently limited guidance on the meaning of systemic risks and on the interpretation of the related risk management obligations. Under the DSA, the European Commission has only issued guidelines on the mitigation of systemic risks to electoral processes.Footnote 44 Under the AI Act, the European Commission released guidelines on the scope of the obligations for GPAIMs,Footnote 45 which include guidance relating to the provisions on GPAIMs with systemic risk. However, these guidelines only address procedural aspects of the designation of GPAIMs with systemic risk and the scope of the provisions applicable to GPAIMs. They do not clarify what systemic risks are or how they should be assessed and mitigated.Footnote 46 The Code of Practice for GPAIMs addresses several aspects of the identification, assessment and mitigation of systemic risks, but does not constitute official guidance from the European Commission.Footnote 47 The absence of detailed guidance may be due to the recent adoption of both the DSA and the AI Act, and more clarification may be provided in the future as systemic risk management obligations are implemented and the European Commission takes supervisory and enforcement actions.
As the DSA entered into force approximately two years before the AI Act, the implementation of its systemic risk management framework is at a more advanced stage, whereas the relevant provisions of the AI Act only apply from August 2025.Footnote 48 The CJEU has also had the opportunity to describe the rationale of the systemic risk management provisions in its case law.Footnote 49 VLOPSEs published the first batch of reports on systemic risk assessment and mitigation, as well as audit and audit implementation reports, in November 2024.Footnote 50 It is not yet clear whether these reports have met regulatory expectations, but they offer invaluable insights into the processes and practices of VLOPSEs in assessing and mitigating systemic risks.

Based on the description of the three distinguishing features of systemic risk management regimes, as outlined above, Section IV looks at how these features entail the attribution and delegation of political discretion on the protection of public and private interests. This is in turn based on the examination, under Section III, of the constitutional limitations to the delegation of political discretion established under primary EU law.

III. Which limits are established by EU law regarding the delegation of political discretion?

1. Introductory considerations

The institutional framework of the EU is founded on the separation between the powers to adopt and to execute legislation. However, in modern public administration the executive increasingly adopts acts of general application, regulating aspects that the legislator does not have the time or expertise to address.Footnote 51 Under the Lisbon Treaty, the executive can adopt acts of general application, as also explicitly envisaged under Articles 263 and 277 of the Treaty on the Functioning of the European Union (“TFEU”), which recognise that agencies can adopt acts of general application.Footnote 52 The Treaties, as interpreted by the CJEU over the years, require that certain powers and responsibilities are reserved to the EU legislator and cannot be delegated to other EU institutions and bodies or to private entities. This is part of the more general notion of institutional balance, whose source has been located by the CJEU in Article 13(2) TEU.Footnote 53 Part of legal scholarship sees institutional balance as a constitutional principle of EU law,Footnote 54 but this has also been questioned by other scholars.Footnote 55 The CJEU explicitly recognised it as a principle that requires each of the institutions to exercise their powers with due regard for the powers of the other institutions.Footnote 56 In the institutional architecture envisaged by the Treaties, only the EU legislature can exercise political discretion,Footnote 57 and such discretion cannot be delegated to other entities. These limitations on the delegation of political discretion by the EU legislature were originally set through the so-called Meroni doctrine,Footnote 58 which has developed since the CJEU’s seminal judgment in Meroni v High Authority.Footnote 59

Discretion is the room for choice left to the decision-maker by some higher-ranking source or authority.Footnote 60 The Meroni doctrine reflects the view of the CJEU on the discretion that the EU legislature can delegate. While commonly invoked in legal scholarship to evaluate the limitations on the powers that can be conferred on EU agencies, especially after the famous Short Selling judgment,Footnote 61 the Meroni doctrine was originally developed in a case concerning the delegation of powers to private actors. It affirms general principles that can be applied to evaluate the legality of the delegation of powers from the legislature to different entities, whether public or private. The principles developed in Meroni were subsequently partially codified in the Lisbon Treaty,Footnote 62 and Article 290 TFEU now lays down the conditions for the delegation, from the EU legislature to the European Commission, of the power to adopt non-essential rules.

The Sections below outline the limitations imposed by the Treaties on the delegation of discretion by the EU legislature, as a preparatory step for describing, in Section IV, the implications of these limitations for systemic risk management regimes. Section 2 aims to identify the criteria developed by the CJEU to draw the contours of the political choices that cannot be delegated by the EU legislature, while Section 3 goes into more depth on the specific category of choices pertaining to the restriction of fundamental rights.

2. The distinction between political and administrative discretion in the case law of the CJEU

The first step is to identify the type of political discretion, and the type of policy choices, that are reserved to the EU legislature in the balance of institutional powers sought by the Treaties. While ultimately stemming from the Treaties, the factors that define political discretion have been almost entirely outlined by the CJEU in its case law over the years. This case law relates to the development of the Meroni doctrine and to the interpretation of the term “essential elements” of a legislative act under Article 290 TFEU. A combined reading of the relevant judgments makes it possible to draw a picture, albeit with some blurred boundaries, of what choices political discretion encompasses under the Treaties.

Under the Meroni doctrine, the paramount feature of political choices is the exercise of wide discretion to balance conflicting interests and reconcile various objectives, ultimately taking decisions that go beyond a mere technical assessment based on objective criteria. That said, there can be grey areas between what constitutes a technically complex assessment and what constitutes political discretion,Footnote 63 and the distinction can be more or less clear on a case-by-case basis. According to what the CJEU stated in Meroni v High Authority and Short Selling – the latter being regarded as the most recent standard on the limits to the delegation of powers from the legislature to other EU bodiesFootnote 64 – a political choice involves a balancing exercise between various objectives,Footnote 65 and thus goes beyond the mere execution of clearly defined powers. The CJEU emphasised in Meroni v High Authority that discretion can be political when it is not bound by, and cannot be subject to strict review in light of, “objective criteria”.Footnote 66 The element of objectivity, and the amenability to review in light of objective criteria, is an essential element that the CJEU also stressed in Short Selling.Footnote 67 When this element is lacking, the discretion is so wide that it goes beyond the mere administrative execution of political determinations and includes political determinations themselves. Should delegated powers enjoy such wide discretion, the delegation would constitute an actual transfer of responsibility, whereby the choices of the delegator are replaced by the choices of the delegate.Footnote 68

The case law of the CJEU on Article 290 of the TFEU is also informative to understand the perimeter of political discretion.Footnote 69 When the CJEU was called upon to define the meaning of “essential element” of a legislative act, it referred to “political choices”,Footnote 70 thus recognising that essential elements are the result of a political choice by the EU legislature.Footnote 71 Importantly, the definition of political discretion in this area resonates with the one given under the Meroni doctrine, as political choices are those that require “the conflicting interests at issue to be weighed up on the basis of a number of assessments.”Footnote 72 This criterion is vague enough to render the distinction between legislative and non-legislative matters difficult in practice, but further delving into the case law of the CJEU provides additional guidance. The limits of political discretion change depending on the subject matter at hand, and its sensitivity to constitutional values and high politics, such that it would be different in the area of border security as opposed to the common agricultural policy.Footnote 73 However, it is possible to single out three core features that characterise political choices and essential elements of legislation.

First, political choices are strictly interconnected with an essential objective of the relevant policy area, and thus cover aspects that define or implement central elements of that objective.Footnote 74 When a provision gives concrete expression to how a policy objective is pursued, it falls within the remit of the normative choices made by the legislature in determining how the objectives of the Treaties should be pursued. This criterion, however, leaves room for significant uncertainty in deciding on a case-by-case basis when an element is strictly connected to policy objectives.

Second, political choices involve the weighing up of interests and the reconciliation of different objectives,Footnote 75 and in turn entail settling controversial problems on which different opinions have been or could be expressed by different parties.Footnote 76 This aspect goes to the core of one of the functions of political deliberation, i.e., reconciling different views to address controversial issues. This is also consistent with the case law in Meroni and Short Selling, where the CJEU stressed the importance of the amenability of delegated powers to judicial review based on objective criteria. The objective character of administrative decision-making thus precludes the delegation of politically controversial decisions.

Third, the CJEU identified specific elements that, by their nature, fall within the scope of the political discretion reserved to the legislature. Among these is the choice to allow interference with fundamental rights in pursuit of a conflicting objective, as well as choices on the conduct of international relations on behalf of the EU.Footnote 77 The fact that certain restrictions on fundamental rights should be laid down in legislation is of central importance for the purposes of this article. Therefore, this category of political choices is discussed in detail below.

3. Specific instances of political choices: interferences with fundamental rights

More precise guidance on the perimeter of political discretion comes from the judgments in which the CJEU clearly indicated which choices require, in relation to the facts of the proceedings, political discretion. This Section focuses specifically on the choice to interfere with fundamental rights. There are two judgments in which the CJEU held that when a provision entails interferences with fundamental rights, it may relate to an essential element of a legislative act and constitute a political choice.Footnote 78 The first, seminal judgment, in Case C-355/10, with which the CJEU aligned in the subsequent judgment, provides the most elaborate reasoning in this regard. The case concerned an action for annulment of a Council decision regarding surveillance of the external sea borders. In particular, the impugned decision covered the sensitive matter of the enforcement powers conferred on border guards. It laid down rules defining the powers to adopt measures against ships or other seacraft, including making seizures, apprehending persons and conducting apprehended persons to a specific location.

The CJEU had to assess whether these rules, which entailed significant interferences with the fundamental rights of the persons subject to the measures, covered essential elements reserved to legislative acts. The CJEU found that the provisions conferring such powers on border guards entailed interferences with fundamental rights to an extent that required the involvement of the EU legislature.Footnote 79 In other words, both the fact that fundamental rights were restricted and the gravity of such restrictions led to the qualification of the provisions as reflecting political choices that should be made by the legislature. It is worth noting that the CJEU considered it a decisive element that the contested decision conferred powers to impinge, if need be, on fundamental rights, and that the conferral of such powers is in itself problematic, because their exercise requires authorisation.Footnote 80 It is also important to note that whether fundamental rights would be restricted in practice was not relevant: it sufficed that powers were conferred that could lead to such an outcome.

This judgment provides objective criteria, i.e., the effects on fundamental rights, to define the boundaries of political discretion, as well as a clear qualification of certain choices as reserved to the legislature. However, it also introduces an element of uncertainty by indicating that the extent of the interference with fundamental rights is a relevant factor in concluding that the involvement of the EU legislature is required. While it is evident that the degree of interference was grave in the case at hand, for future applications of the same criterion the question arises as to what threshold makes an interference grave enough to justify a legislative reserve. Sufficiently invasive interferences would necessarily include those that do not impinge upon the essence of fundamental rights,Footnote 81 given that the political choice must pertain to a situation where fundamental rights can be restricted to protect a conflicting interest, as in the facts of the judgment. Therefore, it is not clear how a hierarchy of interferences can be established to distinguish between essential and non-essential elements of a legislative act. Moreover, another question is whether the nature of the fundamental right restricted is relevant for the assessment, i.e., whether restricting certain fundamental rights rather than others makes a choice more likely to be considered political. In the context of the judgment, coastguards could take measures restricting the liberty of persons, one of the most important fundamental rights recognised by constitutions worldwide. It is also not clear whether, for the purposes of identifying political choices, there is a hierarchy among fundamental rights, one that treats certain rights as more politically implicated than others.

In conclusion, at least for some restrictions of fundamental rights, the normative choices on whether a restriction is justified in a given case, and how, are reserved to the EU legislature. This includes all the underlying considerations on the balancing between conflicting objectives and interests, on the one hand, and fundamental rights, on the other. In particular, it is the balancing against conflicting objectives and interests that may justify the imposition of an interference with a fundamental right, and that thus goes to the core of the exercise of political discretion.

IV. Political choices in systemic risk management: giving shape to a legitimacy concern

1. Introduction: political discretion in systemic risk management

This Section examines how systemic risk management may be performed in ways that are incompatible with the limitations imposed by the Treaties on the delegation of political discretion by the EU legislature. By doing so, it aims to give legal grounding to the legitimacy issues discussed to date in relation to systemic risk management regimes. This Section builds on the description in Section III of the choices that the CJEU has identified as falling within political discretion under primary EU law.

The relevance of the case law outlined in Section III for the analysis of systemic risk management regimes can be established as follows. First, Article 290 TFEU not only imposes limitations on the delegation of responsibilities to the European Commission as the regulator overseeing systemic risk management, but is also a relevant benchmark for evaluating the delegation of responsibilities to private actors. While Article 290 TFEU applies to the specific case of the European Commission supplementing or amending elements of legislative acts, it also draws the demarcation line between legislative and non-legislative matters.Footnote 82 In other words, it provides guidance to settle the long-standing question of distinguishing in EU law between “what is political and what is technical.”Footnote 83 This demarcation is relevant for any delegation of powers by the EU legislature, including when supervised private actors are required under a legal obligation to exercise political discretion in order to pursue regulatory objectives. Second, while legal scholarship has, over time, mainly discussed the Meroni doctrine in relation to the delegation of powers to EU agencies, the doctrine also affirms more general principles on which choices reflect political discretion and should be reserved to the EU legislature. In particular, in Meroni v High Authority the CJEU assessed the legality of a delegation of powers from the High Authority to private actors. The issue of institutional balance and the delegation of powers was thus addressed for the first time in the history of EU law in relation to the transfer of responsibilities to a non-public body.

The Sections below describe how systemic risk management regimes entail the attribution and delegation of normative decision-making responsibilities for the protection of public and private interests. In particular, they look at two features of these regimes: (1) private regulated entities are in the driving seat of systemic risk management and are entrusted with the task of making normative choices on the interpretation and protection of qualitative and politically contested interests and values; (2) as the designated supervisor and enforcer, the European Commission can play a crucial role in defining the interpretation and protection of such interests and values. By describing these two modalities for the attribution of regulatory responsibilities, the Sections below explain whether, and how, the DSA and the AI Act have shifted to systemic risk management cycles certain determinations that should be reserved to the legislature.

2. Defining the undefined: discursive construction of politically laden concepts outside the legislative process

The fact that the provisions on systemic risks in the DSA are vaguely formulated has been acknowledged or criticised by several scholars and freedom of expression activists.Footnote 84 It has also been argued that leaving the DSA systemic risk management regime as open-ended as possible could be a deliberate choice of the EU legislator, aimed at providing the flexibility to address the diverse regulatory needs that could arise in the implementation phase.Footnote 85 The same considerations apply to the systemic risk management regime of the AI Act, which is similarly open-ended. As with any vague and abstract legal provision, the actors involved in the implementation of systemic risk management provisions are effectively entrusted with the responsibility to determine how key concepts should be interpreted in practice. This already marks a delegation of responsibilities by the legislature, as the decision not to define aspects of a legal framework in the legislative act delegates the power and duty to define those aspects at a lower level.

Abstract and open-ended provisions are not uncommon in EU law. In the area of media law and platform regulation, the CJEU has held that provisions formulated in open terms may be necessary to keep pace with changing circumstances,Footnote 86 and to give regulated entities wide discretion to determine which measures best achieve regulatory objectives in practice.Footnote 87 This at times takes place by relying on risk management obligations. Risk regulation has been widely used in fast-paced and complex sectors, and has inevitably led to the attribution of normative choices to regulated entities. These include choices on the determination, balancing and prioritisation of objectives and values,Footnote 88 and on the conceptualisation and assessment of what risks are and how they can affect protected objectives and values.Footnote 89 As noted above, it is also not uncommon to require risk managers to protect qualitative interests.

However, a distinctive feature of systemic risk management regimes is the inclusion of societal and public interests that are qualitatively difficult to define because they are politically contested and highly dependent on subjective evaluations.Footnote 90 This is the case for interests such as civic discourse, public health, electoral processes, public security and many other societal interests that may be deemed worthy of protection through systemic risk management, such as sustainability.Footnote 91 These interests are difficult not only to quantify but also to define, as they are not clearly established legal concepts in EU law, but rather politically laden programmatic values. This marks a difference from fundamental rights, which are well-defined legal concepts and entitlements for which extensive interpretive guidance can be found in legislation, case law and doctrine.

Despite the existence of some relevant legal formulations, such as for public health in Article 168 TFEU, most of the interests to be protected through systemic risk management have usually found concretisation in political decision-making fora rather than in the courtroom and legal practice. The issue therefore lies not only in the absence of a legal definition for these terms, but also in the intrinsic difficulty of objectively defining values that are largely subjective and contested. For instance, there may be different views on what functional civic discourse is.Footnote 92 To make this issue more tangible, consider the long-standing debate about addressing the risks posed by disinformation and misinformation to civic discourse. Different opinions can be expressed on the potential regulatory solutions, diverging on several points such as whether harmful content should be restricted, how it should be restricted, and whether alternative paths should be prioritised, including raising awareness and digital literacy. Besides the legal constraints on permissible interferences with freedom of expression, for certain questions there is no “right” or “wrong” answer, and they should be settled based on political orientations. Existing scholarship has acknowledged that systemic risk management is unknown territory,Footnote 93 and that it lacks methodological consensus on how it should be conducted.Footnote 94 While methodological guidance and consensus among the actors involved may be achieved in the future, at present a framework still needs to be developed, and normative choices must be made to that end.

The decision to leave certain aspects undefined, combined with the difficulty of formalising the values referred to in systemic risk management regimes, signifies the delegation of political discretion to the actors involved in risk management exercises, primarily regulated entities and the European Commission. These choices encompass several aspects, from the definition of protected values, to the methodologies for risk assessment, to the setting of standards of protection through risk mitigation measures, including the balancing of conflicting values in practice. In line with other pieces of EU legislation, such as the GDPR, once risks and their impact on protected values have been conceptualised, risk acts as a proxy to protect and balance constitutional interests.Footnote 95 However, as systemic risk management regimes, unlike other risk-based frameworks, also aim to protect politically controversial values, the discretion attributed to risk managers is political in nature. On the basis of the considerations made in Section III above, there are three main reasons to support this argument.

The first relates to the element of objectivity. Both the case law on Article 290 TFEU and the Meroni doctrine affirm that delegated decision-making must be amenable to judicial review based on objective criteria. As argued in this Section, objectivity is precisely what is lacking when it comes to the interests to be protected through systemic risk management. For instance, objectivity in how risks to functional civic discourse are assessed is hardly achievable at present. Discretion in systemic risk management is very wide and requires the reconciliation of different constitutionally and politically relevant objectives and values with little legislative guidance, which sits uneasily with the idea of clearly defined executive powers and objectively verifiable discretion.

Second, the decisions to be made in systemic risk management appear to be strictly interconnected with the core objectives pursued by the EU legislator with the DSA and the AI Act, which is another feature of political discretion, as described above.Footnote 96 If the policy objective is to protect public and private interests against the risks posed by VLOPSEs and GPAIMs, the actual concretisation of this objective is almost entirely left to systemic risk managers. This follows from the fact that there are several ways in which systemic risk management could be carried out, including several ways to understand both the protected interests explicitly outlined in EU legislation and those still to be identified. Systemic risk managers are left with wide discretion both on the “what” of systemic risk management, i.e., the definition of the interests to be protected, and on the “how.” For instance, including sustainability among protected interests, and deciding to what extent regulated entities should include sustainability concerns in their risk management processes, is a central element of how the underlying policy objective is effectively pursued. Systemic risk management can be performed in different ways depending not only on how systemic risks evolve but also on the political priorities existing at a given time. For instance, the regulatory approach to disinformation can change depending on the understanding of what a functional democracy is, on the balancing between freedom of expression and public interests, on the perception of geopolitical threats, and so on.

Third, regulated entities decide whether and how fundamental rights can be restricted to mitigate systemic risks. As explained above,Footnote 97 sufficiently grave interferences with fundamental rights should find their basis in legislation, and cannot be decided by actors outside the EU legislature on the basis of a delegation of powers. This is discussed in detail in the Section below, which explains how systemic risk managers may restrict fundamental rights when adopting risk mitigation measures.

3. Ruling through the middleperson: restrictions to fundamental rights in systemic risk mitigation

A peculiar feature of systemic risk management regimes is that they may require regulated entities not only to change their own conduct, but also to govern the conduct of third parties, which can entail interfering with the fundamental rights of the latter. This Section examines how this feature can involve the exercise of political discretion through the discursive framing of the justification(s) for restrictions of fundamental rights and the related balancing exercises. The same feature can also raise similar problems from different angles, including the rule of law and the legality principle enshrined in Article 52(1) of the Charter of Fundamental Rights of the European Union.Footnote 98 The latter requires that any interference with fundamental rights be “provided for by law,” which may not be the case when choices on the justification for interferences are made in risk management cycles. While this Section, in line with the scope of the paper, only looks at the limitations on the exercise of political discretion, it acknowledges that the delegation of decisions involving restrictions of fundamental rights is a multi-faceted problem that creates points of friction with different principles and provisions of primary EU law.

The concrete modalities in which systemic risk management can interfere with fundamental rights differ depending on whether the systemic risks are to be mitigated on online platforms, search engines or GPAIMs. The most evident examples of regulated entities having to interfere with fundamental rights relate to the mitigation of systemic risks on online platforms. In particular, there are several ways in which systemic risk mitigation measures on online platforms and search engines could interfere with the freedom of expression of online users. In mitigating systemic risks, VLOPSEs may need to moderate both illegal and legal content. This is contemplated by the legislative text and the recitalsFootnote 99 of the DSA, which mention the need to carry out content moderation and adapt recommender systems.Footnote 100

Mitigating measures can include, among others, the removal, demonetisation and restriction of the visibility and dissemination of illegal and legal online content.Footnote 101 Similar measures are contemplated in relation to content qualifying as disinformation under the Code of Conduct on Disinformation.Footnote 102 When these measures are adopted to restrict harmful but legal online content, the actors involved in systemic risk management play a crucial role in determining the justification and basis for the interference with freedom of expression. In the case of illegal content, the content is qualified as illegal on the basis of another legal act, usually a legislative act. The determination of the boundaries of free speech in relation to this type of content has thus already been made by the national or EU legislator, and VLOPSEs implement measures to ensure that the content is moderated in line with its qualification as illegal. By contrast, legal content that is deemed to generate or contribute to systemic risks is restricted or otherwise regulated for the sole reason that the actors taking decisions on systemic risk management, primarily regulated entities, believe it is appropriate to intervene.Footnote 103 While any interference ultimately stems from a legal obligation, the vague formulation of the provisions on systemic risk management, and the intrinsic difficulty of objectively defining and assessing risks to certain values, leave systemic risk managers with very wide discretion to decide whether fundamental rights can or cannot be restricted. These decisions are taken by relying on systemic risk as a proxy, and thus most of the determinations on what makes an interference with freedom of expression justified and proportionate are made within risk assessment and mitigation processes. The baseline framework for these determinations, as laid down by the EU legislator, is limited to a generic description of systemic risks and references to flexible concepts such as reasonableness and proportionality.

A clear example where risk is used as a regulatory tool to guide constitutional determinations for harmful but legal content is disinformation. In order to tackle amplification-based harms, online content that qualifies as disinformation is not regulated as problematic per se but only when it generates systemic risks, for instance due to the circumstances of its dissemination. The difference between harmful and non-harmful online content thus lies in the correlation with systemic risks, and it is therefore delineated almost exclusively through systemic risk assessments by regulated entities in collaboration with public actors. As a consequence, key aspects of the question of which grounds can justify interferences with fundamental rights on online platforms are determined at a level lower than EU legislation. It must be noted, however, that the obligation of VLOPSEs to restrict legal content under the DSA, and the ability of the European Commission to require them to do so, result from a specific interpretation of Article 35 of the DSA. European courts have never confirmed this interpretation, and it has been argued that Articles 34 and 35 of the DSA should be seen as a “limited risk management” regime that cannot be interpreted as requiring the imposition of new content-specific restrictions.Footnote 104 While it is too early to confirm which interpretation is correct, even in a limited risk management regime the adoption of content-neutral systemic risk mitigation measures can lead to interferences with freedom of expression and entail the regulation of the conduct of online users. For instance, it may be difficult to design risk mitigation measures for the systemic risks posed by disinformation that do not somehow discriminate against certain content, and VLOPSEs may decide to restrict legal contentFootnote 105 as part of their systemic risk mitigation strategies even if that is not required by the European Commission. A reading of the transparency reports on systemic risk mitigation practices implemented by major VLOPs confirms that restrictions of legal content are already being reported as systemic risk mitigation measures.Footnote 106

The de facto entrustment of the responsibility to decide whether the enjoyment of fundamental rights should be restricted to mitigate systemic risks is more evident under the DSA. However, it cannot be excluded that a similar situation could materialise under the AI Act. As GPAIMs are not online fora where users interact, post and react to content, they do not play a role of intermediate regulator of the conduct of users comparable to that of VLOPSEs. However, GPAIMs can be used for countless use cases, and there may be scenarios where providers are called upon to mitigate systemic risks by adopting measures that interfere with the fundamental rights of third persons. For instance, it has been observed that there are functional analogies between certain GPAIMs and search engines.Footnote 107 LLMs can play an important role in enabling or hindering access to information for a large number of users, either on their own or when integrated in a search engine.Footnote 108 Existing scholarship has acknowledged that search engines can have important implications for freedom of expression,Footnote 109 including in relation to the ability to seek and impart information through the search results that they provide. For example, the Committee of Ministers of the Council of Europe stated that filtering and de-indexation of online content by search engine providers can lead to a violation of the freedom of expression of providers and users of such content.Footnote 110 These considerations apply equally, mutatis mutandis, to certain GPAIMs. Beyond the analogies with search engines, GPAIMs can be used in several applications to support, or entirely take, decisions that can affect fundamental rights. In this regard, systemic risk management influences the design of tools that can exert their own normative power when used to perform decision-making.Footnote 111

The crucial finding is thus that the EU legislator has relied on the risk management expertise of regulated entities to decide when and how it is appropriate to restrict the enjoyment of fundamental rights. Regulated entities decide on the cases where it is justified to restrict fundamental rights to protect a conflicting value, such as civic discourse or public health. This type of delegation goes beyond the concretisation of politically contested values by regulated entities to govern their own conduct. As explained above,Footnote 112 the CJEU held that providing for certain restrictions of fundamental rights should be reserved to the political discretion of the EU legislator. As is clear from one of the relevant judgments, defining the modalities in which fundamental rights can be restricted entails political choices falling within the responsibilities of the EU legislature.Footnote 113 This opens the door to the argument that the DSA, and potentially also the AI Act, delegate political discretion to regulated entities and regulators by conferring wide discretion in making choices that impact the enjoyment of fundamental rights. The delegation of political discretion is most evident when regulated entities and regulators de facto make the most crucial determinations on the justifications for restrictions of fundamental rights, as in the case of harmful but legal content that poses systemic risks.

This qualification as a political choice is subject, however, to a sort of de minimis rule regarding the degree of the interference.Footnote 114 While it is not clear what the relevant threshold is, there are elements to argue that the involvement of the EU legislature is needed for the type of restrictions that could take place under the DSA. Content moderation decisions on legal content constitute a serious interference with freedom of expression, especially when they lead to content removal. In addition to the individual cases where freedom of expression is restricted, the cumulative, collective and societal effects are also relevant to assessing the gravity of the interference. The systematic application of content moderation policies can generate chilling effects that deter online expression,Footnote 115 as well as affect democracy and civic discourse at a societal level.

The main objection to the argument that systemic risk management can lead to political choices is that the power to restrict fundamental rights has been conferred by EU legislation directly, which not only provides indications for identifying systemic risks but also requires systemic risk mitigation measures to be reasonable, proportionate and considerate of the impact on fundamental rights.Footnote 116 A closer look at how systemic risk mitigation may unfold in practice, however, shows that this objection does not hold. Given the multi-faceted, politically laden and subjectively sensitive nature of the protected interests, there is no single way to decide in which cases systemic risks justify restricting fundamental rights. After all, risk managers are the ones who assess and mitigate systemic risks, and by deciding on their gravity they can also decide when an interference is justified. It could equally be said that demoting disinformation, merely labelling content as disinformation, or not intervening at all are appropriate systemic risk mitigation measures, and the decision rests solely with risk managers. The notions of reasonableness and proportionality are also too elastic in this regard, as they can bend depending on the subjectively perceived gravity of systemic risks.

4. Technocracy over politics: systemic risk management as a regulatory tool

The delegation of responsibilities underlying systemic risk management regimes is not only about different actors making decisions, but also about different procedures, terminologies and methodologies for solving regulatory issues. Like other risk-based frameworks, systemic risk management regimes reflect the choice to defer certain questions to depoliticised rationality.Footnote 117 The EU legislator identified systemic risk management as the best instrument to pursue the relevant regulatory objectives of the DSA and the AI Act. The consequence of this legislative choice is that the related political and policy questions will be framed and addressed within risk management processes, as opposed to political discourse and legislative processes. It is thus a shift to an entirely different way of identifying, analysing and addressing problems.

The outsourcing of regulatory tasks to private, commercially oriented actors entails risks of corporate capture and conflicts of interest that can be found in several legal frameworks,Footnote 118 even those that are not risk-based.Footnote 119 The outsourcing of the types of tasks that systemic risk management under the DSA and the AI Act requires also carries the risk of depoliticising decision-making, disguising political choices under seemingly neutral technical processes. In relation to the DSA, Griffin describes how the reliance on risk as a regulatory tool obscures political conflicts around platform regulation and privileges certain forms of technical knowledge in understanding and dealing with policy issues.Footnote 120 Discussing how disinformation should be tackled in a democratic legislative process may lead to a different approach, and eventually a different outcome, than a situation where solutions to the same problems are sought by private risk managers. Framing policy issues in terms of risks also puts the professionals with the relevant risk management expertise in the driving seat,Footnote 121 and requires any contestation to take place by relying on the same risk management logic and terminology.

In other words, systemic risk discourse governs the framing of policy issues and thus leads to systems of knowledge construction rooted in the relevant scientific and professional domain.Footnote 122 From a Foucauldian perspective, risk management can be seen as a constitutive practice of neo-liberal governmentality that structures and legitimates forms of control over individuals and institutions.Footnote 123 The differences between risk discourse and legislative processes in making determinations on policy questions further qualify the nature of the devolution of power and responsibilities from the EU legislature to systemic risk management processes. In particular, they increase the degree of discretion delegated to risk managers. It is not only a delegation to them as decision-makers, but also to their methods and internal procedures. This signifies not only the opportunity for regulated entities and regulators to make systemic risk management “their own” by internalising it, but also higher hurdles to contestability for anyone outside the relevant area of expertise.

5. Public regulation of private ordering: the European Commission as meta-regulator

While regulated entities are in the driving seat, the European Commission plays the important role of overseeing whether systemic risk management is carried out in line with legislative requirements. Reflecting a regulatory model often referred to as “meta-regulation,”Footnote 124 in systemic risk management regimes the internal self-regulation of companies is supervised and harnessed to pursue public policy objectives.Footnote 125 The European Commission has at its disposal an array of supervisory and enforcement powers to influence, both ex ante and ex post, how regulated entities assess and mitigate systemic risks.Footnote 126 This can take place through informal collaboration and dialogueFootnote 127 – for instance by promoting codes of conduct or communicating with regulated entitiesFootnote 128 – or through more formal supervisory and enforcement action.

The issue of public–private entanglement in content moderation is well known in the academic literature.Footnote 129 Most of this literature, however, predates the DSA, which formalised the public–private co-generation of risk management policies in a new model. As mentioned above, the role of the European Commission as EU-wide enforcer is an important feature that defines how normative choices are made. As systemic risk assessment and mitigation is the avenue where politically and constitutionally relevant choices are made, the European Commission as meta-regulator has the power and responsibility to steer how these choices are made.

This role of the European Commission completes the picture that the previous Sections started to draw, one in which systemic risk management has shifted power and decision-making away from legislative processes and judicial adjudication, into the hands of private technocratic expertise and executive power. For the purposes of this paper, the meta-regulatory function of the European Commission is important for two reasons. First, it is a piece of the puzzle showing that regulated entities do not take decisions in isolation, but in collaboration with and under the supervision of a public body. The most important implication in this regard is that the European Commission is equally given political discretion over the choices to be made through systemic risk management, at least as regards its contribution to decision-making alongside regulated entities. Second, like regulated entities, the European Commission is not part of the EU legislature; it is the central institution of the EU’s executive branch. In the institutional architecture of the EU, it plays a different role from the legislature and has both a different democratic legitimacy and different connections with political processes. For this reason, in the spirit of institutional balance, the Treaties require that certain policy choices be reserved to the legislature and not be delegated to the European Commission, as explicitly required by Article 290 TFEU.

The delegation of political discretion to the European Commission introduces considerations additional to those already made for the devolution of responsibilities to private actors. In particular, it affects the separation and distribution of powers among the institutions of the EU, and therefore the principle of institutional balance enshrined in Article 13(2) TEU.Footnote 130 It is important to note that the attribution of extensive direct supervisory and enforcement responsibilities to the executive branch or to independent agencies is not a novelty in the EU regulatory architecture; this has been the case in other areas, from financial regulation to antitrust. What makes this a novel institutional development is the nature of the decisions that systemic risk managers are called to make, as discussed in the Sections above.

V. Conclusions

The considerations made thus far lead to two principal findings. The first is that the systemic risk management regimes in the DSA and the AI Act confer political discretion upon systemic risk managers, primarily regulated entities and the European Commission. The precise nature and extent of this discretion depend on how risks are assessed and mitigated in practice, as is the case when systemic risk mitigation can lead to interferences with fundamental rights. Nevertheless, systemic risk management necessarily involves decisions concerning the interpretation and protection of public interests, which, as illustrated above, require the making of political choices. The second is that this delegation of political discretion is not compatible with EU constitutional law, under which such discretion is reserved to the EU legislature. Although there is no explicit provision prohibiting the delegation of tasks to private actors, such a limitation follows logically from the existence of a legislative reserve for political choices. Further, the delegation of tasks to regulated entities also involves a transfer of responsibilities to the regulator that closely supervises those entities, in this case the European Commission.

At first glance, these conclusions appear to present a purely critical stance. However, the purpose of this article is both to critique and to provide insights that can guide the future implementation of systemic risk provisions. As part of the criticism, in line with the concerns already raised in previous scholarship,Footnote 131 this paper draws attention to the political nature of the choices that systemic risk managers are required to make, and to the consequences of relying on risk management as an area of expertise to inform these choices. It seeks to advance the existing debate by offering clear legal grounding for the concerns surrounding the delegation of political discretion. It does so by describing what a political choice is under primary EU law, delineating its contours on the basis of the consolidated case-law of the CJEU.

By relying on the constitutional limitations on the delegation of political discretion as an evaluative framework, this contribution attempts to provide some clarity on what risk-based regulation can and cannot do and, ultimately, to evaluate the legality of systemic risk management as a regulatory approach. It must be acknowledged, nonetheless, that delineating the perimeter of political discretion still presents uncertainties in some respects. The CJEU has never adjudicated on political discretion in risk-based regulation, and the principles affirmed in indirectly relevant judgments have therefore been translated and applied, in their abstract formulations, to the provisions under analysis. As a forward-looking reflection, this paper does not intend to dismiss systemic risk management as a regulatory tool altogether. Digital services and products do pose systemic risks that need to be addressed, but certain determinations should be left to legislative deliberation. The attribution of certain choices to the legislature may present its own challenges, not only because the law may be unable to adapt to changing circumstances, but also because certain decisions would be exposed to political bias. This paper does not discuss the merits of attributing discretion to political bodies as such; its focus is on the limits established in EU law as interpreted by the CJEU. It does not evaluate the merits of these limits, but builds on them to ground the legitimacy concerns pertaining to systemic risk management regimes.

The findings outlined in this paper can provide guidance on which choices should be avoided in systemic risk management. They also invite reflection on the role played by systemic risk management, and on the need to increase the legitimacy of the choices made therein. Systemic risk management regimes are a relative novelty in the DSA, and an absolute novelty in the AI Act. Empirical research is needed to understand how normative choices are made through the assessment and mitigation of systemic risks, in combination with doctrinal evaluative and normative research to guide systemic risk management in the future. In this regard, this contribution also calls for more research on the interplay between EU constitutional law and risk-based regulation, to understand how the way risk management is conducted in practice challenges the institutional balance and the checks and balances foreseen by the Treaties. While this article focuses on the DSA and the AI Act, the findings on the interplay between risk-based regulation and the EU constitutional framework may have relevance beyond these two regulations and beyond the field of digital law more generally. As noted above, risk management and reliance on the expertise of regulated entities to pursue public policy objectives are a trend of the modern regulatory state. Understanding the limits on the delegation of political choices under EU law is an essential first step in informing the design of risk-based regulation.

References

1 J Black, “The Emergence of Risk-Based Regulation and the New Public Risk Management in the United Kingdom” (2005) Public Law 510, 510; M Power, The Risk Management of Everything: Rethinking the Politics of Uncertainty (Demos, 2004); U Beck, Risk Society: Towards a New Modernity (M Ritter tr, Sage Publications 1992).

2 J E Cohen and A E Waldman, “Introduction: Framing Regulatory Managerialism as an Object of Study and Strategic Displacement” (2023) 86(3) Law and Contemporary Problems.

3 C Hood, H Rothstein and R Baldwin, The Government of Risk: Understanding Risk Regulation Regimes (Oxford University Press 2001).

4 G De Gregorio and P Dunn, “The European Risk-based Approaches: Connecting Constitutional Dots in the Digital Age” (2022) 59(2) Common Market Law Review 473–500.

5 M E Kaminski, “Regulating the Risks of AI” (2023) 103 Boston University Law Review 1347.

6 W Leiss, U Beck, M Ritter, S Lash and B Wynne, “Risk Society, Towards a New Modernity” (1995) 19(4) Canadian Journal of Sociology / Cahiers Canadiens de Sociologie 544.

7 B Hutter, “What Makes a Regulator Excellent? A Risk Regulation Perspective” (June 2015), paper prepared for the Penn Program on Regulation’s Best-in-Class Regulator Initiative.

8 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.

9 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L2024/1689.

10 For an analysis of the distinctive features of systemic risk management as a regulatory approach, see: A Palumbo, “Charting systemic risk management as a regulatory paradigm in EU digital legislation: features, challenges and directions for future research” (2025) Technology and Regulation (forthcoming).

11 G De Gregorio and O Pollicino, “The European Constitutional Way to Address Disinformation in the Age of Artificial Intelligence” (2025) German Law Journal 1.

12 A Palumbo, “A Medley of Public and Private Power in DSA Content Moderation for Harmful but Legal Content: An Account of Transparency, Accountability and Redress Challenges” (2024) 15(3) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 246.

13 R Griffin, “Governing Platforms through Corporate Risk Management: The Politics of Systemic Risk in the Digital Services Act” (2025) European Law Open 1; M Loi, M Fabbri and A Ferrario “Regulating the Undefined: Addressing Systemic Risks in the Digital Services Act (with an Appendix on the AI Act)” (2025) 38(2) Philosophy and Technology 81.

14 Ibid.

15 A Palumbo and C Ducuing, “The Blurring of the Public-Private Dichotomy in Risk-based EU Digital Regulation: Challenges for the Rule of Law” (Law in the Age of Transitions: Public Interests and Private Powers, Groningen, 3-4 April 2025).

16 See Art. 3(i) of the DSA.

17 See Art. 3(j) of the DSA.

18 See Art. 33 of the DSA.

19 See Art. 37 of the DSA.

20 See Art. 56(2) of the DSA.

21 N Zingales, “The DSA as a Paradigm Shift for Online Intermediaries’ Due Diligence: Hail to Meta-Regulation”, in J van Hoboken, J Pedro Quintais, N Appelman, R Fahy, I Buri and M Straub (eds), “Putting the Digital Services Act Into Practice: Enforcement, Access to Justice, and Global Implications” (2023) Amsterdam Law School Research Paper No. 13, 2023, Institute for Information Law Research Paper No. 03, 2023.

22 Martin Husovec, Principles of the Digital Services Act (online edition, Oxford Academic, 2024).

23 See Art. 3(63) of the AI Act.

24 See Art. 51 of the AI Act.

25 See Art. 3(64) of the AI Act.

26 See Art. 51 and Annex XIII of the AI Act. See also: European Commission, Annex to the Communication to the Commission Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) [2025] C(2025) 5045 final.

27 See Art. 88 of the AI Act.

28 For a more detailed overview of differences and similarities of systemic risk management regimes under the DSA and the AI Act, see: A Palumbo, “Charting Systemic Risk Management as a Regulatory Paradigm in EU Digital Legislation: Features, Challenges and Directions for Future Research” (2025) Technology and Regulation (forthcoming).

29 See the legislative definition of systemic risk provided in Article 3(65) of the AI Act, and the non-exhaustive list of systemic risks set out in Article 34 of the DSA.

30 See recitals 76–83 of the DSA, and recitals 110–115 of the AI Act.

31 Case T-486/24 R NKL Associates v Commission [2025], para 111. In this paragraph, the General Court stated that the EU legislature decided to apply the enhanced due diligence obligations for VLOPSEs before the general entry into application of the DSA “in the light of the systemic societal risks associated with those types of services, including online content providers such as the online platform XNXX, and their potential impact on a significant proportion of the European Union’s population.”

32 See Art. 63 of the DSA.

33 See Art. 35(2) of the DSA.

34 See Art. 88 of the AI Act.

35 See Art. 90 of the AI Act.

36 See Art. 68 of the AI Act.

37 See Art. 66 of the AI Act.

38 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC [2013] OJ L 176/338; Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms and amending Regulation (EU) No 648/2012 [2013] OJ L176/1.

39 Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament and the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC [2023] OJ L135/1.

40 Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC [2006] OJ L396/1.

41 Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014 and (EU) 2016/1011 [2022] OJ L333/1.

42 Directive (EU) 2024/1760 of the European Parliament and of the Council of 13 June 2024 on corporate sustainability due diligence and amending Directive (EU) 2019/1937 and Regulation (EU) 2023/2859 [2024] OJ L2024/1760.

43 Directive 2011/92/EU requires consideration of cultural heritage, which may be seen as a politically contentious and qualitative public value that is difficult to quantify. See Art. 3 of Directive 2011/92/EU of the European Parliament and of the Council of 13 December 2011 on the assessment of the effects of certain public and private projects on the environment (codification) [2012] OJ L 26/1.

44 European Commission, Commission Guidelines for providers of Very Large Online Platforms and Very Large Online Search Engines on the mitigation of systemic risks for electoral processes pursuant to Art. 35(3) of Regulation (EU) 2022/2065 [2024] OJ C/2024/3014.

45 European Commission, Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) [2025] C(2025) 5045 final.

46 The only Section of the guidelines that may be relevant for the assessment and mitigation of systemic risks is Section 2.2, which clarifies what the lifecycle of a GPAIM is and how systemic risks must be assessed and mitigated iteratively throughout that lifecycle.

47 Code of Practice for General-Purpose AI Models, available at: <https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai> (last accessed 8 September 2025).

48 See Art. 113 of the AI Act.

49 Case T-486/24 R NKL Associates v Commission [2025], para 111.

50 An overview of the reports is accessible at the following file managed by Alexander Hohlfeld: <https://docs.google.com/spreadsheets/d/12hJWpCFmHJMQQlz1qkd6OgGsMW82YcsWgJHXD7BHVps/edit?gid=0#gid=0> (last accessed 9 October 2025).

51 R Schütze, European Constitutional Law (2nd edn, Cambridge University Press, 2016) 302–4.

52 Case C-270/12 UK v Council and Parliament [2014] ECLI:EU:C:2014:18, paras 64–5.

53 Case C-425/13 Commission v Council [2015] ECLI:EU:C:2015:483, para 69; Case C-73/14 Council v Commission [2015] ECLI:EU:C:2015:663, para 61; Case C-687/15 Commission v Council [2017] ECLI:EU:C:2017:803, para 40.

54 M Chamon, The European Parliament and Delegated Legislation: An Institutional Balance Perspective (Hart Publishing, 2022) 1–11.

55 M Krajewski, Relative Authority of Judicial and Extra-Judicial Review (Hart Publishing, 2021) 41.

56 Case C-133/06 Parliament v Council [2008] ECLI:EU:C:2008:257, para 57.

57 A M Eklund, “Limits to Discretion and Automated Risk Assessments in EU Border Control: Recognising the Political in the Technical” (2024) 30(1–2) European Law Journal 103.

58 M Simoncini, Administrative Regulation beyond the Non-delegation Doctrine: A Study on EU Agencies (Hart Publishing, 2018) 14.

59 Case 9/56 Meroni v High Authority [1958] ECR 133, para 152.

60 R Caranta, “On Discretion”, in S Prechal and B van Roermund (eds.), The Coherence of EU Law (1st edn, Oxford University Press, 2008), 185.

61 Case C-270/12 UK v Council and Parliament [2014] ECLI:EU:C:2014:18.

62 P Van Cleynenbreugel, “Meroni Circumvented? Art. 114 TFEU and EU Regulatory Agencies” (2014) 21(1) Maastricht Journal of European and Comparative Law, 64–88.

63 J Mendes, “Bounded Discretion in EU Law: A Limited Judicial Paradigm in a Changing EU” (2017) 80(3) The Modern Law Review 460.

64 M Chamon, EU Agencies: Legal and Political Limits to the Transformation of the EU Administration (Oxford University Press, 2016) 176; M Chamon and N de Arriba-Sellier, “FBF: On the Justiciability of Soft Law and Broadening the Discretion of EU Agencies” (2022) 18(2) European Constitutional Law Review 301.

65 Case 9/56 Meroni v High Authority [1958] ECR 133, para 152.

66 Ibid.

67 Case C-270/12 UK v Council and Parliament [2014] ECLI:EU:C:2014:18, paras 41, 53.

68 Ibid, para 39.

69 The relevant case-law also includes judgments dating from before the Lisbon Treaty, which nonetheless address the same legal question on the essential elements of a legislative act. See: Case 25/70 Einfuhr- und Vorratsstelle für Getreide und Futtermittel v Köster and Berodt & Co [1970] ECR 1161; Case C-240/90 Germany v Commission [1992] ECLI:EU:C:1992:408.

70 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, paras 65, 76, 78.

71 M Chamon, “How the Concept of Essential Elements of a Legislative Act Continues to Elude the Court” (2013) 50(3) Common Market Law Review, 849.

72 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 76.

73 Ibid; E Tauschinsky and M den Heijer, “Where Human Rights Meet Administrative Law: Essential Elements and Limits to Delegation: European Court of Justice, Grand Chamber C-355/10: European Parliament v. Council of the European Union” (2013) 9(3) European Constitutional Law Review 513, 526.

74 Case C-44/16 Dyson Ltd v European Commission [2017] ECLI:EU:C:2017:357, para 64; Joined Cases T-339/16, T-352/16 and T-391/16 Ville de Paris, Ville de Bruxelles, Ayuntamiento de Madrid v European Commission [2018] ECLI:EU:T:2018:927, para 118.

75 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 76; Case T-781/22 Madre Querida, SL v European Commission [2025] ECLI:EU:T:2025:591, paras 112–27.

76 Opinion of the AG in Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 64.

77 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 76; Case C-363/14 Parliament v Council [2015] ECLI:EU:C:2015:579, paras 52–55; supra, note 73.

78 Ibid.

79 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 77.

80 Ibid, para 76.

81 On the meaning of essence of fundamental rights, see: K Lenaerts, “Limits on Limitations: The Essence of Fundamental Rights in the EU” (2019) 20(6) German Law Journal 779.

82 Supra, note 73; P Craig, The Lisbon Treaty: Law, Politics and Treaty Reform (Oxford University Press 2010).

83 K Lenaerts and A Verhoeven, “Towards a Legal Framework for Executive Rule-Making in the EU? The Contribution of the New Comitology Decision” (2000) 37 Common Market Law Review 645, 662; G Bellenghi and E Vos, “Rethinking the Constitutional Architecture of EU Executive Rulemaking: Treaty Change and Enhanced Democracy” (2024) 15(4) European Journal of Risk Regulation 793, 798.

84 ARTICLE 19, “ARTICLE 19 Recommendations for the Digital Services Act Trilogue” (ARTICLE 19 website, 2022) <https://www.article19.org/wp-content/uploads/2022/02/A19-recommendations-for-the-DSA-Trilogue.pdf> (last accessed 20 March 2025), 2–3; J Barata, “The Digital Services Act and Its Impact on the Right to Freedom of Expression: Special Focus on Risk Mitigation Obligations” (2021) publication on Plataforma en Defensa de la Libertad de Información (PDLI), 19–21; M C de Carvalho, “It will be What We want it to be: Sociotechnical and Contested Systemic Risk at the Core of the EU’s Regulation of Platforms’ AI Systems” (2025) 16(1) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 35; N Eder, “Making Systemic Risk Assessments Work: How the DSA Creates a Virtuous Loop to Address the Societal Harms of Content Moderation” (2024) German Law Journal 1.

85 M C de Carvalho, “It will be What We want it to be: Sociotechnical and Contested Systemic Risk at the Core of the EU’s Regulation of Platforms’ AI Systems” (2025) 16(1) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 35.

86 Case C-401/19 Republic of Poland v Parliament and Council [2022] ECLI:EU:C:2022:297, para 74.

87 Ibid, para 75; Case C-314/12 UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH [2014] ECLI:EU:C:2014:192, para 52.

88 J Black and R Baldwin, “Really Responsive Risk-based Regulation” (2010) 32(2) Law & Policy 181, 183–5; R Paul, “Harmonisation by Risk Analysis? Frontex and the Risk-based Governance of European Border Control” (2017) 39(6) Journal of European Integration 689.

89 J Black and R Baldwin, “When Risk-based Regulation Aims Low: Approaches and Challenges” (2012) 6(1) Regulation & Governance 2.

90 Supra, note 13.

91 K Kaesling and A Wolf, “Sustainability and Risk Management under the Digital Services Act: A Touchstone for the Interpretation of ‘Systemic Risks’” (2025) 74(2) GRUR International 119, 122.

92 Supra, note 13.

93 N Eder, “Making Systemic Risk Assessments Work: How the DSA Creates a Virtuous Loop to Address the Societal Harms of Content Moderation” (2024) German Law Journal 1.

94 M Loi, M Fabbri and A Ferrario, “Regulating the Undefined: Addressing Systemic Risks in the Digital Services Act (with an Appendix on the AI Act)” (2025) 38(2) Philosophy and Technology 81.

95 R Gellert, The Risk-Based Approach to Data Protection (Oxford University Press 2020), 28; supra, note 4.

96 See Section III.2 above.

97 See Section III.3 above.

98 Charter of Fundamental Rights of the European Union [2012] OJ C 364/01.

99 See recitals 9, 83, 84 and 104 of the DSA.

100 See Art. 35(1)(c) and (d), and recitals 87 and 88 of the DSA.

101 See recitals 87 and 88 of the DSA.

102 See Measure 18.2 of the Code of Conduct on Disinformation (2025) (last accessed 7 July 2025).

103 Supra, note 12.

104 M Husovec, Principles of the Digital Services Act (online edition, Oxford Academic, 2024); M Husovec, “The Digital Services Act’s Red Line: What the Commission Can and Cannot Do About Disinformation” (2024) 16(1) Journal of Media Law 47.

105 This paper assumes that, in certain cases, content qualifying as disinformation may be legal. This has been a common assumption in policy discourse in the area of platform regulation for a long time, but it is appropriate to note that certain instances of disinformation are also illegal in many EU member states. See: European Regulators Group for Audiovisual Media Services, ‘Notions of disinformation and related concepts’ (2020), ERGA report, available at: <ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf> (last accessed 15 September 2025).

This point is also discussed in: R Ó Fathaigh, D Buijs and J van Hoboken, “The Regulation of Disinformation Under the Digital Services Act” (2025) 13 Media and Communication, Art. 9615.

106 In relation to disinformation and misinformation see, among others, Sections 6.2.2.8 and 6.2.2.14 of the Meta report on systemic risk assessment and mitigation for Facebook, published in August 2024, EU DSA SRA Report 2024_Facebook_Meta (last accessed 9 September 2025).

107 B Botero Arcila, “Is it a Platform? Is it a Search Engine? It’s Chat GPT! The European Liability Regime for Large Language Models” (2023) 3(2) Journal of Free Speech Law.

108 For instance, Gemini is an LLM that has been integrated into Google’s search engine. For more information, see: <https://blog.google/products/search/generative-ai-google-search-may-2024/> (last accessed on 27 June 2025).

109 H M Whitney and R M Simpson, “Search Engines and Free Speech Coverage” in S J Brison and K Gelber (eds), Free Speech in the Digital Age (Oxford University Press 2019); J van Hoboken, Search Engine Freedom: On the Implications of the Right to Freedom of Expression for the Legal Governance of Web Search Engines (Kluwer Law International 2012).

110 Committee of Ministers, Recommendation of the Committee of Ministers to member States on the protection of human rights with regard to search engines (adopted on 4 April 2012 at the 1139th meeting of the Ministers’ Deputies) CM/Rec(2012)3.

111 G De Gregorio, “The Normative Power of Artificial Intelligence” (2023) 30(2) Indiana Journal of Global Legal Studies 55.

112 See Section III.3.

113 Case C-355/10 European Parliament v Council [2012] ECLI:EU:C:2012:516, para 76.

114 Ibid, para 77.

115 For a comprehensive description of chilling effect as a concept in the case-law of the ECtHR, see: T Baumbach, “Chilling Effect as a European Court of Human Rights’ Concept in Media Law Cases” (2018) 6(1) Bergen Journal of Criminal Law & Criminal Justice 92.

116 See Art. 35(1) of the DSA.

117 R Paul, “European Artificial Intelligence ‘Trusted Throughout the World’: Risk-based Regulation and the Fashioning of a Competitive Common AI Market” (2024) 18 Regulation & Governance 1065.

118 Supra, note 2.

119 M Senftleben, J Pedro Quintais and A Meiring, “How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization” (2023) 38 Berkeley Technology Law Journal 933.

120 Supra, note 13.

121 B Wynne, “Risk and Environment as Legitimatory Discourses of Technology: Reflexivity Inside Out?” (2002) 50(3) Current Sociology 459, 461; Supra, note 13.

122 Discourse theory scholars have studied for a long time the relationship between discourse, knowledge and power. See S Miller, “Foucault on Discourse and Power” (1990) 76 Theoria: A Journal of Social and Political Theory 115. There are also contributions on the discursive framing of risk, see: C Hardy and S Maguire, “Organizing risk: discourse, power, and ‘riskification’” (2016) 41(1) The Academy of Management Review 80; B Wynne, “Risk as Globalising ‘Democratic’ Discourse? Framing Subjects and Citizens” in M Leach, I Scoones and B Wynne (eds), Science and Citizens: Globalization and the Challenge of Engagement (Zed Books, 2005); D Shaw and J Scully, “The Foundations of Influencing Policy and Practice: How Risk Science Discourse Shaped Government Action During COVID-19” (2024) 44(12) Risk Analysis 2889.

123 H Rothstein, M Huber and G Gaskell, “A Theory of Risk Colonization: The Spiralling Regulatory Logics of Societal and Institutional Risk” (2006) 35 Economy and Society 91; T Lemke, “‘The Birth of Bio-Politics’: Michel Foucault’s Lecture at the Collège de France on Neo-Liberal Governmentality” (2001) 30 Economy and Society 190; P O’Malley, “Uncertain Subjects: Risks, Liberalism and Contract” (2000) 29 Economy and Society 460.

124 J Braithwaite, “Meta-Risk Management and Responsive Regulation for Tax System Integrity” (2003) 25(1) Law and Policy 1; J Black, “The Emergence of Risk-Based Regulation and the New Public Risk Management in the United Kingdom” (2005) Public Law 510, 510; supra, note 21; A Chander, “When the Digital Services Act Goes Global” (2023) 38(3) Berkeley Technology Law Journal 1067.

125 L Enriques and D Zetzsche, “The Risky Business of Regulating Risk Management in Listed Companies” (2013) 3 European Company and Financial Law Review 271.

126 Supra, note 12; M Fasel and S Weerts, “Between Regulation, Pressure and Collaboration: The Public–Private Entanglement in Content Moderation” (2025) Telecommunications Policy.

127 The guidelines on the scope of the provisions for GPAIMs emphasise the importance of informal collaboration and dialogue for the supervision of providers of GPAIMs by the AI Office. See paragraph 102 of European Commission, Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) [2025] C(2025) 5045 final.

128 Including exerting ultra vires pressure with so-called “jawboning” practices. An example is the letter sent by former Commissioner Thierry Breton to X owner Elon Musk about the dissemination of harmful content on X. See: Thierry Breton on X: “With great audience comes greater responsibility #DSA As there is a risk of amplification of potentially harmful content in EU in connection with events with major audience around the world, I sent this letter to @elonmusk https://t.co/P1IgxdPLzn”.

129 M D Birnhack and N Elkin-Koren, “The Invisible Handshake: The Reemergence of the State in the Digital Environment” (2003) 8(6) Virginia Journal of Law and Technology; D E Bambauer, “Against Jawboning” (2015) Minnesota Law Review 182; C T Marsden, Internet Co-Regulation: European Law, Regulatory Governance and Legitimacy in Cyberspace (Cambridge University Press, 2011); C T Marsden, T Meyer and I Brown, “Platform Values and Democratic Elections: How can the Law Regulate Digital Disinformation?” (2020) 36 Computer Law & Security Review 105373; D Keller, “Who Do You Sue? State and Platform Hybrid Power Over Online Speech” (2019) Hoover Institution, Aegis Series Paper No. 1902; P Leerssen, “Cut Out by the Middle Man: The Free Speech Implications of Social Network Blocking and Banning in The EU” (2015) 6(2) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 99.

130 M Chamon, “The Institutional Balance, an Ill-Fated Principle of EU Law?” (2015) 21(2) European Public Law 371.

131 Supra, note 13.