This article critically examines the concept of systemic risk as used in the EU AI Act in relation to General-Purpose AI Models (GPAIMs). It argues that rather than resolving uncertainty, the Act institutionalises it, transforming systemic risk into a flexible yet indeterminate legal category. Drawing on legal theory and sociological perspectives – especially systems theory – this paper shows how systemic risk functions less as a concrete threshold for intervention and more as a proxy for the epistemic uncertainty surrounding GPAIMs. The analysis identifies three interrelated consequences: the institutionalisation of regulatory and scientific uncertainty, the delegation of key decisions about the content of systemic risk to private actors, and a regulatory blind spot created by the Act's conceptual distinction between AI models and AI systems. These developments risk undermining the Act's goal of legal certainty and expose a paradox of AI governance: in attempting to mitigate unknown future risks, the law may instead reproduce regulatory uncertainty.