
Systemic contributions to global catastrophic risk

Published online by Cambridge University Press:  23 June 2025

Constantin W. Arnscheidt*
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
S. J. Beard
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Tom Hobson
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Paul Ingram
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Luke Kemp
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK; Notre Dame Institute for Advanced Study, University of Notre Dame, Notre Dame, IN, USA
Lara Mani
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Alexandru Marcoci
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Kennedy Mbeva
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Seán S. Ó hÉigeartaigh
Affiliation:
Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
Anders Sandberg
Affiliation:
Institute for Futures Studies, Stockholm, Sweden
Lalitha S. Sundaram
Affiliation:
Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
Nico Wunderling
Affiliation:
Center for Critical Computational Studies (C3S), Goethe University Frankfurt, Frankfurt am Main, Germany; Earth Resilience Science Unit, Potsdam Institute for Climate Impact Research (PIK), Member of the Leibniz Association, Potsdam, Germany; High Meadows Environmental Institute, Princeton University, Princeton, NJ, USA
Corresponding author: Constantin W. Arnscheidt; Email: ca628@cam.ac.uk

Abstract

Non-technical summary

We live in a time of significant global risk. Some research has focused on understanding systemic sources of this risk, while other research has focused on possible worst-case outcomes. In this article, we bring together these two areas of research and provide a simple conceptual framework that shows how emergent features of the global system contribute to the risk of global catastrophe.

Technical summary

Humanity faces a complex and dangerous global risk landscape, and many different terms and concepts have been used to make sense of it. One broad strand of research characterises how risk emerges within the complex global system, using concepts like systemic risk, Anthropocene risk, synchronous failure, negative social tipping points, and polycrisis. Another focuses on possible worst-case outcomes, using concepts like global catastrophic risk (GCR), existential risk, and extinction risk. Despite their clear relevance to each other, connections between these two strands remain limited. Here, we provide a simple conceptual framework that synthesises these research strands and shows how emergent properties of the global system contribute to the risk of global catastrophic outcomes. In particular, we show that much of GCR stems from the interaction of hazards and vulnerabilities that arise endogenously within the global system, and how ‘systems thinking’ and complex adaptive systems theory can help illuminate this. We also highlight some unique challenges that systemic sources of GCR pose for risk assessment and mitigation, discuss insights for policy, and outline potential paths forward.

Social media summary

The global system is generating global catastrophic risk.

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

Understanding and reducing global catastrophic risk (GCR) is vital. We define GCR as the risk (Footnote 1) of a catastrophic loss of life and well-being on a global scale, taking the death of 10% or more of the current human population (Cotton-Barratt et al., 2016; Kemp et al., 2022) as a useful, non-prescriptive anchoring point for magnitude. Subcategories of GCR include the risk of human extinction (extinction risk), the risk of global societal collapse (collapse risk), and the risk of a wider set of catastrophes judged to be of a similar magnitude to human extinction (existential risk) (Footnote 2); meanwhile, GCR is itself a subcategory of ‘global risk’ more broadly. There is a large body of work on GCR (Footnote 3), focusing for example on asteroid impacts (Baum, 2023; Mani et al., 2023), large-magnitude volcanic eruptions (Cassidy & Mani, 2022), anthropogenic climate change (Beard et al., 2021; Kemp et al., 2022), biological threats (Millett & Snyder-Beattie, 2017; Musunuri et al., 2021; Schoch-Spana et al., 2017), nuclear war (Baum & Barrett, 2018; Robock, 2010; Sagan, 1983; Scouras, 2019), and advanced artificial intelligence (AI; Bengio et al., 2025; Gruetzemacher & Whittlestone, 2022; Hendrycks et al., 2023; Russell, 2019).

While these drivers are often studied in isolation, they arise as part of a complex interconnected global risk landscape. Effectively, human activities have coalesced into a vast global system involving the worldwide exchange of goods, people, information, and ideas (Centeno et al., 2015; Ellis, 2015; Goldin & Mariathasan, 2014; Helbing, 2013), which relies on underlying connective infrastructure (physical, digital, cultural, and economic). This brings with it many benefits, but also new global hazards, vulnerabilities, and possible undesirable outcomes. Many terms and concepts have been used to describe aspects of this phenomenon, including (global) systemic risk (Centeno et al., 2015; Renn et al., 2017; Sillmann et al., 2022), compound risk (Kruczkiewicz et al., 2021), synchronous failure (Homer-Dixon et al., 2015), Anthropocene risk (Keys et al., 2019), femtorisk (Frank et al., 2014), hyper-risk (Helbing, 2013), negative social tipping points (Juhola et al., 2022; Spaiser et al., 2024), and polycrisis (Lawrence et al., 2024; Tooze, 2021). Such work typically focuses on catastrophes of a smaller scale than those considered in the study of GCR.

In this article, we bring together these two strands of work – on GCR and worst-case outcomes, and on how risk emerges within the complex global system. To sharpen our focus slightly, we ground our understanding of the latter strand of research in ‘systems thinking’ and the theory of complex adaptive systems (CASs), which much of the work cited in the previous paragraph explicitly or implicitly draws from. For convenience, we will refer to this work as ‘the work on emergent global risk’. This does not mean that we limit our view exclusively to systems thinking and CAS perspectives; rather, we use them as framing devices to help draw out general insights and cut an already challenging task (synthesising work on GCR with other work on global risk and the complex global system) down to a slightly more manageable size.

We proceed as follows. First, we introduce key ideas from systems thinking and CAS theory. Next, we define the ‘global system’, discuss why the concept is useful, and review existing work on emergent global risk. Then, we provide a simple conceptual framework for understanding what we term systemic contributions to global catastrophic risk. Its aim is not only to help us better understand GCR but also to facilitate connections between the existing work on GCR and that on emergent global risk. Beyond this framework, we also highlight some unique challenges that systemic sources of GCR pose for risk assessment and mitigation. We conclude by discussing the strengths and weaknesses of our approach, as well as potential paths forward.

2. A systemic understanding of global risk

2.1. What is ‘systems thinking’ and why does it matter for global risk?

‘Systems thinking’ recognises that a system is more than the sum of its parts (Meadows, 2008): its behaviour is emergent and cannot be predicted purely from its constituent components (Footnote 4). More specifically, CAS theory describes systems in which patterns and adaptive behaviours at higher levels emerge from localised interactions and selection at lower levels (Holland, 1995; Levin, 1998; Levin et al., 2013; Miller & Page, 2009). Examples of CASs include ecosystems (Levin, 1998), social systems (Miller & Page, 2009), economies (Arthur, 2021; Levin et al., 2013), food systems (Chapman et al., 2017), infrastructure systems (Oughton et al., 2018), social–ecological systems more broadly (Levin et al., 2013; Preiser et al., 2018), and the Earth's entire biosphere (Folke et al., 2021; Levin, 1998). CASs can behave in highly non-linear ways: large changes can have small effects, small changes can have large effects, and disruptions in one part of the system can cascade into other parts. Broadly, systems thinking allows us to make sense of such behaviours; this matters for global risk and GCR because many of the systems that humans exist within and depend on (Avin et al., 2018) are CASs (see also Section 2.2).

Our usage of ‘complex’ here is not synonymous with ‘complicated’ (Kreienkamp & Pegram, 2020; Miller & Page, 2009; Snowden & Boone, 2007; UNDRR, 2019). A system can be complicated but not complex: this is the case if it is intricate and difficult to understand, but still largely comprehensible based on its component parts and predictable relationships between them (i.e. emergent system properties do not play an important role). A useful example of such a system is a jet engine (Kreienkamp & Pegram, 2020).

Much emergent behaviour in systems can be described using the language of feedbacks. Feedbacks occur when the state of a system affects (feeds back on) the state of that same system, either directly or indirectly. Mathematically, positive feedbacks (+) are destabilising (a given change causes more of that change to occur), while negative feedbacks (−) are stabilising (the effects of a given change can be damped out). This is summarised in Figure 1, using ‘causal loop’ and ‘stability landscape’ diagrams. Both kinds of feedbacks can have desirable and undesirable effects: stabilising feedbacks can preserve desirable system states but also trap systems in undesirable states, while destabilising feedbacks can drive runaway evolution to desirable or undesirable system states. Most systems of interest contain multiple feedbacks, which can change in strength depending on external or internal factors and can lead to sudden shifts in system behaviour (e.g. when an equilibrium state changes stability).

Figure 1. Summary of destabilising (mathematically positive) and stabilising (mathematically negative) feedbacks. In causal loop diagrams (top row), an arrow with a + symbol means that a change in the first variable causes the second variable to change in the same direction (e.g. an increase in A causes an increase in B), and an arrow with a − symbol means that a change in the first variable causes the second variable to change in the opposite direction (e.g. an increase in A causes a decrease in B). In stability landscape diagrams (bottom row), the state of the system is conceived of as a ball rolling on a landscape (collapsing the high-dimensional state spaces of the real world onto a single dimension). When destabilising feedbacks dominate, we see runaway change (rolling down the hill); when stabilising feedbacks dominate, the system remains within a stable equilibrium (the valley, or ‘basin of attraction’).
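To make the feedback distinction summarised in Figure 1 concrete, the following minimal numerical sketch (our own illustration, assuming simple linear dynamics dx/dt = rx rather than any model from the literature) shows how a destabilising feedback amplifies a perturbation while a stabilising feedback damps it:

```python
# Minimal sketch (illustrative only): a single state variable x evolving under
# dx/dt = r * x. With r > 0 the feedback is destabilising (runaway change);
# with r < 0 it is stabilising (perturbations are damped back to equilibrium).
import numpy as np

def simulate(r, x0=1.0, dt=0.01, steps=1000):
    """Forward-Euler integration of dx/dt = r * x, starting from a perturbation x0."""
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        x[t + 1] = x[t] + dt * r * x[t]
    return x

runaway = simulate(r=+0.5)    # destabilising: the perturbation grows on itself
damped = simulate(r=-0.5)     # stabilising: the perturbation decays towards zero

print(f"after 10 time units: destabilising x = {runaway[-1]:.1f}, stabilising x = {damped[-1]:.4f}")
```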

Another valuable concept for understanding CAS dynamics is that of resilience. While early work characterised resilience simply as the size of the system's current basin of attraction (Holling, 1973; Scheffer et al., 2001), Walker et al. (2004) define it more qualitatively as ‘the capacity of a system to absorb disturbance and reorganise while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks’. Resilience is deeply related to adaptability (the capacity of a system to adjust its responses based on changing conditions) and transformability (the capacity to transform the stability landscape to become a new kind of system); critically, maintaining system identity at large scales under changing conditions may require radical system transformations at smaller scales (Folke et al., 2010).

This understanding of resilience as a dynamic property involving transformation and reorganisation naturally leads us to consider how such changes unfold over time. One theoretical framework that explicitly addresses these temporal dynamics is adaptive cycle theory (Gunderson & Holling, 2002). Building on examples from ecology, this highlights that many systems undergo a cycle of: exploitation, in which there is rapid expansion of agents or strategies into new areas; conservation, in which resources slowly accumulate and the system becomes more interconnected and rigid; release, in which the rigid interconnected system fails and collapses; and reorganisation, in which the system is restructured and new kinds of agents or strategies arise. In practice, systems may exhibit a ‘panarchy’ of nested adaptive cycles on a range of scales (Allen et al., 2014; Gunderson et al., 2022; Gunderson & Holling, 2002).

2.2. The global system

To see how systems thinking can inform our understanding of global risk, it is worth developing a notion of ‘the global system’. By the ‘global system’, we mean the globally interconnected system of human economic, social, political, and cultural relations, including humans themselves, material flows, and the extraction of materials from the broader Earth system. Our conceptualisation draws from both world-systems theory in the social sciences (Chase-Dunn & Grimes, 1995) and from Earth system science, where the global system has been labelled the ‘Anthroposphere’ (Steffen et al., 2020). While the global system is a subsystem of the Earth system, the boundaries are fuzzy: the two systems have substantially shaped one another (Ellis, 2015; Frankopan, 2023; Nyström et al., 2019; Williams et al., 2015). The global system is a CAS, and so are many of its subsystems.

Why is this concept useful? For our purposes, there are two main reasons. First, by identifying a single global system within which most humans are embedded and upon which most humans rely, we can recognise that this system mediates almost all GCR (Footnote 5). This is an important insight. Early work on GCR often treated risk as essentially synonymous with hazard: the risk of an asteroid impact or the risk of dangerous climate change. But this neglects the important role played by vulnerability: how will the global system respond when stressed by a hazard? Foregrounding this can drastically change our understanding of GCR.

Second, identifying the global system as a CAS informs us that it will display emergent phenomena, such as non-linear behaviour, path dependence, feedbacks, and cascading failures. We need to understand these phenomena to understand and mitigate global risk.

We will occasionally find it convenient to refer to the global system as a single entity, which, for example, ‘creates’, ‘generates’, or ‘amplifies’ various drivers of risk. This is meant neither to anthropomorphise the global system nor to suggest that all parts of the global system are equally responsible for the phenomenon being discussed. It is also not intended to remove culpability or agency from individual actors, such as people, states, or corporations. As one illustrative example, in the case of climate change, it is simultaneously true that a small number of companies conduct the fossil fuel extraction responsible for the vast majority of carbon dioxide emissions (Carbon Majors, 2024; Heede, 2014), that these companies exist within a global system in which it is highly profitable to extract and sell fossil fuels, and that the demand for the energy produced ultimately derives from the constrained choices of billions of individual human beings.

2.3. Emergent global risk: a wide array of concepts

In this section, we review some of the many concepts used to describe how risk emerges from the complex global system. A natural starting point is with ‘systemic risk’. Early definitions of the term (Kaufman & Scott, 2003; OECD, 2003) emphasised the risk of failure in an entire system, as opposed to failures in some of its components. Subsequent work has emphasised the risk of smaller disruptions being amplified, for example, due to non-linearity, interconnectedness, and cascading failure (Centeno et al., 2015; May et al., 2008; Renn et al., 2017, 2022; Sillmann et al., 2022). The concepts of ‘global systemic risk’ (Centeno et al., 2015; Renn et al., 2017) and ‘hyper-risks’ (Helbing, 2013) apply these ideas specifically to the global system. Even more specifically, the concept of ‘femtorisk’ focuses on the systemic risk due to ‘the actions and interactions of actors existing beneath the level of formal institutions, often operating outside effective governance structures’ (Frank et al., 2014).

There is a wider set of related and often overlapping concepts from risk analysis, including compound risk, interacting risk, interconnected risk, and cascading risk (Pescaroli & Alexander, 2018). Although discussion of the nuances is beyond the scope of this section, we do note that the concept of compound risk – the risk of disasters involving multiple simultaneously occurring hazards (Pescaroli & Alexander, 2018; Zscheischler et al., 2018, 2020) – has recently been applied more broadly to global systems in light of the COVID-19 pandemic (Kruczkiewicz et al., 2021). If the co-occurring hazards cause more damage together than had they occurred separately, this is precisely an instance of non-linear amplification – a key characteristic of CASs.
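As a minimal numerical illustration (our own toy example, not drawn from the cited literature), a convex damage function makes this superadditivity explicit: two hazards occurring together cause more damage than the sum of their separate impacts.

```python
# Toy example (ours): a convex damage curve makes co-occurring hazards
# superadditive, i.e. their joint impact exceeds the sum of their separate impacts.
def damage(stress: float) -> float:
    """Assumed convex damage function: harm grows with the square of total stress."""
    return stress ** 2

hazard_a, hazard_b = 2.0, 3.0
separate = damage(hazard_a) + damage(hazard_b)   # 4 + 9 = 13
combined = damage(hazard_a + hazard_b)           # 25
print(f"separate: {separate}, combined: {combined}")  # compound occurrence amplifies damage
```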

The global system also interacts with the global environment to produce risk. Human actions are by far the dominant driver of global environmental change (Ellis, 2015) and could trigger a wide range of ecological and environmental tipping points (Armstrong McKay et al., 2022; Barnosky et al., 2012; Lenton et al., 2008; Richardson et al., 2023). One useful conceptual framework here is that of ‘Anthropocene risks’ (Keys et al., 2019), which are described as originating from anthropogenic Earth system change, emerging due to the evolution of globally intertwined social–ecological systems, and involving complex cross-scale interactions.

A separate and emerging body of work applies the concept of tipping points – where a small change can have a large, abrupt, self-perpetuating, and hard-to-reverse impact – to the context of negative social change (Juhola et al., 2022; Spaiser et al., 2024) (Footnote 6). Such ‘negative social tipping points’ are another way to understand the non-linear creation of risk in the global system. While the focus thus far has been on stresses from anthropogenic climate change, this does not need to remain the case: tipping points in the global system could be triggered by a wide range of factors.

The subject of negative social tipping points leads us straightforwardly to collapse. Societal collapse (Brozović, 2023; Centeno et al., 2023; Tainter, 1988) likely involves interacting non-linear processes (e.g. tipping) within the system itself (Centeno et al., 2022; Cumming & Peterson, 2017; Homer-Dixon, 2006; Juhola et al., 2022; Lenton, 2023). There is a natural intersection between the study of collapse and the study of GCR and worst-case outcomes (Belfield, 2023): global collapse is an important form of global catastrophe, and a collapse is usually preceded by a crisis (Butzer, 2012).

The ‘synchronous failure’ framework (Homer-Dixon et al., 2015) offers a more specific causal description of how crises emerge in the modern global system. The authors first identify three important global trends: the increasing scale of human activity, increased connectivity, and reduced diversity. They then argue that these trends favour three specific ‘process archetypes’ – ‘long fuse big bang’, ‘simultaneous stresses’, and ‘ramifying cascade’ – which interact within and across systems to produce global crises. These archetypes are relevant not just for the modern global system, but have also been applied historically, for example to the Late Bronze Age Collapse (Kemp & Cline, 2022).

Finally, the concept of polycrisis captures the general idea that the world's crises are coinciding and converging into a whole worse than the sum of its parts (Lawrence et al., 2024; Morin & Kern, 1999; Tooze, 2021). Lawrence et al. (2024) offer a more specific definition: global polycrisis is the ‘causal entanglement of global crises in ways that significantly degrade humanity's prospects’. They describe this entanglement by distinguishing between stresses, triggers, and crises (Section 5.2), and considering how these interact within and across systems. In this article, we typically use ‘polycrisis’ to refer to this specific conceptual framework.

3. Systemic contributions to GCR: a conceptual framework

With these foundations in place, we now provide a conceptual framework (Footnote 7) for understanding systemic contributions to GCR. In other words, how do emergent ‘systemic’ phenomena within the global system contribute to the creation of GCR? The purpose of the framework is twofold: to provide insight into how GCR is created (although it is not a complete causal model) and to facilitate connections between existing work on GCR and on emergent global risk.

We conceptualise GCR as created by the interaction of hazards with vulnerabilities (Figure 2). Here, we define hazards as the proximal events and processes which could lead to undesirable outcomes. They can come from outside of the global system (e.g. asteroid impacts) or emerge within the global system (e.g. climate change, nuclear weapons, and advanced artificial intelligence). Critically, many of the most salient hazards in the context of GCR emerge within the global system, and this is the focus of Section 3.1.

Figure 2. Key elements of our conceptual framework for understanding systemic contributions to global catastrophic risk. Hazards, whether from outside of the global system (e.g. asteroids and volcanic eruptions) or emerging within the global system (Section 3.1; nuclear weapons are one example), can interact with vulnerabilities (Section 3.3) to produce GCR. A key component of the interaction between hazards and vulnerabilities is amplification (Section 3.2). Finally, latent risk (Section 3.4) is a risk that may be generated by present-day phenomena but only becomes active in certain future system states: this may be particularly important in the aftermath of a global catastrophe. An important point is that each of these four phenomena (hazards, vulnerability, amplification, and latent risk) is in large part emergent from the global system.

We define vulnerability as the property of a system that determines the magnitude of the catastrophe resulting from a given hazard or combination of hazards (Footnote 8). Thus, if a system is highly vulnerable to a certain hazard, exposure to that hazard will lead to a large catastrophe. Vulnerability is a key concept in the study of disaster risk (UNDRR, 2019), and its importance for understanding GCR has been highlighted previously (Baum, 2023; Liu et al., 2018; Mani et al., 2023). Different parts of the global system may be more or less vulnerable to different hazards. We use the plural ‘vulnerabilities’ (e.g. as in Figure 2) to denote elements of a system that make the system as a whole particularly vulnerable with respect to certain hazards. Importantly, there are signs that our overall vulnerability in the context of GCR has been increasing, and this is the focus of Section 3.3.

Our framework also includes amplification (Figure 2 and Section 3.2). We use amplification to describe the process of how a realised hazard or set of hazards may cause a catastrophe much larger than the direct impact itself. The potential for such (often non-linear) amplification is a key characteristic of CASs, and highlighting this concept allows for a clearer connection to the literature on emergent global risk. Amplification is deeply related to, but distinct from, vulnerability: vulnerability is a property of a system, while amplification is a process occurring in response to an instantiated hazard.

Finally, the fourth component of our framework is latent risk. Essentially, latent risk is risk that may be generated by present-day phenomena but only becomes active in certain future system states. One particularly severe instance of latent risk concerns the aftermath of global catastrophes, and this is what is focused on in Figure 2. Latent risk is discussed in more detail in Section 3.4.

We note that our framework (Figure 2) is not a complete causal model of GCR. Specifically, the statement that any of the above phenomena ‘emerges within’ the global system hides a more complex truth: they often emerge from specific parts of the global system (not all parts of the global system contribute equally to risk) as well as the actions of specific actors. While we do not shy away from these issues, there is a limit to what can be achieved within one article. Here, we view our focus on the global system (rather than its subsystems) as a necessary simplification: future work can and should elaborate much more on the details (see also Section 5.3).

3.1. Hazards

As outlined earlier, we define hazards as the events and processes that can serve as the proximal causes of undesirable outcomes. While early discussions of GCR often focused on exogenous hazards like asteroid impacts, many of the most concerning hazards facing humanity emerge from within the global system. For instance, anthropogenic climate change occurs due to humanity's consumption of fossil fuels (IPCC, 2021). Technological as well as geopolitical developments led to the creation of nuclear weapons, the maintenance of nuclear arsenals, and thus the risk of global nuclear war (Jacobsen, 2024; Waltz & Sagan, 1995). Environmental encroachment promotes the emergence of new pandemic hazards (Jones et al., 2013; Singh, 2021), and advances in biotechnology may do the same in the future (Millett & Snyder-Beattie, 2017; Musunuri et al., 2021). Despite many concerns about the societal impacts of AI, the development of ever more powerful AI models continues at full speed, in large part due to economic and geopolitical dynamics (Brandt et al., 2022; Lee, 2018).

Why are these hazards being created? One possible answer focuses on the actions of agents. For example, Kemp (2021) has argued that a small number of ‘agents of doom’ pursue power and profit at the expense of creating these hazards for the rest of humanity. Seventy-eight corporate and state fossil fuel-producing entities are responsible for more than 70% of total cumulative carbon dioxide emissions (Carbon Majors, 2024), only nine states possess nuclear weapons (Herre et al., 2024), and only a handful of companies are currently leading the ‘AI arms race’ (Footnote 9). Understanding how responsibility for hazard creation may be concentrated among a small number of actors is important not only for its own sake but also for its instrumental value: it can provide important insights about ways to lower GCR (Jones, 2023).

Nevertheless, part of the problem is more pernicious: these actors exist within systems in which it makes sense – or appears to make sense – for them to create risk. Governments justify their maintenance of nuclear arsenals based on principles of strategic rationality (Amadae, 2015). Companies extract and sell fossil fuels because they can make a large profit doing so, and individuals are constrained and incentivised to consume them. One approach to describing this behaviour uses game theory: in particular, the ‘tragedy of the commons’ highlights that it can seem individually rational to help deplete certain resources, including our collective security from global risk (Barrett, 2016; Posner, 2004). However, such rational-actor narratives have key limitations (Dietz et al., 2003; Ostrom, 1990) and can also serve as part of the problem, by themselves encouraging hazard-creating actions (Amadae, 2015).

Going a step further, the incentives for actors to create global hazards exist in part because national and international institutions have not taken effective regulatory action to prevent these incentives from existing. While this is in part because these institutions themselves have little incentive to understand and prepare for large-scale unprecedented events (Posner, 2004; Wiener, 2016), it is also due to a number of other issues: some are discussed below, and others in Section 4. In any case, while actors remain free to create global hazards for power and profit, some will choose to do so.

Hazard-creating systems can also entrench themselves in ways that make change difficult. For example, in the case of climate change, fossil fuel companies waged a sophisticated, decades-long disinformation campaign to mislead the public (Oreskes & Conway, 2011; Supran et al., 2023). They are also supported by a range of governmental subsidies and tax incentives, which actively make decarbonisation less economically feasible (Seto et al., 2016). At a systems level, there are effectively stabilising feedbacks in play: the fossil fuel–industrial complex is deeply resistant to change. Understanding these kinds of mechanisms is vital for hazard reduction.

Ultimately, these patterns of hazard creation reflect a fundamental fact about CASs. Because selection (e.g. biological, cultural, or economic) occurs at lower levels (Levin, 1998; Levin et al., 2013), the behaviours which are selected for are not necessarily beneficial to the system as a whole. This has also been described using the concept of ‘evolutionary traps’ (Søgaard Jørgensen et al., 2024). In this case, some of the selected behaviours create global hazards.

3.2. Amplification

The global system can also amplify hazards. Due to the process of amplification, a realised hazard or set of hazards may cause a catastrophe much larger than the direct impact itself. The March 2021 blocking of the Suez Canal by the container ship Ever Given, which cost billions of US dollars in lost trade (Russon, 2021), was a clear demonstration of how small hazards can be amplified within the global system. More generally, the possibility of non-linear (i.e. disproportionate) amplification of risk by the global system or its many subsystems has been the focus of much of the work reviewed in Section 2.3 (Centeno et al., 2015; Frank et al., 2014; Helbing, 2013; Homer-Dixon et al., 2015; Lawrence et al., 2024; Renn et al., 2017, 2022; Sillmann et al., 2022; Spaiser et al., 2024).

For a conceptual understanding of amplification in the context of GCR, we again consider the simple metaphor of a ball in a stability landscape (Figure 3). We conceive of the global system as existing in a state of dynamic quasi-equilibrium: it is constantly evolving, yet its basic function and structure persist, and minor shocks can be recovered from. Yet other possible system states exist, some of which would constitute global catastrophic outcomes. Hazards (Section 3.1) can push the system out of its current basin of attraction towards one of these (normatively) worse states, whereupon the effect of the hazard is amplified by feedbacks internal to the system. Vulnerability enters this picture in multiple ways: how shallow is the original basin of attraction (how weak are the initial stabilising feedbacks), how strong are the feedbacks driving the system to the new outcome, and how severe (i.e. normatively bad) is the new outcome? The concept of resilience also relates to the depth of the original basin of attraction, but in the opposite direction: here, a decrease in resilience is an increase in vulnerability.

Figure 3. Amplification in the context of global catastrophic risk. Hazards (modulated by exposure) threaten the system's persistence in its current basin of attraction and can set in motion runaway evolution towards global catastrophic outcomes (amplification). The broad notion of vulnerability relates to multiple things here: how deep are the two basins of attraction, and how bad is the global catastrophic outcome? We emphasise that this picture is vastly oversimplified (see Section 3.2), but it captures important elements of the problem.

This picture is of course an oversimplification. There are many possible kinds of global catastrophic outcomes, many possible pathways to them, and the degree of amplification will vary based on the specific case. Figure 3 depicts amplification as occurring via a tipping point (Footnote 10), but this does not need to be the case: amplification can also occur without tipping points. Amplification can also occur when multiple co-occurring hazards cause more damage together than they would have separately, or when one hazard triggers another hazard. The key point is that emergent behaviour in CASs can lead to an amplification of hazards. Figure 3 highlights this and also allows us to make important conceptual connections.
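A toy model (our own sketch, using a generic double-well stability landscape rather than any empirically calibrated system) illustrates the qualitative behaviour depicted in Figure 3: small shocks are damped by stabilising feedbacks, while a sufficiently large shock is amplified by destabilising feedbacks into a transition to the other basin.

```python
# Toy model (ours): state x in a double-well stability landscape
# V(x) = x**4 / 4 - x**2 / 2, so dx/dt = -dV/dx = x - x**3.
# The left well (x near -1) stands in for the current quasi-equilibrium, the
# right well (x near +1) for a global catastrophic outcome; x = 0 is the tipping point.

def final_state(shock, x0=-1.0, dt=0.01, steps=2000, shock_step=500):
    """Integrate dx/dt = x - x**3, applying a one-off displacement (the 'hazard')."""
    x = x0
    for t in range(steps):
        if t == shock_step:
            x += shock            # realised hazard displaces the system state
        x += dt * (x - x ** 3)    # internal feedbacks then take over
    return x

print(final_state(shock=0.5))  # small shock: stabilising feedbacks restore x to about -1
print(final_state(shock=1.5))  # larger shock crosses x = 0: amplification carries the
                               # system into the other basin (x ends up near +1)
```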

Amplification would play an important role in a wide range of global catastrophe scenarios. The impacts of natural hazards like asteroid impacts and volcanic eruptions would be amplified through their effects on global critical infrastructure (Baum, 2023; Mani et al., 2023; Moersdorf et al., 2023); indeed, the clustering of such infrastructure near centres of volcanic activity vastly amplifies the risk even from lower-magnitude volcanic eruptions (Mani et al., 2021). As the COVID-19 pandemic has illustrated, the impact of novel infectious diseases can be vastly amplified by global transit networks (Baker et al., 2022) as well as by follow-on economic and social disruption. The worst outcomes from climate change will likely not arise directly due to increased temperatures, but rather indirectly through phenomena like conflict, famine, and mass displacement (Beard et al., 2021; Kemp et al., 2022; Richards et al., 2021). The effects of nuclear war could be amplified by subsequent global cooling, leading to large-scale global starvation (Xia et al., 2022). Developments in AI capabilities could be amplified towards catastrophic outcomes well before the emergence of artificial general intelligence, for example, via societal and economic destabilisation (Kasirzadeh, 2025; Kulveit et al., 2025) and interactions with biological and nuclear risk (Bengio et al., 2025; EBRC, 2023; Hendrycks et al., 2023; Maas et al., 2023).

The work on emergent global risk (Section 2.3) helps us understand further details of how the global system can amplify hazards. Amplification on networks can be understood in terms of contagion and cascading failure (Helbing, 2013; Krönke et al., 2020; Newman, 2018) and has played a key role in conceptualisations of global systemic risk (Centeno et al., 2015). Many of the amplifying effects alluded to in the above paragraph result from non-linear social dynamics (Miller & Page, 2009; Schelling, 1978) and could be understood in terms of social tipping points with negative outcomes (Juhola et al., 2022; Spaiser et al., 2024). We briefly note that such amplifying social dynamics can include human responses to hazards: as one example, trade restrictions in response to global food price shocks usually increase prices further (Alexander et al., 2023; Clapp & Moseley, 2020). In practice, amplification would involve a complicated web of non-linear change within systems as well as interactions between systems. Approaches such as the stress-trigger-crisis model of Lawrence et al. (2024) or the earlier causal archetypes of the ‘synchronous failure’ model (Homer-Dixon et al., 2015) could prove very helpful in understanding this (Section 5.2).
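The following sketch (our own illustrative threshold-cascade model, not a model taken from the works cited above) shows how a single localised failure on a random network can either stay contained or ramify system-wide, depending on how fragile individual nodes are:

```python
# Illustrative threshold-cascade sketch (ours): nodes on a random network fail
# once the fraction of their failed neighbours reaches a threshold, so a single
# initial failure can stay local or ramify system-wide.
import random

def cascade_size(n=200, mean_degree=4, threshold=0.25, seed=1):
    """Return the fraction of nodes that end up failed after seeding one failure."""
    rng = random.Random(seed)
    neighbours = {i: set() for i in range(n)}
    p = mean_degree / (n - 1)
    for i in range(n):                      # build an Erdos-Renyi-style random graph
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbours[i].add(j)
                neighbours[j].add(i)
    failed = {0}                            # a single initial failure (the 'hazard')
    changed = True
    while changed:                          # propagate until no further node tips over
        changed = False
        for node in range(n):
            if node in failed or not neighbours[node]:
                continue
            if len(neighbours[node] & failed) / len(neighbours[node]) >= threshold:
                failed.add(node)
                changed = True
    return len(failed) / n

# Fragile nodes (low threshold) tend to produce large cascades on the same graph;
# more robust nodes (higher threshold) tend to contain the same initial failure.
print(cascade_size(threshold=0.25), cascade_size(threshold=0.6))
```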

One particularly severe outcome of amplification in the context of GCR could be global societal collapse. While there are many ways to define the latter, one simple, forward-looking option in our case is a rapid development of the global system towards a state where it is no longer able to provide (as it currently does) for the material subsistence of most humans (Footnote 11). It is unclear whether such a collapse could be recovered from (Baum et al., 2019; Belfield, 2023). There has been much work on the subject of collapse, most notably in the case of past societies (Brozović, 2023; Centeno et al., 2023; Tainter, 1988) but also in ecology (Cumming & Peterson, 2017) and in complex evolutionary systems more generally (Arnscheidt & Rothman, 2022). While a wide range of possible mechanisms has been identified, one key consensus matters for our purposes: collapse may be set in motion by a particular hazard, but ultimately plays out due to feedbacks and mechanisms internal to the system (i.e. vulnerability and amplification).

3.3. Vulnerability

In the context of GCR, vulnerability is also emergent from within the global system. For instance, a key point of the literature on emergent global risk is that there is greater potential for amplification in the global system than there used to be (Centeno et al., 2015; Goldin & Mariathasan, 2014; Helbing, 2013; Homer-Dixon et al., 2015; Lawrence et al., 2024). This increased potential for amplification is one form of increased vulnerability. The severity of possible outcomes may also be worse now than in the past; while the emergence of new and dangerous technologies (i.e. hazards) has played a role in this (Ord, 2020; Rees, 2004), so too has the fragility and potential irreplaceability of global critical infrastructure (Manheim, 2020). Here, we frame our analysis around three key sources of emerging vulnerability: increased global interconnectedness, decreased global diversity (in a variety of domains, particularly in terms of possible responses to disruptions), and humanity’s reliance on advanced technology as well as complex sociotechnical systems.

The trend of increasing global interconnectedness is readily apparent. For example, global food trade flows have increased substantially in the last few decades (D'Odorico et al., 2018; Puma et al., 2015), and the yearly number of air traffic passengers doubled from 2 to 4 billion between 2000 and 2019 (facilitating the potential spread of pandemics; Baker et al., 2022). Increased interconnectedness in the modern global system is typically identified as a major driver of amplification and global systemic risk (Centeno et al., 2015; Goldin & Mariathasan, 2014; Helbing, 2013; Homer-Dixon et al., 2015; Lawrence et al., 2024; Sillmann et al., 2022). Interconnectedness decreases a system's susceptibility to smaller shocks, by allowing the flow of resources to make up for localised shortfalls, but increases its susceptibility to larger disruptions, by allowing failures to cascade (Foti et al., 2013; Helbing, 2013; Scheffer et al., 2012; Young et al., 2006). An illustrative thought experiment is the following (Siegenfeld & Bar-Yam, 2020): imagine you have 100 ladders leaning up against a wall, and then you tie them all together. Each individual ladder is much less likely to fall, but if they do fall they will all fall at once.
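A back-of-the-envelope simulation (our own rendering of the thought experiment, with assumed failure probabilities) makes the trade-off explicit: coupling exchanges frequent small losses for rare but total ones.

```python
# Back-of-the-envelope simulation (ours, with assumed probabilities) of the
# ladder thought experiment: independent ladders fall often but one at a time;
# tying them together makes falls rarer but total when they do occur.
import random

random.seed(0)
N, TRIALS = 100, 20_000
P_INDEPENDENT = 0.02   # assumed chance a free-standing ladder falls in one 'storm'
P_COUPLED = 0.002      # assumed (smaller) chance the whole tied bundle falls

ind_losses, coupled_losses = [], []
for _ in range(TRIALS):
    ind_losses.append(sum(random.random() < P_INDEPENDENT for _ in range(N)))
    coupled_losses.append(N if random.random() < P_COUPLED else 0)

print("independent: mean ladders lost per storm:", sum(ind_losses) / TRIALS)
print("independent: storms losing all ladders:  ", sum(x == N for x in ind_losses))
print("coupled:     mean ladders lost per storm:", sum(coupled_losses) / TRIALS)
print("coupled:     storms losing all ladders:  ", sum(x == N for x in coupled_losses))
# Frequent small losses are traded for rare but catastrophic total losses.
```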

A second important trend affecting vulnerability is a loss of global diversity. This has been occurring in a wide range of contexts, from language to institutions to biology (Williams et al., 2015; Young et al., 2006). One specific instructive example is in the global food system: food production is increasingly reliant on a small number of staple grain species, and is dominated by a small number of companies and countries (Clapp, 2023; Nyström et al., 2019). This allows for greatly increased short-term productivity, but makes us more vulnerable should conditions suddenly change: for example, what happens if one of these staple grain species or key global suppliers fails for some reason? More generally, the loss of diversity is an issue because diversity is part of how CASs retain resilience (Folke et al., 2004; Levin, 1998; Levin et al., 2013). A particularly useful framing is that of response diversity: maintaining a variety of potential response behaviours provides complex systems – including the global system – with the ‘raw material’ for adaptive capacity after disruptions (Walker et al., 2023).
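A simple toy calculation (ours, assuming independent crop failures with an arbitrary probability) illustrates why diversity buffers against worst-case years: a monoculture can lose everything at once, while a diversified portfolio very rarely does.

```python
# Toy calculation (ours, assuming independent failures with probability P_FAIL):
# a single staple can fail completely in a bad year, whereas a diversified
# portfolio of crops rarely loses most of its output at once.
import random

random.seed(0)
P_FAIL, YEARS, N_CROPS = 0.05, 100_000, 5

worst_mono, worst_diverse = 1.0, 1.0
for _ in range(YEARS):
    mono = 0.0 if random.random() < P_FAIL else 1.0
    diverse = sum(random.random() >= P_FAIL for _ in range(N_CROPS)) / N_CROPS
    worst_mono = min(worst_mono, mono)
    worst_diverse = min(worst_diverse, diverse)

print("worst year, monoculture output: ", worst_mono)    # total failure does occur
print("worst year, diversified output: ", worst_diverse) # rarely loses most of its output
```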

A third key trend is humanity's increasing reliance on advanced technology – and indeed, on complex sociotechnical systems (composed of humans and their interactions with technology) that no individual human fully understands. Returning to the food system example, developments in industrial agriculture (breeding new high-yield crop strains, large-scale fertiliser production, and machinery) vastly increased global yields throughout the 20th century (Evenson & Gollin, 2003; Smil, 2004); yet, most humans now depend on these technologies for their survival (Moersdorf et al., 2023; Smil, 2022). Greater integration of AI into agricultural systems (Galaz et al., 2021; Tzachor et al., 2022) will amplify this dependence. Technological dependence can interplay with global interconnectedness: as a smaller-scale case study, we can consider the recent shortage of new cars in the United States due to semiconductor shortages on the other side of the globe (Dziczek, 2022).

In practice, we depend not just on the advanced technologies themselves but also on the increasingly complex sociotechnical systems within which the technologies are manufactured, distributed, and used. Sociotechnical complexity, despite its other benefits, sets us up for hard-to-prevent cascading failures (Perrow, 1999). More critically, increases in complexity are often irreversible, as important infrastructure and knowledge pertaining to older approaches are lost (Manheim, 2020) – increasing the severity of the worst-case outcomes.

Of course, these three trends have brought substantial benefits. Beyond the other benefits of globalisation, increased interconnectedness reduces the risk of smaller disruptions. Reduced diversity and advanced technology both allow for increased productivity and efficiency under a specific set of circumstances and thus plausibly also help the global system buffer against certain smaller shocks. Yet, despite these benefits, we suggest that each of these three trends increases the likelihood and potential severity of global catastrophic outcomes. Interconnectedness allows failure to spread much more quickly to larger scales, and a lack of diversity means the global system will struggle to adapt to certain unexpected disruptions. The dependence on technology means that scenarios involving some loss or failure of this technology lead to much more catastrophic outcomes than they otherwise would.

If our vulnerability in the context of GCR has indeed been increasing, why is this? At one level of explanation, we can highlight economic incentives to prioritise efficiency (short-term reliable productivity) over resilience. This is particularly apparent in ecosystem management, where there is a long history of humans attempting to ‘optimise’ an ecosystem (e.g. for productivity) and later finding, often at great cost, that key elements of resilience were lost in the process (Holling & Meffe, 1996; Scott, 1998). For much the same reasons that lead to insufficient governance of hazard-creating actors (Section 3.1), there is insufficient governance of vulnerability-creating actors. As long as these actors (individuals, companies, or states) can obtain a short-term gain from actions that increase global vulnerability (prioritising efficiency over resilience), they will do so.

Yet, things are also more complicated. Many of the key factors driving global vulnerability – including the three trends outlined earlier – are ultimately emerging in a complex, decentralised manner. As one additional example, the fact that our complex global infrastructure can develop highly connected hubs that propagate disruptions (e.g. Mani et al., 2021) is due to fundamental aspects of bottom-up network development, such as preferential attachment (Newman, 2018), which are very difficult to avoid. Nevertheless, governments and global institutions can proactively invest in resilience, for example, by investing in and promoting response diversity (Walker et al., 2023), actively maintaining backups for key critical infrastructure systems, carefully modularising those systems (see Tzachor et al. (2021) as one example), and so on.
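To make the hub-formation mechanism concrete, the following minimal sketch (our own illustration, not drawn from the works cited above) compares a network grown by preferential attachment with a uniformly random network of the same size; the parameters are arbitrary and the networkx library is assumed to be available.

```python
# Minimal illustrative sketch (not from the cited works): networks grown by
# preferential attachment develop hubs; uniformly random networks do not.
import networkx as nx

n, m = 1000, 2  # 1000 nodes; each new node attaches to 2 existing nodes

# Barabasi-Albert growth: new nodes preferentially attach to high-degree nodes
ba = nx.barabasi_albert_graph(n, m, seed=0)
# Comparison: a random graph with the same number of nodes and edges
er = nx.gnm_random_graph(n, ba.number_of_edges(), seed=0)

for name, g in [("preferential attachment", ba), ("uniform random", er)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: max degree = {max(degrees)}, mean degree = {sum(degrees) / n:.1f}")
```

In typical runs, the preferential-attachment network contains a handful of nodes whose degree far exceeds the average – precisely the kind of hub through which disruptions can propagate.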

Ultimately, much like the emergent generation of hazards, the generation of vulnerability is a deep consequence of the fact that selection in CASs occurs at lower levels (Levin, 1998; Levin et al., 2013) and is thus not necessarily beneficial to the system as a whole. The generation of vulnerability in the global system can also be related to the growth phase of the adaptive cycle (Homer-Dixon, 2006), which involves precisely this combination of increasing connectivity and decreasing diversity.

3.4. Latent risk

Another, perhaps more easily overlooked, systemic contribution to GCR involves latent risk. Latent risk refers to risk that is dormant under one set of conditions but becomes active under another (Kemp, 2021; Kemp et al., 2022). One illustrative example of latent risk on a global scale is the following (Kemp, 2021): while stratospheric aerosol injection (SAI) could cool the planet and potentially reduce global warming, it also introduces the risk of ‘termination shock’, in which the planet warms very quickly if aerosol injection were to suddenly stop (Parker & Irvine, 2018). Here, the risk is created by the introduction of SAI, but remains latent at first – it is only activated if SAI suddenly stops.

Latent risk can also be thought of as reflecting path dependence within the global system. Path dependence means that the evolution of the system depends not just on its present state but also on its history – in other words, the path taken to get there – and is a fundamental property of CASs (Levin, 1998). The idea of latent risk highlights that the global system may evolve along different ‘paths’ in the future, that on some of these paths humanity is exposed to a much greater risk of catastrophic outcomes than on others, and, critically, that events and processes taking place in the present can affect the magnitude of the risk faced in these future system states. While one could try to conceptualise this behaviour purely in terms of the interaction of hazards and vulnerabilities over time, we find the concept of latent risk a particularly powerful one and thus include it as the fourth component of our conceptual framework.
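As a purely illustrative sketch of path dependence (our own example, not a model of the global system), consider a Pólya urn: runs that start from identical conditions are driven by early chance events into very different long-run states.

```python
# Minimal illustrative sketch: path dependence in a Polya urn. Early chance
# events are amplified and lock in very different long-run states -- the
# system's future depends on the path taken, not just on where it started.
import random

def polya_urn(steps, seed):
    random.seed(seed)
    red, blue = 1, 1  # identical starting conditions
    for _ in range(steps):
        # draw a ball at random; add another ball of the same colour
        if random.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Same initial state, different histories, very different outcomes
for seed in range(5):
    print(f"run {seed}: long-run share of red = {polya_urn(100_000, seed):.2f}")
```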

The global system is actively generating latent risk. For example, beyond its immediate impacts, climate change could hamper humanity's ability to recover from other catastrophes (Kemp et al., 2022). More generally, if any of the global critical systems on which humanity depends were to fail (Avin et al., 2018), this might trap us in a state where that system could not be regenerated, and would also decrease our species' resilience to further catastrophes – potentially up to and including human extinction. If increasing sociotechnical complexity leads to the loss of simpler alternatives (Manheim, 2020; see also Section 3.3), this present-day trend increases the latent risk in such scenarios. To illustrate these issues while maintaining a focus on the worst-case outcomes, we now consider how systemic risk would manifest after global societal collapse (noting that any reasoning about post-collapse worlds is necessarily speculative).

After a collapse, emergent phenomena might trap the system in the collapsed state. Various theories of human civilisational development – i.e. the development of the global system – emphasise the role of amplifying feedbacks. For example, the industrial revolution may have been substantially driven by feedback cycles that rapidly increased human access to energy (Lenton & Scheffer, 2024). However, if the global system were to collapse and advanced technology were to be lost, it is not clear that this pattern could be repeated, because most easily accessible fossil fuels would have been used up (Baum et al., 2019; Belfield, 2023; see Footnote 12). If amplifying feedbacks initially propelled the global system to its present state, the lack of such feedbacks – and thus, a dominance of stabilising feedbacks – could prevent re-industrialisation after a collapse.
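The role of the amplifying feedback can be caricatured in a toy model (our own, and deliberately simplistic): energy access grows in proportion to itself only while an easily accessible resource stock remains, so a system that has already burned through its cheap stock cannot repeat the original takeoff.

```python
# Toy model (ours, deliberately simplistic): an amplifying feedback in which
# energy access E grows in proportion to itself, but only by drawing down an
# easily accessible resource stock R. Once R is gone, the takeoff cannot repeat.
def run(E0, R0, steps=200, r=0.05):
    E, R = E0, R0
    for _ in range(steps):
        growth = min(r * E, R)  # the feedback operates only while cheap resources last
        E += growth
        R -= growth
    return E

print("first takeoff (abundant cheap energy):", round(run(E0=1.0, R0=100.0), 1))
print("attempted repeat (cheap energy spent):", round(run(E0=1.0, R0=1.0), 1))
```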

After a collapse, emergent phenomena could also substantially increase the risk of human extinction. Without industrial methods of food production, the global population would fall to a substantially lower level and may also become geographically disconnected. Surviving human populations could then face non-linear ecological dynamics akin to those faced by non-human species (May, 1977; Scheffer et al., 2001), including runaway extinction if population numbers fall below a minimum size (see Footnote 13). Crucially, minimum viable population sizes are often set by the influence of population size on the ability to cooperate effectively (Stephens & Sutherland, 1999) and may thus be much higher than those predicted purely on the basis of genetics (Baum et al., 2019). While complete human extinction remains a high bar, one key point is worth emphasising: because of latent risk and emergent phenomena, a given catastrophe does not need to make Earth completely uninhabitable to ultimately lead to human extinction.
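The non-linear population dynamics invoked above can be illustrated with a standard model featuring a strong Allee effect, in which growth turns negative below a minimum viable population; the sketch below uses purely illustrative parameters and is not calibrated to any real population.

```python
# Minimal illustrative sketch: population dynamics with a strong Allee effect.
# Below a minimum viable population A, growth turns negative and the population
# collapses to extinction; above A it recovers towards the carrying capacity K.
#   dN/dt = r * N * (N/A - 1) * (1 - N/K)
def simulate(N0, r=0.1, A=500.0, K=10_000.0, dt=0.1, steps=2000):
    N = N0
    for _ in range(steps):
        N += dt * r * N * (N / A - 1.0) * (1.0 - N / K)
        if N < 1.0:
            return 0.0  # effectively extinct
    return N

for N0 in (400, 600):
    print(f"initial population {N0}: long-run population ~ {simulate(N0):.0f}")
# With these illustrative parameters, starting below the threshold (400 < A)
# leads to extinction, while starting just above it (600 > A) leads to recovery.
```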

4. Challenges for assessing and mitigating systemic contributions to GCR

Assessing and mitigating GCR is challenging for a number of reasons. Some issues are endemic to risk analysis as a whole: for example, there is a large literature on individual human perceptions of risk, and their socially mediated amplification and/or attenuation (Kasperson et al., 1988, 2022; Slovic, 1987). Others are more specific to GCR: for example, future unprecedented high-impact scenarios are fundamentally difficult to study (Beard et al., 2020; Currie, 2019; Yudkowsky, 2008) and to institutionally prepare for (Posner, 2004; Wiener, 2016). Reviewing all of these issues is beyond the scope of this article. Instead, in this section, we briefly highlight some issues specifically related to systemic sources of GCR.

First, emergent complexity makes it impossible to understand the full state of the global system (i.e. all of its interconnections, structures, and dependencies); to the extent that GCR emerges from this complexity, GCR itself becomes harder to understand. Again, ‘complex’ should be distinguished from ‘complicated’: systems with only the latter property are ultimately fully knowable and manageable in a top-down manner, while those with the former are not (Kreienkamp & Pegram, 2020). When the impossibility of knowing the full system state is combined with the possibility of sudden non-linear disruption, we further find that abrupt surprises are to be expected (Duit & Galaz, 2008; Siegenfeld & Bar-Yam, 2020; see also Taleb, 2007). With respect to GCR, this puts us fundamentally into the realm of ‘deep uncertainty’ (W. E. Walker et al., 2013), where we know (or can agree on) neither the full set of possible outcomes nor their likelihoods. Complexity and deep uncertainty fundamentally challenge existing risk assessment and governance paradigms (Currie, 2019; Duit & Galaz, 2008; Kreienkamp & Pegram, 2020; Schweizer, 2021; Schweizer & Juhola, 2024).
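One way to see why surprises should be expected is to note that heavy-tailed severity distributions are common in complex systems; the following illustrative sketch (our own, with an arbitrary tail exponent) shows how little a finite historical record reveals about the worst cases.

```python
# Illustrative sketch (ours): with heavy-tailed event severities, a finite
# historical record says little about the worst case a longer horizon holds.
import random

random.seed(1)
alpha = 1.5  # tail exponent (assumed for illustration); smaller = heavier tail

history = [random.paretovariate(alpha) for _ in range(1_000)]       # 'observed' record
long_run = [random.paretovariate(alpha) for _ in range(1_000_000)]  # much longer horizon

print("worst severity in a 1,000-event record:   ", round(max(history)))
print("worst severity over a 1,000,000-event run:", round(max(long_run)))
# The long-run maximum is typically orders of magnitude larger than anything
# in the historical record -- the sample badly understates the tail.
```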

The complex structure of the global system also makes it difficult to mitigate GCR. For example, the relatively decentralised nature of the international system leaves us with profound governance gaps regarding both GCR and global systemic risk (Blake & Gilman, 2024; Goldin & Vogel, 2010; Rhodes & Kemp, 2024), coordination problems, and a lack of identifiable risk owners. Top-down control is not necessarily better: existing top-down paradigms and institutions also struggle to govern systemic contributions to GCR (Kreienkamp & Pegram, 2020; Sundaram, 2023). Any system created to help govern the complexity becomes a part of the complexity (Fisher & Sandberg, 2022), with a non-zero chance of making things worse. As noted previously, beyond our own individual biases, our governing institutions are structured such that there are strong incentives to undervalue GCR mitigation, for example, due to short political time horizons (Posner, 2004; Wiener, 2016).

Developing comprehensive solutions to the above issues is a huge challenge; nevertheless, there are some brief points we can make. First, we need better methods for assessing and forecasting under deep uncertainty: structured (expert) elicitation methods and collective intelligence may be useful here (Cremer & Whittlestone, 2021; Marcoci et al., 2025; Yang & Sandberg, 2023; Zhou et al., 2024). Second, and perhaps more fundamentally, we need to recognise the weaknesses of the ‘legacy toolkit’ of top-down planned management for governing complexity, and that the traditional goals of certainty and control are not attainable. Instead, governance should be more dynamic, flexible, and adaptive: others have provided a number of recommendations for achieving this (Duit & Galaz, 2008; Fisher & Sandberg, 2022; Kreienkamp & Pegram, 2020). Filling the governance gaps (Blake & Gilman, 2024; Goldin & Vogel, 2010; Rhodes & Kemp, 2024) is another clear priority.
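As a concrete (and deliberately simplified) illustration of the elicitation point, probability judgements gathered from multiple experts must be pooled somehow; the sketch below shows two common aggregators, neither of which is the specific protocol used in the works cited above, and the probabilities are hypothetical.

```python
# Illustrative sketch (ours): two simple ways of pooling elicited probabilities.
import math

def linear_pool(probs):
    # simple arithmetic mean of the elicited probabilities
    return sum(probs) / len(probs)

def geometric_mean_of_odds(probs):
    # pool on the odds scale: geometric mean of p/(1-p), converted back to a probability
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    mean_odds = math.exp(sum(log_odds) / len(log_odds))
    return mean_odds / (1.0 + mean_odds)

# Hypothetical elicited probabilities for some catastrophic scenario
expert_probs = [0.01, 0.05, 0.10, 0.30]
print("linear pool:           ", round(linear_pool(expert_probs), 3))
print("geometric mean of odds:", round(geometric_mean_of_odds(expert_probs), 3))
```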

5. Discussion

5.1. Beyond hazard-focused frameworks for understanding GCR

The ideas presented in this article can help us better understand GCR in a number of ways. Perhaps most fundamentally, building on earlier work (Liu et al., 2018), we have made clear that GCR is composed of much more than hazards. Much GCR literature has tended to implicitly conflate hazard and risk: this is typically apparent in any list of ‘global catastrophic risks’ or ‘existential risks’ (Bostrom & Cirkovic, 2008; Cotton-Barratt et al., 2016; Ord, 2020). While focusing on hazards was useful in early GCR work, this critically neglects amplification, vulnerability, and so on. A useful (though non-absolute) heuristic may be to say that we face global catastrophic risk – not risks – and that the various things we are concerned about are those which contribute to this overall risk.

Building on these ideas, our framework (Figure 2) provides one way to think about systemic contributors to risk beyond hazards. There are other ways to do so, for example, in terms of hazard, vulnerability, and exposure (Liu et al., 2018); in terms of the critical systems affected, spread mechanisms, and prevention and mitigation failures (Avin et al., 2018); and in terms of prevention, response, and resilience opportunities (Cotton-Barratt et al., 2020). All of these frameworks complement each other. Advantages of our framework include the focus on amplification, which allows for a clearer connection to the literature on emergent global risk (Section 2.3; see also Section 5.2), the focus on the origins of global vulnerability, the fundamental grounding in CAS theory, and the inclusion of latent risk.

Our framework can be used to structure thinking about the sources of GCR, and potentially to categorise and generate specific interventions for GCR mitigation. GCR can be reduced by reducing the probability or magnitude of hazards (as noted by much other work), reducing the potential for amplification, avoiding the emergence of vulnerability in the first place, and reducing latent risk. More indirectly (Section 4), GCR can also be reduced by developing better methods and institutions for GCR assessment and governance under complexity.

5.2. The emergence of GCR from the global system

A second contribution of this article is to thoroughly connect the literature on GCR with the literature on emergent global risk (Section 2.3). Our conceptual framework highlights the synergies between these two fields, as well as what they can learn from each other.

GCR research can learn from research on emergent global risk that GCR is much more endogenous to the global system than is often assumed, and can acquire useful insights about how this works. Specifically, it can draw on the large volume of work on the amplification of hazards, via concepts like systemic risk (Centeno et al., 2015; Renn et al., 2017, 2022; Sillmann et al., 2022), hyper-risk (Helbing, 2013), femtorisk (Frank et al., 2014), synchronous failure (Homer-Dixon et al., 2015), polycrisis (Lawrence et al., 2024), and negative social tipping points (Juhola et al., 2022; Spaiser et al., 2024). GCR research can also learn from research on the deep trends in global vulnerability (Homer-Dixon et al., 2015; Lawrence et al., 2024; Nyström et al., 2019; Walker et al., 2023).

Going in the other direction, scholars of emergent global risk could take inspiration from GCR research to engage more deeply with worst-case outcomes. The study of the amplification of hazards, as well as of deep trends in global vulnerability, could be straightforwardly extended to a greater scale: what about scenarios such as global collapse? Furthermore, while research on emergent global risk has had much to say about amplification and vulnerability, it has had little to say about hazards other than climate change (e.g. nuclear weapons, pandemics, and advanced AI): one notable recent exception is the work by Søgaard Jørgensen et al. (2024). There is much unexplored territory here, and our analysis helps point at some promising initial directions.

5.3. Limitations and future directions

We acknowledge that our conceptual framework has some major limitations. The first stems from our focus on the global system as a whole. This was a necessary simplification for the purposes of this paper; however, it potentially obscures the fact that not all parts of the global system contribute equally to the creation of GCR. Indeed, the very focus on ‘systems’ can also obscure the important role played by human agency. Much more work could be done on understanding both of these issues and on integrating them within the broader perspective laid out in this article.

A second and more subtle limitation lies in the very framing of risk as instantiated by some hazard or set of hazards that occurs with some probability. Even with amplification, vulnerability, and other emergent phenomena included, this framing can still struggle to capture some of the complex causal webs that may be responsible for GCR. This is the case, for example, if hazards interact, trigger each other, and/or are slow-moving (Kuhlemann, 2018; Liu et al., 2018). We need better tools for capturing and understanding complex pathways to global catastrophic outcomes.

Recent work on polycrisis (Lawrence et al., 2024) provides an ambitious conceptual framework that could help with some of these issues. In particular, Lawrence et al. introduce their stress-trigger-crisis model (conceptualised similarly to Figure 3), in which slow stresses destabilise a system, triggers push it out of its basin of attraction, and crises occur when the system is on the cusp of exiting the basin. Each of these factors can interact within and across different systems – providing a conceptual means to understand how various subsystems of the global system interact to generate GCR. Their focus on ‘the realization of chains of cause and effect that cause harms’, rather than on risk, may further help to overcome the second limitation described earlier. However, we note that in the case of GCR, it is in part the focus on risk that allows us to speak with clarity about possible worst-case outcomes – and, if preventing such outcomes is a priority, it is critical that we are able to do so.

5.4. Systems-informed mitigation: leverage points for reducing GCR

With all this knowledge in hand, how best to reduce GCR? Beyond the measures already discussed (Section 5.1), systems thinking has another critical insight to offer us: leverage points (Holland, 1995; Meadows, 2008). Essentially, the possibility of non-linear change in the complex global system also cuts in a positive direction: the right intervention, in the right place, could plausibly have an outsized impact in terms of reducing GCR.

This idea has been increasingly applied in the context of climate change and the transition to a zero-carbon economy. Two framings are those of ‘sensitive intervention points’ (Farmer et al., 2019) and ‘positive tipping points’ (Geels & Ayoub, 2023; Lenton et al., 2022; Otto et al., 2020; Winkelmann et al., 2022). While the latter concept is potentially more restrictive (not all instances of non-linear change are appropriately categorised as ‘tipping’), it has the advantage that there could be ‘early opportunity signals’ (see Footnote 14) that such a transition is possible (Lenton et al., 2022). Critically, tipping points (and leverage points more broadly) could be upward-scaling (Sharpe & Lenton, 2021), allowing substantial change to occur even in highly decentralised situations or to be driven by actors conventionally deemed less powerful.
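The basic logic of a tipping intervention can be illustrated with a toy bistable system (our own sketch, not taken from the cited works): a modest push, applied for long enough, moves the state across the basin boundary, after which internal feedbacks carry it the rest of the way.

```python
# Toy bistable system (ours): state x has stable equilibria at -1 ('bad') and
# +1 ('good'), separated by an unstable equilibrium at 0. A sufficiently strong,
# sufficiently sustained push tips the system into the other basin of attraction.
def simulate(push, push_start=100, push_steps=200, steps=1500, dt=0.05):
    x = -1.0  # start in the 'bad' state
    for t in range(steps):
        forcing = push if push_start <= t < push_start + push_steps else 0.0
        x += dt * (x - x ** 3 + forcing)  # relaxation dynamics plus the intervention
    return x

for push in (0.3, 0.5):
    print(f"push strength {push}: final state = {simulate(push):+.2f}")
# With these illustrative parameters, the weaker push decays back to -1, while
# the slightly stronger push crosses the threshold and the system settles at +1.
```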

We suggest that applying these concepts to GCR reduction – identifying, categorising, and activating potential leverage points for the reduction of GCR – should be a major priority for future research. Although a synthesis across the different key risk drivers is currently lacking, the framing of identifying particularly effective intervention points has already been used in the context of GCR: one recent example highlights access to computational power (‘compute’) as a key point of leverage in AI governance (Sastry et al., 2024).

5.5. Policy-making for systemic GCR reduction

A systemic understanding of GCR provides some important insights for policy-making; here, we briefly summarise a few key points. As discussed in Section 5.1, it is essential to broaden the assessment of risk beyond hazards, and not to conflate the two. Next, attempts to govern GCR must take into account the global system's nature as a CAS (Section 4). This includes recognising deep uncertainty and employing better methods for understanding and anticipating GCR, as well as developing governance institutions that are more dynamic, flexible, and adaptive. A focus on resilience – specifically, resilience to global catastrophic hazards and/or outcomes – will be key. Identifying and activating leverage points (Section 5.4) could be helpful in implementing the required changes. Governments and intergovernmental institutions could set up central risk offices to act as risk owners, monitor contributors to GCR, conduct comprehensive risk assessments, and (democratically) plan responses. Importantly, GCR cannot be left ungoverned (as it currently largely is; Rhodes & Kemp, 2024), and hazard-creating actors (Section 3.1) should be democratically reined in.

6. Conclusion

We face a complicated, complex, and dangerous global risk landscape. Some contributors to GCR are genuinely exogenous to the global system. However, many others are not: this includes some of the most salient hazards, like climate change, nuclear weapons, certain biological threats, and advanced AI. It also includes amplification, vulnerability, and latent risk. Our conceptual framework helps us understand such systemic contributions to GCR and synthesises existing research on GCR with research on emergent global risk. More broadly, we have highlighted some unique challenges that systemic contributions to GCR pose for assessment and mitigation, outlined important future research directions, and (briefly) discussed some implications for policy. While much remains to be done to assess and reduce GCR, we hope that this article will serve as a useful guidepost in such efforts.

Acknowledgements

We thank two reviewers for their helpful comments on an earlier version of this manuscript.

Author contributions

C.W.A. conceptualised and wrote the paper with input from all authors. All authors contributed to the text of the final article.

Funding statement

C.W.A. was supported in part by the Isaac Newton Trust. S.B. and K.M. were both supported by the Grantham Foundation for the Protection of the Environment. S.S.Ó.h. was supported by the Leverhulme Trust, and a donation for research from Baillie Gifford. N.W. acknowledges the Center for Critical Computational Studies (C3S) for funding support.

Competing interests

A.M. is a UKRI Policy Fellow seconded to the Department for Science, Innovation and Technology. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department for Science, Innovation and Technology or the UK Government. C.W.A., S.B., T.H., P.I., L.K, L.M, K.M., S.S.Ó.h., A.S., L.S.S., and N.W. declare no conflicts of interest.

Research transparency and reproducibility

This work generated no new data.

Footnotes

1 While ‘risk’ may be conceived of in a number of different ways (SRA, 2018), here we follow previous work on GCR in focusing on the potential occurrence of certain undesirable outcomes.

2 Although specifics vary, this is the general idea underpinning much of the scholarship on existential risk to humanity (Greaves, 2024). This means that even defining existential risk requires one to make value judgments: how bad are different possible outcomes when compared to human extinction? In this article, we will avoid the term ‘existential risk’, focusing instead on GCR, extinction risk, and collapse risk. For discussions of some of the normative issues from different perspectives, we refer the reader to works by Ord (2020), Cremer and Kemp (2024), and Greaves (2024). For a discussion of the concept of existential risk applied at different scales [e.g. individuals, communities, or states], see Huggel et al. (2022).

3 For an overview of this work and how it has developed over time, see Beard and Hobson (2024) and Jehn et al. (2024).

4 Throughout this article, we use the term ‘emergent’ primarily in this sense: to refer to phenomena that arise at the level of an entire system due to behaviours and interactions occurring at a smaller scale within the system.

5 There are a small number of exceptions: one example is the risk posed by naturally occurring exotic physics scenarios such as vacuum collapse (Tegmark & Bostrom, 2005).

6 For a critical analysis of how the ‘tipping point’ concept has been used in different communities over time, see Van der Hel et al. (2018); for a recent critique of how it has been applied in social contexts, see Milkoreit (2023).

7 We understand a ‘conceptual framework’ to be a set of ideas that connects different concepts in order to facilitate understanding and structure inquiry (Cumming, 2014; Jabareen, 2009; Partelow, 2023), with much of the value of a framework lying in what its goals are and how well they are achieved (Cumming, 2014).

8 Additionally, the concept of exposure refers to the interface (or ‘reaction surface’) between hazard and vulnerability. Exposure is an important concept for natural disasters because those tend to be localised: for example, earthquakes have very different impacts depending on how close they occur to human settlements. However, in the case of global catastrophic risk, there is often a complicated interplay between vulnerability and exposure: for example, global air traffic networks, which could be interpreted as a vulnerability, can cause a localised epidemic (local exposure) to become a global pandemic (global exposure). For the purposes of this article, we take a broad view of vulnerability in which it includes those properties of systems which can transform exposure from local to global.

9 By ‘AI arms race’, we mean the real or perceived race for technological superiority with respect to AI (Baum, 2020; Cave & Ó hÉigeartaigh, 2018; de Neufville & Baum, 2021); this does not necessarily imply military usage.

10 Specifically, it depicts amplification as occurring once the system passes across the top of the hill, which represents an unstable equilibrium state. This is technically an instance of ‘noise-induced’ (Ashwin et al., 2012) or ‘shock-induced’ (Feudel, 2023) tipping. For an accessible explanation of different kinds of tipping in terms of stability landscapes, see Lenton (2023).

11 This is not to suggest that the maintenance of the global system in its current state is normatively desirable. However, a failure of this system to provide for large-scale human subsistence, with no immediate alternatives available, would result in death and suffering on a vast scale.

12 We note that there are other factors that point in the opposite direction: for example, metals would be much more easily accessible in a post-collapse world than earlier in Earth's history because they could be scavenged from the ruins of cities (Baum et al., 2019; Belfield, 2023).

13 For a brief discussion of minimum viable human population sizes in the context of a post-collapse world, see Baum et al. (2019).

14 We note that early warning signals for tipping points have thus far often been better at retrodiction than prediction; see, e.g., Boettiger and Hastings (2012).

References

Alexander, P., Arneth, A., Henry, R., Maire, J., Rabin, S., & Rounsevell, M. D. (2023). High energy and fertilizer prices are more damaging than food export curtailment from Ukraine and Russia for food prices, health and the environment. Nature Food, 4(1), 84–95. doi:10.1038/s43016-022-00659-9
Allen, C. R., Angeler, D. G., Garmestani, A. S., Gunderson, L. H., & Holling, C. S. (2014). Panarchy: Theory and application. Ecosystems, 17(4), 578–589. doi:10.1007/s10021-013-9744-2
Amadae, S. M. (2015). Prisoners of reason: Game theory and neoliberal political economy. Cambridge University Press. doi:10.1017/CBO9781107565258
Armstrong McKay, D. I., Staal, A., Abrams, J. F., Winkelmann, R., Sakschewski, B., Loriani, S., Fetzer, I., Cornell, S. E., Rockström, J., & Lenton, T. M. (2022). Exceeding 1.5°C global warming could trigger multiple climate tipping points. Science, 377(6611), eabn7950. doi:10.1126/science.abn7950
Arnscheidt, C. W., & Rothman, D. H. (2022). Rate-induced collapse in evolutionary systems. Journal of the Royal Society Interface, 19(191), 20220182. doi:10.1098/rsif.2022.0182
Arthur, W. B. (2021). Foundations of complexity economics. Nature Reviews Physics, 3(2), 136–145. doi:10.1038/s42254-020-00273-3
Ashwin, P., Wieczorek, S., Vitolo, R., & Cox, P. (2012). Tipping points in open systems: Bifurcation, noise-induced and rate-dependent examples in the climate system. Philosophical Transactions of the Royal Society A, 370(1962), 1166–1184. doi:10.1098/rsta.2011.0306
Avin, S., Wintle, B. C., Weitzdörfer, J., Ó hÉigeartaigh, S. S., Sutherland, W. J., & Rees, M. J. (2018). Classifying global catastrophic risks. Futures, 102, 20–26. doi:10.1016/j.futures.2018.02.001
Baker, R. E., Mahmud, A. S., Miller, I. F., Rajeev, M., Rasambainarivo, F., Rice, B. L., Takahashi, S., Tatem, A. J., Wagner, C. E., Wang, L.-F., Wesolowski, A., & Metcalf, C. J. E. (2022). Infectious disease in an era of global change. Nature Reviews Microbiology, 20(4), 193–205. doi:10.1038/s41579-021-00639-z
Barnosky, A. D., Hadly, E. A., Bascompte, J., Berlow, E. L., Brown, J. H., Fortelius, M., Getz, W. M., Harte, J., Hastings, A., Marquet, P. A., Martinez, N. D., Mooers, A., Roopnarine, P., Vermeij, G., Williams, J. W., Gillespie, R., Kitzes, J., Marshall, C., Matzke, N., & Smith, A. B. (2012). Approaching a state shift in Earth's biosphere. Nature, 486(7401), 52–58. doi:10.1038/nature11018
Barrett, S. (2016). Collective action to avoid catastrophe: When countries succeed, when they fail, and why. Global Policy, 7(S1), 45–55. doi:10.1111/1758-5899.12324
Baum, S., & Barrett, A. (2018). A model for the impacts of nuclear war. Global Catastrophic Risk Institute Working Paper 18-2. https://gcrinstitute.org/a-model-for-the-impacts-of-nuclear-war/
Baum, S. D. (2020). Medium-term artificial intelligence and society. Information, 11(6), 290. doi:10.3390/info11060290
Baum, S. D. (2023). Assessing natural global catastrophic risks. Natural Hazards, 115(3), 2699–2719. doi:10.1007/s11069-022-05660-w
Baum, S. D., Armstrong, S., Ekenstedt, T., Häggström, O., Hanson, R., Kuhlemann, K., Maas, M. M., Miller, J. D., Salmela, M., Sandberg, A., Sotala, K., Torres, P., Turchin, A., & Yampolskiy, R. V. (2019). Long-term trajectories of human civilization. Foresight, 21(1), 53–83. doi:10.1108/FS-04-2018-0037
Beard, S., & Hobson, T. (2024). An anthology of global risk. Open Book Publishers. doi:10.11647/OBP.0360
Beard, S., Holt, L., Tzachor, A., Kemp, L., Avin, S., Torres, P., & Belfield, H. (2021). Assessing climate change's contribution to global catastrophic risk. Futures, 127, 102673. doi:10.1016/j.futures.2020.102673
Beard, S., Rowe, T., & Fox, J. (2020). An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards. Futures, 115, 102469. doi:10.1016/j.futures.2019.102469
Belfield, H. (2023). Collapse, recovery, and existential risk. In Centeno, M. A., Callahan, P. W., Larcey, P. A., & Patterson, T. S. (Eds.), How worlds collapse: What history, systems, and complexity can teach us about our modern world and fragile future (pp. 61–92). Routledge.
Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., Choi, Y., Fox, P., Garfinkel, B., Goldfarb, D., Heidari, H., Ho, A., Kapoor, S., Khalatbari, L., Longpre, S., Manning, S., Mavroudis, V., Mazeika, M., Michael, J., & Zeng, Y. (2025). International AI Safety Report (DSIT 2025/001). https://www.gov.uk/government/publications/international-ai-safety-report-2025
Blake, J. S., & Gilman, N. (2024). Children of a modest star: Planetary thinking for an age of crises. Stanford University Press.
Boettiger, C., & Hastings, A. (2012). Early warning signals and the prosecutor's fallacy. Proceedings of the Royal Society B: Biological Sciences, 279(1748), 4734–4739. doi:10.1098/rspb.2012.2085
Bostrom, N., & Cirkovic, M. M. (2008). Global catastrophic risks. Oxford University Press. doi:10.1093/oso/9780198570509.001.0001
Brandt, J., Kreps, S., Meserole, C., Singh, P., & Sisson, M. W. (2022). Succeeding in the AI competition with China [Report]. Brookings Institution. https://www.brookings.edu/articles/succeeding-in-the-ai-competition-with-china-a-strategy-for-action/
Brozović, D. (2023). Societal collapse: A literature review. Futures, 145, 103075. doi:10.1016/j.futures.2022.103075
Butzer, K. W. (2012). Collapse, environment, and society. Proceedings of the National Academy of Sciences, 109(10), 3632–3639. doi:10.1073/pnas.1114845109
Carbon Majors. (2024). The Carbon Majors database: Launch report [Report]. Carbon Majors. https://carbonmajors.org/briefing/The-Carbon-Majors-Database-26913
Cassidy, M., & Mani, L. (2022). Prepare now for big eruptions. Nature, 608(7923), 469–471. doi:10.1038/d41586-022-02177-x
Cave, S., & Ó hÉigeartaigh, S. S. (2018). An AI race for strategic advantage: Rhetoric and risks. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 36–40. doi:10.1145/3278721.327878
Centeno, M. A., Callahan, P. W., Larcey, P. A., & Patterson, T. S. (2023). How worlds collapse: What history, systems, and complexity can teach us about our modern world and fragile future. Routledge.
Centeno, M. A., Nag, M., Patterson, T. S., Shaver, A., & Windawi, A. J. (2015). The emergence of global systemic risk. Annual Review of Sociology, 41(1), 65–85. doi:10.1146/annurev-soc-073014-112317
Centeno, M., Callahan, P., Larcey, P., & Patterson, T. (2022). Globalization as adaptive complexity: Learning from failure. In Izdebski, A., Haldon, J., & Filipkowski, P. (Eds.), Perspectives on public policy in societal-environmental crises: What the future needs from history (pp. 59–74). Springer International Publishing. doi:10.1007/978-3-030-94137-6_6
Chapman, M., Klassen, S., Kreitzman, M., Semmelink, A., Sharp, K., Singh, G., & Chan, K. M. (2017). 5 key challenges and solutions for governing complex adaptive (food) systems. Sustainability, 9(9), 1594. doi:10.3390/su9091594
Chase-Dunn, C., & Grimes, P. (1995). World-systems analysis. Annual Review of Sociology, 21(1), 387–417. doi:10.1146/annurev.so.21.080195.002131
Clapp, J. (2023). Concentration and crises: Exploring the deep roots of vulnerability in the global industrial food system. The Journal of Peasant Studies, 50(1), 1–25. doi:10.1080/03066150.2022.2129013
Clapp, J., & Moseley, W. G. (2020). This food crisis is different: COVID-19 and the fragility of the neoliberal food security order. The Journal of Peasant Studies, 47(7), 1393–1417. doi:10.1080/03066150.2020.1823838
Cotton-Barratt, O., Daniel, M., & Sandberg, A. (2020). Defence in depth against human extinction: Prevention, response, resilience, and why they all matter. Global Policy, 11(3), 271–282. doi:10.1111/1758-5899.12786
Cotton-Barratt, O., Farquhar, S., Halstead, J., Schubert, S., & Snyder-Beattie, A. (2016). Global catastrophic risks [Report]. Global Challenges Foundation. https://globalchallenges.org/library/global-catastrophic-risks-2016/
Cremer, C. Z., & Kemp, L. (2024). Democratising risk: In search of a methodology to study existential risk. In Beard, S. J., & Hobson, T. (Eds.), An anthology of global risk (pp. 75–124). Open Book Publishers.
Cremer, C. Z., & Whittlestone, J. (2021). Artificial canaries: Early warning signs for anticipatory and democratic governance of AI. International Journal of Interactive Multimedia and Artificial Intelligence, 6(5), 100–109. doi:10.9781/ijimai.2021.02.011
Cumming, G. S. (2014). Theoretical frameworks for the analysis of social–ecological systems. In Sakai, S., & Umetsu, C. (Eds.), Social-ecological systems in transition (pp. 3–24). Springer. doi:10.1007/978-4-431-54910-9_1
Cumming, G. S., & Peterson, G. D. (2017). Unifying research on social–ecological resilience and collapse. Trends in Ecology and Evolution, 32(9), 695–713. doi:10.1016/j.tree.2017.06.014
Currie, A. (2019). Existential risk, creativity & well-adapted science. Studies in History and Philosophy of Science Part A, 76, 39–48. doi:10.1016/j.shpsa.2018.09.008
de Neufville, R., & Baum, S. D. (2021). Collective action on artificial intelligence: A primer and review. Technology in Society, 66, 101649. doi:10.1016/j.techsoc.2021.101649
Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science, 302(5652), 1907–1912. doi:10.1126/science.1091015
D'Odorico, P., Davis, K. F., Rosa, L., Carr, J. A., Chiarelli, D., Dell'Angelo, J., Gephart, J., MacDonald, G. K., Seekell, D. A., Suweis, S., & Rulli, M. C. (2018). The global food-energy-water nexus. Reviews of Geophysics, 56(3), 456–531. doi:10.1029/2017RG000591
Duit, A., & Galaz, V. (2008). Governance and complexity—Emerging issues for governance theory. Governance, 21(3), 311–335. doi:10.1111/j.1468-0491.2008.00402.x
Dziczek, K. (2022). Why the automotive chip crisis isn't over (yet). Chicago Fed Letter, 473. doi:10.21033/cfl-2022-473
EBRC. (2023). Security considerations at the intersection of engineering biology and artificial intelligence [Report]. Engineering Biology Research Consortium. doi:10.25498/E4J017
Ellis, E. C. (2015). Ecology in an anthropogenic biosphere. Ecological Monographs, 85(3), 287–331. doi:10.1890/14-2274.1
Evenson, R. E., & Gollin, D. (2003). Assessing the impact of the Green Revolution, 1960 to 2000. Science, 300(5620), 758–762. doi:10.1126/science.1078710
Farmer, J. D., Hepburn, C., Ives, M. C., Hale, T., Wetzer, T., Mealy, P., Rafaty, R., Srivastav, S., & Way, R. (2019). Sensitive intervention points in the post-carbon transition. Science, 364(6436), 132–134. doi:10.1126/science.aaw7287
Feudel, U. (2023). Rate-induced tipping in ecosystems and climate: The role of unstable states, basin boundaries and transient dynamics. Nonlinear Processes in Geophysics, 30(4), 481–502. doi:10.5194/npg-30-481-2023
Fisher, L., & Sandberg, A. (2022). A safe governance space for humanity: Necessary conditions for the governance of Global Catastrophic Risks. Global Policy, 13(5), 792–807. doi:10.1111/1758-5899.13030
Folke, C., Carpenter, S. R., Walker, B., Scheffer, M., Chapin, T., & Rockström, J. (2010). Resilience thinking: Integrating resilience, adaptability and transformability. Ecology and Society, 15(4).
Folke, C., Carpenter, S., Walker, B., Scheffer, M., Elmqvist, T., Gunderson, L., & Holling, C. S. (2004). Regime shifts, resilience, and biodiversity in ecosystem management. Annual Review of Ecology, Evolution, and Systematics, 35(1), 557–581. doi:10.1146/annurev.ecolsys.35.021103.105711
Folke, C., Polasky, S., Rockström, J., Galaz, V., Westley, F., Lamont, M., Scheffer, M., Österblom, H., Carpenter, S. R., Chapin, F. S., Seto, K. C., Weber, E. U., Crona, B. I., Daily, G. C., Dasgupta, P., Gaffney, O., Gordon, L. J., Hoff, H., Levin, S. A., & Walker, B. H. (2021). Our future in the Anthropocene biosphere. Ambio, 50(4), 834–869. doi:10.1007/s13280-021-01544-8
Foti, N. J., Pauls, S., & Rockmore, D. N. (2013). Stability of the world trade web over time – an extinction analysis. Journal of Economic Dynamics and Control, 37(9), 1889–1910. doi:10.1016/j.jedc.2013.04.009
Frank, A. B., Collins, M. G., Levin, S. A., Lo, A. W., Ramo, J., Dieckmann, U., Kremenyuk, V., Kryazhimskiy, A., Linnerooth-Bayer, J., Ramalingam, B., Roy, J. S., Saari, D. G., Thurner, S., & von Winterfeldt, D. (2014). Dealing with femtorisks in international relations. Proceedings of the National Academy of Sciences, 111(49), 17356–17362. doi:10.1073/pnas.1400229111
Frankopan, P. (2023). The earth transformed: An untold history. Bloomsbury Publishing.
Galaz, V., Centeno, M. A., Callahan, P. W., Causevic, A., Patterson, T., Brass, I., Baum, S., Farber, D., Fischer, J., Garcia, D., McPhearson, T., Jimenez, D., King, B., Larcey, P., & Levy, K. (2021). Artificial intelligence, systemic risks, and sustainability. Technology in Society, 67, 101741. doi:10.1016/j.techsoc.2021.101741
Geels, F. W., & Ayoub, M. (2023). A socio-technical transition perspective on positive tipping points in climate change mitigation: Analysing seven interacting feedback loops in offshore wind and electric vehicles acceleration. Technological Forecasting and Social Change, 193, 122639. doi:10.1016/j.techfore.2023.122639
Goldin, I., & Mariathasan, M. (2014). The butterfly defect: How globalization creates systemic risks, and what to do about it. Princeton University Press.
Goldin, I., & Vogel, T. (2010). Global governance and systemic risk in the 21st century: Lessons from the financial crisis. Global Policy, 1(1), 4–15. doi:10.1111/j.1758-5899.2009.00011.x
Greaves, H. (2024). Concepts of existential catastrophe. The Monist, 107(2), 109–129. doi:10.1093/monist/onae002
Gruetzemacher, R., & Whittlestone, J. (2022). The transformative potential of artificial intelligence. Futures, 135, 102884. doi:10.1016/j.futures.2021.102884
Gunderson, L. H., Allen, C. R., & Garmestani, A. (2022). Applied panarchy: Applications and diffusion across disciplines. Island Press.
Gunderson, L., & Holling, C. S. (Eds.). (2002). Panarchy: Understanding transformations in human and natural systems. Island Press.
Heede, R. (2014). Tracing anthropogenic carbon dioxide and methane emissions to fossil fuel and cement producers, 1854–2010. Climatic Change, 122(1–2), 229–241. doi:10.1007/s10584-013-0986-y
Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497(7447), 51–59. doi:10.1038/nature12047
Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks. arXiv preprint. https://arxiv.org/abs/2306.12001
Herre, B., Rosado, P., Roser, M., & Hasell, J. (2024). Nuclear weapons. Our World in Data.
Holland, J. H. (1995). Hidden order: How adaptation builds complexity. Helix Books.
Holling, C. S. (1973). Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1), 1–23. doi:10.1146/annurev.es.04.110173.000245
Holling, C. S., & Meffe, G. K. (1996). Command and control and the pathology of natural resource management. Conservation Biology, 10(2), 328–337. doi:10.1046/j.1523-1739.1996.10020328.x
Homer-Dixon, T. (2006). The upside of down: Catastrophe, creativity, and the renewal of civilization. Island Press.
Homer-Dixon, T., Walker, B., Biggs, R., Crépin, A.-S., Folke, C., Lambin, E. F., Peterson, G. D., Rockström, J., Scheffer, M., Steffen, W., & Troell, M. (2015). Synchronous failure: The emerging causal architecture of global crisis. Ecology and Society, 20(3). doi:10.5751/ES-07681-200306
Huggel, C., Bouwer, L. M., Juhola, S., Mechler, R., Muccione, V., Orlove, B., & Wallimann-Helmer, I. (2022). The existential risk space of climate change. Climatic Change, 174(1), 8. doi:10.1007/s10584-022-03430-y
IPCC. (2021). Climate change 2021: The physical science basis. Contribution of Working Group I to the sixth assessment report of the Intergovernmental Panel on Climate Change (Masson-Delmotte, V., Zhai, P., Pirani, A., Connors, S. L., Péan, C., Berger, S., Caud, N., Chen, Y., Goldfarb, L., Gomis, M. I., Huang, M., Leitzell, K., Lonnoy, E., Matthews, J. B. R., Maycock, T. K., Waterfield, T., Yelekçi, O., Yu, R., & Zhou, B., Eds.). Cambridge University Press. doi:10.1017/9781009157896
Jabareen, Y. (2009). Building a conceptual framework: Philosophy, definitions, and procedure. International Journal of Qualitative Methods, 8(4), 49–62. doi:10.1177/16094069090080040
Jacobsen, A. (2024). Nuclear war: A scenario. Dutton.
Jehn, F. U., Engler, J.-O., Arnscheidt, C. W., Wache, M., Ilin, E., Cook, L., Sundaram, L. S., Hanusch, F., & Kemp, L. (2024). The state of global catastrophic risk research: A bibliometric review. doi:10.31223/X52X4V
Jones, B. A., Grace, D., Kock, R., Alonso, S., Rushton, J., Said, M. Y., McKeever, D., Mutua, F., Young, J., McDermott, J., & Pfeiffer, D. U. (2013). Zoonosis emergence linked to agricultural intensification and environmental change. Proceedings of the National Academy of Sciences, 110(21), 8399–8404. doi:10.1073/pnas.1208059110
Jones, N. (2023). Beyond “error and terror”: Global justice and global catastrophic risk. In Beard, S. Y., Rees, M., Richards, C., & Rojas, C. R. (Eds.), The era of global risk (pp. 79–100). Open Book Publishers.
Juhola, S., Filatova, T., Hochrainer-Stigler, S., Mechler, R., Scheffran, J., & Schweizer, P.-J. (2022). Social tipping points and adaptation limits in the context of systemic risk: Concepts, models and governance. Frontiers in Climate, 4, 1009234. doi:10.3389/fclim.2022.1009234
Kasirzadeh, A. (2025). Two types of AI existential risk: Decisive and accumulative. Philosophical Studies. doi:10.1007/s11098-025-02301-3
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J. X., & Ratick, S. (1988). The social amplification of risk: A conceptual framework. Risk Analysis, 8(2), 177–187. doi:10.1111/j.1539-6924.1988.tb01168.x
Kasperson, R. E., Webler, T., Ram, B., & Sutton, J. (2022). The social amplification of risk framework: New perspectives. Risk Analysis, 42(7), 1367–1380. doi:10.1111/risa.13926
Kaufman, G. G., & Scott, K. E. (2003). What is systemic risk, and do bank regulators retard or contribute to it? The Independent Review, 7(3), 371–391.
Kemp, L. (2021). Agents of Doom: Who is creating the apocalypse and why. BBC Future. https://www.bbc.com/future/article/20211014-agents-of-doom-who-is-hastening-the-apocalypse-and-why
Kemp, L., & Cline, E. H. (2022). Systemic risk and resilience: The Bronze Age collapse and recovery. In Izdebski, A., Haldon, J., & Filipkowski, P. (Eds.), Perspectives on public policy in societal-environmental crises: What the future needs from history (pp. 207–223). Springer International Publishing.
Kemp, L., Xu, C., Depledge, J., Ebi, K. L., Gibbins, G., Kohler, T. A., Rockström, J., Scheffer, M., Schellnhuber, H. J., Steffen, W., & Lenton, T. M. (2022). Climate endgame: Exploring catastrophic climate change scenarios. Proceedings of the National Academy of Sciences, 119(34), e2108146119. doi:10.1073/pnas.2108146119
Keys, P. W., Galaz, V., Dyer, M., Matthews, N., Folke, C., Nyström, M., & Cornell, S. E. (2019). Anthropocene risk. Nature Sustainability, 2(8), 667–673. doi:10.1038/s41893-019-0327-x
Kreienkamp, J., & Pegram, T. (2020). Governing complexity: Design principles for the governance of complex global catastrophic risks. International Studies Review, 23(3), 779–806. doi:10.1093/isr/viaa074
Krönke, J., Wunderling, N., Winkelmann, R., Staal, A., Stumpf, B., Tuinenburg, O. A., & Donges, J. F. (2020). Dynamics of tipping cascades on complex networks. Physical Review E, 101(4), 042311. doi:10.1103/PhysRevE.101.042311
Kruczkiewicz, A., Klopp, J., Fisher, J., Mason, S., McClain, S., Sheekh, N. M., Moss, R., Parks, R., & Braneon, C. (2021). Compound risks and complex emergencies require new approaches to preparedness. Proceedings of the National Academy of Sciences, 118(19), e2106795118. doi:10.1073/pnas.2106795118
Kuhlemann, K. (2018). Complexity, creeping normalcy and conceit: Sexy and unsexy catastrophic risks. Foresight, 21(1), 35–52. doi:10.1108/FS-05-2018-0047
Kulveit, J., Douglas, R., Ammann, N., Turan, D., Krueger, D., & Duvenaud, D. (2025). Gradual disempowerment: Systemic existential risks from incremental AI development. arXiv preprint. https://arxiv.org/abs/2501.16946
Lawrence, M., Homer-Dixon, T., Janzwood, S., Rockström, J., Renn, O., & Donges, J. F. (2024). Global polycrisis: The causal mechanisms of crisis entanglement. Global Sustainability, 7, 136. doi:10.1017/sus.2024.1
Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin.
Lenton, T. M. (2023). Climate change and tipping points in historical collapse. In Centeno, M. A., Callahan, P. W., Larcey, P. A., & Patterson, T. S. (Eds.), How worlds collapse (pp. 261–281). Routledge.
Lenton, T. M., Benson, S., Smith, T., Ewer, T., Lanel, V., Petykowski, E., Powell, T. W., Abrams, J. F., Blomsma, F., & Sharpe, S. (2022). Operationalising positive tipping points towards global sustainability. Global Sustainability, 5, e1. doi:10.1017/sus.2021.30
Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., & Schellnhuber, H. J. (2008). Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences, 105(6), 1786–1793. doi:10.1073/pnas.0705414105
Lenton, T. M., & Scheffer, M. (2024). Spread of the cycles: A feedback perspective on the Anthropocene. Philosophical Transactions of the Royal Society B, 379(1893), 20220254. doi:10.1098/rstb.2022.0254
Levin, S. A. (1998). Ecosystems and the biosphere as complex adaptive systems. Ecosystems, 1(5), 431–436. doi:10.1007/s100219900037
Levin, S., Xepapadeas, T., Crépin, A.-S., Norberg, J., De Zeeuw, A., Folke, C., Hughes, T., Arrow, K., Barrett, S., Daily, G., Ehrlich, P., Kautsky, N., Mäler, K.-G., Polasky, S., Troell, M., Vincent, J. R., & Walker, B. (2013). Social-ecological systems as complex adaptive systems: Modeling and policy implications. Environment and Development Economics, 18(2), 111–132. doi:10.1017/S1355770X12000460
Liu, H.-Y., Lauta, K. C., & Maas, M. M. (2018). Governing Boring Apocalypses: A new typology of existential vulnerabilities and exposures for existential risk research. Futures, 102, 6–19. doi:10.1016/j.futures.2018.04.009
Maas, M. M., Lucero-Matteucci, K., & Cooke, D. (2023). Military artificial intelligence as a contributor to global catastrophic risk. In Beard, S., Rees, M., Richards, C., & Rojas, C. R. (Eds.), The era of global risk (pp. 237–284). Open Book Publishers. doi:10.11647/OBP.0336.10
Manheim, D. (2020). The fragile world hypothesis: Complexity, fragility, and systemic existential risk. Futures, 122, 102570. doi:10.1016/j.futures.2020.102570
Mani, L., Erwin, D., & Johnson, L. (2023). Natural global catastrophic risks. In Beard, S., Rees, M., Richards, C., & Rojas, C. R. (Eds.), The era of global risk (pp. 123–146). Open Book Publishers. doi:10.11647/OBP.0336.06
Mani, L., Tzachor, A., & Cole, P. (2021). Global catastrophic risk from lower magnitude volcanic eruptions. Nature Communications, 12(1), 15. doi:10.1038/s41467-021-25021-8CrossRefGoogle ScholarPubMed
Marcoci, A., Wilkinson, D. P., Abatayo, A., Baskin, E., Berkman, H., Buchanan, E. M., Capitán, S., Capitán, T., Chan, G., Cheng, K. J. G., Coupé, T., Dryhurst, S., Duan, J., Edlund, J. E., Errington, T. M., Fedor, A., Fidler, F., Field, J. G., Fox, N., Fraser, H., and van der Linden, S. (2025). Predicting the replicability of social and behavioural science claims in COVID-19 preprints Nature Human Behaviour 9, 287304. doi:10.1038/s41562-024-01961-1CrossRefGoogle ScholarPubMed
May, R. M. (1977). Thresholds and breakpoints in ecosystems with a multiplicity of stable states. Nature, 269(5628), 471477. doi:10.1038/269471a0CrossRefGoogle Scholar
May, R. M., Levin, S. A., & Sugihara, G. (2008). Ecology for bankers. Nature, 451(7181), 893894. doi:10.1038/451893aCrossRefGoogle ScholarPubMed
Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.Google Scholar
Milkoreit, M. (2023). Social tipping points everywhere?—Patterns and risks of overuse. Wiley Interdisciplinary Reviews: Climate Change, 14(2), e813. doi:10.1002/wcc.813Google Scholar
Miller, J. H., & Page, S. E. (2009). Complex adaptive systems: An introduction to computational models of social life. Princeton University Press.CrossRefGoogle Scholar
Millett, P., & Snyder-Beattie, A. (2017). Existential risk and cost-effective biosecurity. Health Security, 15(4), 373383. doi:10.1089/hs.2017.0028CrossRefGoogle ScholarPubMed
Moersdorf, J., Rivers, M., Denkenberger, D., Breuer, L., & Jehn, F. U. (2023). The fragile state of industrial agriculture: Estimating crop yield reductions in a global catastrophic infrastructure loss scenario. Global Challenges, 8(1), 2300206. doi:10.1002/gch2.202300206CrossRefGoogle Scholar
Morin, E., & Kern, A. B. (1999). Homeland Earth: A manifesto for the new millenium. Hampton Press.Google Scholar
Musunuri, S., Sandbrink, J. B., Monrad, J. T., Palmer, M. J., & Koblentz, G. D. (2021). Rapid proliferation of pandemic research: Implications for dual-use risks. mBio, 12(5), 101128. doi:10.1128/mbio.01864-21CrossRefGoogle ScholarPubMed
Newman, M. (2018). Networks. Oxford University Press. doi:10.1093/oso/9780198805090.001.0001CrossRefGoogle Scholar
Nyström, M., Jouffray, J.-B., Norström, A. V., Crona, B., Søgaard Jørgensen, P., Carpenter, S. R., Bodin, Ö., Galaz, V., & Folke, C. (2019). Anatomy and resilience of the global production ecosystem. Nature, 575(7781), 98108. doi:10.1038/s41586-019-1712-3CrossRefGoogle ScholarPubMed
OECD. (2003). Emerging risks in the 21st century. doi:10.1787/9789264101227-enCrossRefGoogle Scholar
Ord, T. (2020). The precipice: Existential risk and the future of humanity. Hachette Books.Google Scholar
Oreskes, N., & Conway, E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Publishing.Google Scholar
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge University Press.CrossRefGoogle Scholar
Otto, I. M., Donges, J. F., Cremades, R., Bhowmik, A., Hewitt, R. J., Lucht, W., Rockström, J., Allerberger, F., McCaffrey, M., Doe, S. S., Lenferna, A., Morán, N., van Vuuren, D. P., & Schellnhuber, H. J. (2020). Social tipping dynamics for stabilizing Earth's climate by 2050. Proceedings of the National Academy of Sciences, 117(5), 23542365. doi:10.1073/pnas.1900577117CrossRefGoogle ScholarPubMed
Oughton, E. J., Usher, W., Tyler, P., & Hall, J. W. (2018). Infrastructure as a complex adaptive system. Complexity, 2018(1), 3427826. doi:10.1155/2018/3427826CrossRefGoogle Scholar
Parker, A., & Irvine, P. J. (2018). The risk of termination shock from solar geoengineering. Earth's Future, 6(3), 456467. doi:10.1002/2017EF000735CrossRefGoogle Scholar
Partelow, S. (2023). What is a framework? Understanding their purpose, value, development and use. Journal of Environmental Studies and Sciences, 13(3), 510519. doi:10.1007/s13412-023-00833-wCrossRefGoogle Scholar
Perrow, C. (1999). Normal accidents: Living with high risk technologies. Princeton University Press.Google Scholar
Pescaroli, G., & Alexander, D. (2018). Understanding compound, interconnected, interacting, and cascading risks: A holistic framework. Risk Analysis, 38(11), 22452257. doi:10.1111/risa.13128CrossRefGoogle ScholarPubMed
Posner, R. A. (2004). Catastrophe: Risk and response. Oxford University Press.CrossRefGoogle Scholar
Preiser, R., Biggs, R., De Vos, A., & Folke, C. (2018). Social-ecological systems as complex adaptive systems. Ecology and Society, 23(4),46. doi:10.5751/ES-10558-230446CrossRefGoogle Scholar
Puma, M. J., Bose, S., Chon, S. Y., & Cook, B. I. (2015). Assessing the evolving fragility of the global food system. Environmental Research Letters, 10(2), 024007. doi:10.1088/1748-9326/10/2/024007CrossRefGoogle Scholar
Rees, M. J. (2004). Our final century: A scientist's warning: How terror, error, and environmental disaster threaten humankind's future in this century – on earth and beyond. Arrow.Google Scholar
Renn, O., Laubichler, M., Lucas, K., Kröger, W., Schanze, J., Scholz, R. W., & Schweizer, P.-J. (2022). Systemic risks from different perspectives. Risk Analysis, 42(9), 19021920. doi:10.1111/risa.13657CrossRefGoogle ScholarPubMed
Renn, O., Lucas, K., Haas, A., & Jaeger, C. (2017). Things are different today: The challenge of global systemic risks. Journal of Risk Research, 22(4), 401415. doi:10.1080/13669877.2017.1409252CrossRefGoogle Scholar
Rhodes, C., & Kemp, L. (2024). The cartography of global catastrophic governance. In Beard, S., and Hobson, T. (Eds.), An anthology of global risk (pp. 531585). Open Book Publishers. doi:10.11647/obp.0360.19CrossRefGoogle Scholar
Richards, C., Lupton, R., & Allwood, J. M. (2021). Re-framing the threat of global warming: An empirical causal loop diagram of climate change, food insecurity and societal collapse. Climatic Change, 164(3), 119. doi:10.1007/s10584-021-02957-wCrossRefGoogle Scholar
Richardson, K., Steffen, W., Lucht, W., Bendtsen, J., Cornell, S. E., Donges, J. F., Drüke, M., Fetzer, I., Bala, G., von Bloh, W., Feulner, G., Fiedler, S., Gerten, D., Gleeson, T., Hofmann, M., Huiskamp, W., Kummu, M., Mohan, C., Nogués-Bravo, D., and Rockström, J. (2023). Earth beyond six of nine planetary boundaries. Science Advances, 9(37), eadh2458. doi:10.1126/sciadv.adh2458CrossRefGoogle ScholarPubMed
Robock, A. (2010). Nuclear winter. Wiley Interdisciplinary Reviews: Climate Change, 1(3), 418427. doi:10.1002/wcc.45Google Scholar
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.Google Scholar
Russon, M.-A. (2021). The cost of the Suez Canal blockage. BBC. https://www.bbc.com/news/business-56559073Google Scholar
Sagan, C. (1983). Nuclear war and climatic catastrophe: Some policy implications. Foreign Affairs, 62(2), 257292. https://www.foreignaffairs.com/articles/1983-12-01/nuclear-war-and-climatic-catastrophe-some-policy-implicationsCrossRefGoogle Scholar
Sastry, G., Heim, L., Belfield, H., Anderljung, M., Brundage, M., Hazell, J., O'Keefe, C., Hadfield, G. K., Ngo, R., Pilz, K., Gor, G., Bluemke, E., Shoker, S., Egan, J., Trager, R. F., Avin, S., Weller, A., Bengio, Y., and Coyle, D. (2024). Computing power and the governance of artificial intelligence. arXiv Preprint arXiv: https://arxiv.org/abs/2402.08797.Google Scholar
Scheffer, M., Carpenter, S., Foley, J. A., Folke, C., & Walker, B. (2001). Catastrophic shifts in ecosystems. Nature, 413(6856), 591596. doi:10.1038/35098000CrossRefGoogle ScholarPubMed
Scheffer, M., Carpenter, S. R., Lenton, T. M., Bascompte, J., Brock, W., Dakos, V., Van de Koppel, J., Van de Leemput, I. A., Levin, S. A., & Van Nes, E. H. (2012). Anticipating critical transitions. Science, 338(6105), 344348. doi:10.1126/science.1225244CrossRefGoogle ScholarPubMed
Schelling, T. C. (1978). Micromotives and macrobehavior. WW Norton & Company.Google Scholar
Schoch-Spana, M., Cicero, A., Adalja, A., Gronvall, G., Kirk Sell, T., Meyer, D., Nuzzo, J. B., Ravi, S., Shearer, M. P., Toner, E., Watson, C., Watson, M., & Inglesby, T. (2017). Global catastrophic biological risks: Toward a working definition. Health Security, 15(4), 323328. doi:10.1089/hs.2017.0038CrossRefGoogle Scholar
Schweizer, P.-J. (2021). Systemic risks – Concepts and challenges for risk governance. Journal of Risk Research, 24(1), 7893. doi:10.1080/13669877.2019.1687574CrossRefGoogle Scholar
Schweizer, P.-J., & Juhola, S. (2024). Navigating systemic risks: Governance of and for systemic risks. Global Sustainability, 7, e38. doi:10.1017/sus.2024.30CrossRefGoogle Scholar
Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.Google Scholar
Scouras, J. (2019). Nuclear war as a global catastrophic risk. Journal of Benefit-Cost Analysis, 10(2), 274295. doi:10.1017/bca.2019.16CrossRefGoogle Scholar
Seto, K. C., Davis, S. J., Mitchell, R. B., Stokes, E. C., Unruh, G., & Ürge-Vorsatz, D. (2016). Carbon lock-in: Types, causes, and policy implications. Annual Review of Environment and Resources, 41(1), 425452. doi:10.1146/annurev-environ-110615-085934CrossRefGoogle Scholar
Sharpe, S., & Lenton, T. M. (2021). Upward-scaling tipping cascades to meet climate goals: Plausible grounds for hope. Climate Policy, 21(4), 421433. doi:10.1080/14693062.2020.1870097CrossRefGoogle Scholar
Siegenfeld, A. F., & Bar-Yam, Y. (2020). An introduction to complex systems science and its applications. Complexity, 2020, 116. doi:10.1155/2020/6105872CrossRefGoogle Scholar
Sillmann, J., Christensen, I., Hochrainer-Stigler, S., Huang-Lachmann, J., Juhola, S., Kornhuber, K., Mahecha, M., Mechler, R., Reichstein, M., Ruane, A. C., Schweizer, P.-J., & Williams, S. (2022). ISC-UNDRR-RISK KAN Briefing note on systemic risk. International Science Council. doi:10.24948/2022.01Google Scholar
Singh, A. (2021). Increasing global resilience to systemic risk: Emerging lessons from the COVID-19 pandemic [Report]. United Nations Office for Disaster Risk Reduction https://www.undrr.org/publication/increasing-global-resilience-systemic-risk-emerging-lessons-covid-19-pandemic.Google Scholar
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280285. doi:10.1126/science.3563507CrossRefGoogle ScholarPubMed
Smil, V. (2004). Enriching the earth: Fritz Haber, Carl Bosch, and the transformation of world food production. MIT Press.Google Scholar
Smil, V. (2022). How the world really works: A scientist's guide to our past, present and future. Penguin UK.Google Scholar
Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68.Google ScholarPubMed
Søgaard Jørgensen, P., Jansen, R. E., Avila Ortega, D. I., Wang-Erlandsson, L., Donges, J. F., Österblom, H., Olsson, P., Nyström, M., Lade, S. J., Hahn, T., Folke, C., Peterson, G. D., & Crépin, A.-S. (2024). Evolution of the polycrisis: Anthropocene traps that challenge global sustainability. Philosophical Transactions of the Royal Society B, 379(1893), 20220261. doi:10.1098/rstb.2022.0261CrossRefGoogle ScholarPubMed
Spaiser, V., Juhola, S., Constantino, S. M., Guo, W., Watson, T., Sillmann, J., Craparo, A., Basel, A., Bruun, J. T., Krishnamurthy, K., Scheffran, J., Pinho, P., Okpara, U. T., Donges, J. F., Bhowmik, A., Yasseri, T., Safra de Campos, R., Cumming, G. S., Chenet, H., and Spears, B. M. (2024). Negative social tipping dynamics resulting from and reinforcing Earth system destabilization. Earth System Dynamics, 15(5), 11791206. doi:10.5194/esd-15-1179-2024CrossRefGoogle Scholar
SRA. (2018). Society for risk analysis glossary. doi:10.24948/2022.01CrossRefGoogle Scholar
Steffen, W., Richardson, K., Rockström, J., Schellnhuber, H. J., Dube, O. P., Dutreuil, S., Lenton, T. M., & Lubchenco, J. (2020). The emergence and evolution of Earth System Science. Nature Reviews Earth & Environment, 1(1), 5463. doi:10.1038/s43017-019-0005-6CrossRefGoogle Scholar
Stephens, P. A., & Sutherland, W. J. (1999). Consequences of the Allee effect for behaviour, ecology and conservation. Trends in Ecology and Evolution, 14(10), 401405. doi:10.1016/S0169-5347(99)01684-5CrossRefGoogle ScholarPubMed
Sundaram, L. S. (2023). Existential Risk and Science Governance. In Beard, S., Rees, M., Richards, C. & Rojas, C. R. (Eds.), The era of global risk (pp. 5578). Open Book Publishers. doi:10.11647/OBP.0336.03CrossRefGoogle Scholar
Supran, G., Rahmstorf, S., & Oreskes, N. (2023). Assessing ExxonMobil's global warming projections. Science, 379(6628), eabk0063. doi:10.1126/science.abk0063CrossRefGoogle ScholarPubMed
Tainter, J. (1988). The collapse of complex societies. Cambridge University Press.Google Scholar
Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random House.Google Scholar
Tang, A., & Kemp, L. (2021). A fate worse than warming? Stratospheric aerosol injection and global catastrophic risk. Frontiers in Climate 3, 720312. doi:10.3389/fclim.2021.720312CrossRefGoogle Scholar
Tegmark, M., & Bostrom, N. (2005). Is a doomsday catastrophe likely? Nature, 438(7069), 754754. doi:10.1038/438754aCrossRefGoogle ScholarPubMed
Tooze, A. (2021). Shutdown: How Covid shook the world's economy. Penguin UK.Google Scholar
Tzachor, A., Devare, M., King, B., Avin, S., & Ó hÉigeartaigh, S. S. (2022). Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities. Nature Machine Intelligence, 4(2), 104109. doi:10.1038/s42256-022-00440-4CrossRefGoogle Scholar
Tzachor, A., Richards, C. E., & Holt, L. (2021). Future foods for risk-resilient diets. Nature Food, 2(5), 326329. doi:10.1038/s43016-021-00269-xCrossRefGoogle ScholarPubMed
UNDRR. (2019). Global assessment report on disaster risk reduction [Report]. United Nations Office for Disaster Risk Reduction https://www.undrr.org/publication/global-assessment-report-disaster-risk-reduction-2019.Google Scholar
Van der Hel, S., Hellsten, I., & Steen, G. (2018). Tipping points and climate change: Metaphor between science and the media. Environmental Communication, 12(5), 605620. doi:10.1080/17524032.2017.1410198CrossRefGoogle Scholar
Walker, B., Crépin, A.-S., Nyström, M., Anderies, J. M., Andersson, E., Elmqvist, T., Queiroz, C., Barrett, S., Bennett, E., Cardenas, J. C., Carpenter, S. R., Chapin, F. S., de Zeeuw, A., Fischer, J., Folke, C., Levin, S., Nyborg, K., Polasky, S., Segerson, K., and Vincent, J. R. (2023). Response diversity as a sustainability strategy. Nature Sustainability, 6(6), 621629. doi:10.1038/s41893-022-01048-7CrossRefGoogle Scholar
Walker, B., Holling, C. S., Carpenter, S. R., & Kinzig, A. (2004). Resilience, adaptability and transformability in social–ecological systems. Ecology and Society, 9(2).CrossRefGoogle Scholar
Walker, W. E., Lempert, R. J., & Kwakkel, J. H. (2013). Deep uncertainty. In Gass, S. I. & Fu, M. C. (Eds.), Encyclopedia of operations research and management science (pp. 395402). Springer US. doi:10.1007/978-1-4419-1153-7_1140CrossRefGoogle Scholar
Waltz, K., & Sagan, S. (1995). The spread of nuclear weapons: A debate. W. W. Norton.Google Scholar
Wiener, J. B. (2016). The tragedy of the uncommons: On the politics of apocalypse. Global Policy, 7(S1), 6780. doi:10.1111/1758-5899.12319CrossRefGoogle Scholar
Williams, M., Zalasiewicz, J., Haff, P., Schwägerl, C., Barnosky, A. D., & Ellis, E. C. (2015). The Anthropocene biosphere. The Anthropocene Review, 2(3), 196219. doi:10.1177/2053019615591020CrossRefGoogle Scholar
Winkelmann, R., Donges, J. F., Smith, E. K., Milkoreit, M., Eder, C., Heitzig, J., Katsanidou, A., Wiedermann, M., Wunderling, N., & Lenton, T. M. (2022). Social tipping processes towards climate action: A conceptual framework. Ecological Economics, 192, 107242. doi:10.1016/j.ecolecon.2021.107242CrossRefGoogle Scholar
Xia, L., Robock, A., Scherrer, K., Harrison, C. S., Bodirsky, B. L., Weindl, I., Jägermeyr, J., Bardeen, C. G., Toon, O. B., & Heneghan, R. (2022). Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection. Nature Food, 3(8), 586596. doi:10.1038/s43016-022-00573-0CrossRefGoogle ScholarPubMed
Yang, V. C., & Sandberg, A. (2023). Collective intelligence as infrastructure for reducing broad global catastrophic risks. In & (Eds.), Intersections, Reinforcements, Cascades. Proceedings of the 2023 Stanford Existential Risks Conference (pp. 194206). doi:10.25740/mf606ht6373CrossRefGoogle Scholar
Young, O. R., Berkhout, F., Gallopin, G. C., Janssen, M. A., Ostrom, E., & Van der Leeuw, S. (2006). The globalization of socio-ecological systems: An agenda for scientific research. Global Environmental Change, 16(3), 304316. doi:10.1016/j.gloenvcha.2006.03.004CrossRefGoogle Scholar
Yudkowsky, E. (2008). Cognitive biases potentially affecting judgment of global risks. In Bostrom, N. & Cirkovic, M. M. (Eds.), Global catastrophic risks (pp. 91119). Oxford University Press.Google Scholar
Zhou, L., Moreno-Casares, P. A., Martínez-Plumed, F., Burden, J., Burnell, R., Cheke, L., Ferri, C., Marcoci, A., Mehrbakhsh, B., Moros-Daval, Y., Ó hÉigeartaigh, S. S., Rutar, D., Schellaert, W., Voudouris, K., & Hernández-Orallo, J. (2024). Predictable artificial intelligence. https://arxiv.org/abs/2310.06167Google Scholar
Zscheischler, J., Martius, O., Westra, S., Bevacqua, E., Raymond, C., Horton, R. M., van den Hurk, B., AghaKouchak, A., Jézéquel, A., Mahecha, M. D., Maraun, D., Ramos, A. M., Ridder, N. N., Thiery, W., & Vignotto, E. (2020). A typology of compound weather and climate events. Nature Reviews Earth & Environment, 1(7), 333347. doi:10.1038/s43017-020-0060-zCrossRefGoogle Scholar
Zscheischler, J., Westra, S., Van Den Hurk, B. J., Seneviratne, S. I., Ward, P. J., Pitman, A., AghaKouchak, A., Bresch, D. N., Leonard, M., Wahl, T., & Zhang, X. (2018). Future climate risk from compound events. Nature Climate Change, 8(6), 469477. doi:10.1038/s41558-018-0156-3CrossRefGoogle Scholar
Figure 0

Figure 1. Summary of destabilising (mathematically positive) and stabilising (mathematically negative) feedbacks. In causal loop diagrams (top row), an arrow with a + symbol means that a change in the first variable causes the second variable to change in the same direction (e.g. an increase in A causes an increase in B), and an arrow with a − symbol means that a change in the first variable causes the second variable to change in the opposite direction (e.g. an increase in A causes a decrease in B). In stability landscape diagrams (bottom row), the state of the system is conceived of as a ball rolling on a landscape (collapsing the high-dimensional state spaces of the real world onto a single dimension). When destabilising feedbacks dominate, we see runaway change (rolling down the hill); when stabilising feedbacks dominate, the system remains within a stable equilibrium (the valley, or ‘basin of attraction’).
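To make the distinction in Figure 1 concrete, the following minimal numerical sketch (our own illustration, not part of the original figure) integrates the simplest possible feedback model, dx/dt = rx, where the sign of r encodes whether reinforcing (destabilising) or balancing (stabilising) feedback dominates; all parameter values are arbitrary.

```python
import numpy as np

def simulate_feedback(r, x0=1.0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = r*x.

    r > 0: net destabilising (reinforcing) feedback -> runaway growth.
    r < 0: net stabilising (balancing) feedback -> decay back to the equilibrium at x = 0.
    """
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        x[t + 1] = x[t] + dt * r * x[t]
    return x

# A small perturbation grows without bound under destabilising feedback...
runaway = simulate_feedback(r=+0.5)
# ...but is damped back towards the stable equilibrium under stabilising feedback.
damped = simulate_feedback(r=-0.5)

print(f"After 10 time units: destabilising -> {runaway[-1]:.1f}, stabilising -> {damped[-1]:.4f}")
```

Under r > 0 a small perturbation grows without bound (runaway change, rolling down the hill), whereas under r < 0 it decays back towards the equilibrium, mirroring the stability-landscape picture in the bottom row of the figure.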

Figure 2. Key elements of our conceptual framework for understanding systemic contributions to global catastrophic risk. Hazards, whether from outside of the global system (e.g. asteroids and volcanic eruptions) or emerging within the global system (Section 3.1; nuclear weapons are one example), can interact with vulnerabilities (Section 3.3) to produce GCR. A key component of the interaction between hazards and vulnerabilities is amplification (Section 3.2). Finally, latent risk (Section 3.4) is a risk that may be generated by present-day phenomena but only becomes active in certain future system states: this may be particularly important in the aftermath of a global catastrophe. An important point is that each of these four phenomena (hazards, vulnerability, amplification, and latent risk) is in large part emergent from the global system.
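Purely as an illustration of how the framework's elements fit together (and not as a quantitative model proposed in this article), the toy sketch below composes hazards, vulnerability, amplification, and latent risk; every name, value, and the catastrophe threshold are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    endogenous: bool      # emerges within the global system (e.g. nuclear weapons)?
    severity: float       # size of the initial shock, arbitrary units

@dataclass
class GlobalSystem:
    vulnerability: float  # how readily shocks translate into damage
    amplification: float  # how strongly the system magnifies an initial shock
    latent_risks: list    # risks that only become active in degraded system states

def catastrophe_potential(hazard: Hazard, system: GlobalSystem) -> float:
    """Toy composition: impact = severity x vulnerability, scaled by amplification.

    Latent risks contribute nothing in the current state, but are counted once
    the system is pushed into a sufficiently degraded (post-catastrophe) state.
    """
    impact = hazard.severity * system.vulnerability * system.amplification
    if impact > 1.0:  # assumed threshold for a 'global catastrophic' state
        impact += sum(system.latent_risks)
    return impact

pandemic = Hazard("engineered pandemic", endogenous=True, severity=0.8)
world = GlobalSystem(vulnerability=0.9, amplification=2.0, latent_risks=[0.3])
print(catastrophe_potential(pandemic, world))  # 1.44 > 1.0, so the latent risk is added -> 1.74
```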

Figure 3. Amplification in the context of global catastrophic risk. Hazards (modulated by exposure) threaten the system's persistence in its current basin of attraction and can set in motion runaway evolution towards global catastrophic outcomes (amplification). The broad notion of vulnerability bears on several features of this picture, including the depth of the two basins of attraction and the severity of the global catastrophic outcome. We emphasise that this picture is vastly oversimplified (see Section 3.2), but it captures important elements of the problem.
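The tipping behaviour sketched in Figure 3 can be illustrated with a one-dimensional stability landscape: a hedged toy example (not taken from the article), assuming a double-well potential V(x) = x⁴/4 − x²/2 with the current basin at x = −1 and the catastrophic basin at x = +1; the shock sizes and time step are arbitrary.

```python
def potential_grad(x):
    """Gradient of a double-well 'stability landscape' V(x) = x**4/4 - x**2/2.

    The two valleys (basins of attraction) sit at x = -1 (the current state)
    and x = +1 (here standing in for a global catastrophic outcome),
    separated by a barrier at x = 0.
    """
    return x**3 - x

def evolve(x0, shock, dt=0.01, steps=5000):
    """Relax the system state down the landscape after a one-off 'hazard' shock."""
    x = x0 + shock                   # exposure to the hazard displaces the state
    for _ in range(steps):
        x -= dt * potential_grad(x)  # the ball rolls downhill (amplification once past the barrier)
    return x

print(evolve(x0=-1.0, shock=0.8))  # small shock: absorbed, settles back near x = -1
print(evolve(x0=-1.0, shock=1.2))  # large shock: crosses the barrier and runs away to x = +1
```

In this picture, deepening the current basin (reducing vulnerability) raises the shock size needed to tip the system, while the depth and location of the second well stand in for how bad, and how persistent, the catastrophic outcome would be.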