
The competence–control trade-off in military AI innovation: Autonomous weapons systems and shifting modes of state control over private experts

Published online by Cambridge University Press:  29 October 2025

Andrea Johansen
Affiliation:
Department of Political Science, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
Andreas Kruck*
Affiliation:
Department of Political Science, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
Corresponding author: Andreas Kruck; Email: andreas.kruck@gsi.uni-muenchen.de

Abstract

Developments in military AI highlight that maintaining state control over military innovation that is driven by private corporate experts is challenging. Even the leading military AI power, the United States, has struggled to meet this challenge, while trying different modes of control over time. What explains these struggles and the shifting modes of state control over private military innovation? Bringing together the ‘competence–control theory’ of indirect governance and research on technology-driven transformations in the making of national security, we propose a novel theory of state control over military innovation. We argue that states face a trade-off between (fostering) the competence of private corporate experts and (enhancing) state control over military innovation. This trade-off is shaped by technological change and geopolitical competition. Depending on the relative strength of these drivers, varying prioritisations of competence or control lead to different hierarchical or non-hierarchical, capacities- or rules-based modes of control. Tracing the evolution of the US national security state and its relations to private corporate experts in the subfield of autonomous weapons systems, we demonstrate that our theoretical argument explains otherwise puzzling intertemporal variation in control modes. Our findings have important policy implications for the institutional design of military innovation.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The British International Studies Association.

Introduction

Artificial intelligence (AI)-enabled technologiesFootnote 1 have increasingly been adopted for military purposes, including intelligence, data analysis, decision-making, and planning support, as well as movement and targeting on the battlefield. While they provide states with potent tools to gain an edge in geopolitical competition,Footnote 2 they also pose challenges to states.Footnote 3 A particularly pertinent challenge is to ensure state control over military innovation, understood as research, development, procurement, adoption, and operational use of a new technology such as AI,Footnote 4 as most innovation is driven by private corporate experts.Footnote 5

Established theories expect different modes of state control over private corporate experts. State-centred approaches,Footnote 6 informed by realist or bureaucratic politics theories of security policy-making,Footnote 7 claim that state policymakers will seek hierarchical control. They will build centralised state capacities for innovation, thereby reducing dependence on private corporate experts and ensuring state control over military innovation. By contrast, market-centred approaches, drawing on liberal or political economy theories of security policy-making,Footnote 8 hold that state policymakers face strong incentives to foster and harness private innovation capacity while choosing soft and non-intrusive modes of regulatory control. Finally, institution-centred approaches expect states to settle, at least in the medium term, for a particular control mode that is in line with prevailing – state- or market-centred – institutional legacies in the respective state and sector.Footnote 9

The control modes that the leading military AI power, the United States, has established belie all three expectations. The United States has engaged in recurring dissatisfaction-driven reforms that have led to different, partly contradictory modes of – hierarchical or non-hierarchical, capacities- or rules-based – control over time. While some institutions for military (AI) innovation such as the Defense Advanced Research Projects Agency (DARPA, established in 1958) or the Defense Innovation Unit (DIU, 2015) have persisted, the prevailing modes of state control employed by the institutions of the US security state have shifted, in the past decades, from 1) collaborative capacity-building with private corporate experts, to 2) soft regulation of private innovation, to 3) attempts at centralised capacity-building within the security state, and to 4) hard regulation of private corporate experts. What explains these shifting modes of state control over military innovation driven by private corporate experts?

Bringing together the ‘competence–control theory’ of indirect governanceFootnote 10 and research on technology-driven transformations in security policy-making,Footnote 11 we propose a novel theory of state control over military innovation. We argue that policymakers face a trade-off between (fostering) the competence of private corporate experts and (enhancing) state control over military innovation. On the one hand, the state is dependent on private expertise for military AI innovation, which suggests fostering unrestrained private competence. On the other hand, there are strong incentives for state control, since military AI is a vital sector for national security, and many applications involve high security risks and significant political costs if left uncontrolled. Yet, maximising both private competence and state control at the same time is impossible. How policymakers navigate this competence–control trade-off (CC trade-off) is shaped by technological change and geopolitical competition. Depending on the (relative) strength of these drivers, varying prioritisations of competence or control lead to different modes of control. Yet, due to the persistent CC trade-off, we do not see the emergence of institutional equilibria but rather recurring dissatisfaction-driven reforms.

Our theory explains alternating modes of state control that remain puzzling for state-, market-, or institution-centred approaches. We contribute to research on technology-driven transformations in security policy-makingFootnote 12 and the emergence of different types of security statesFootnote 13 by pointing out how the CC trade-off complicates policymakers’ settling on any stable configuration of control instruments in the evolving national security state. We also advance theories of indirect governanceFootnote 14 by specifying how technological change interacts with geopolitical competition to shape indirect governance modes for dual-use technologies. The United States is a typical, and thus ideal, case to demonstrate the empirical operation of our novel theory, as the incentives for both competence and control should be strong in the US case. Yet, we expect some CC trade-off to apply to all advanced democratic military powers, as (long as) they have the aspirations and the wherewithal to both foster private competence and ensure state control. Only if a state so lacks private competence, or is so uninterested in state control, that it sees no point in pursuing either does the fundamental trade-off disappear.

In the following, we first conceptualise different modes of state control over privately driven military innovation. We then theorise how technological change and geopolitical competition shape the CC trade-off and the ensuing choice of control modes. In a process-tracing analysis,Footnote 15 we retrace the evolution of modes of control over military AI innovation in the US security state. We focus on autonomous weapons systems (AWS), which have been at the centre of the debate about military AI.Footnote 16 After demonstrating that our theory provides a better explanation than alternative institutional field and domestic politics approaches, we conclude with some avenues for further research and important policy implications.

The competence–control trade-off in military innovation

Conceptualising modes of control

Our dependent variable is modes of state control over military innovation. States ultimately seek reliable access to security-relevant technology, while denying it to rivals. In sectors such as AI where private corporate experts play a major role in innovation, states have control over military innovation when they prevent shirking (i.e., subpar efforts) and slippage (i.e., undesired development or use of security-relevant technology) among private innovators. There are various ways whereby states may pursue these goals, ranging from insourcing the expertise for military innovation to nurturing good rapport with private corporate experts so that the latter identify with state interests.

To distinguish between varying modes of control, research on indirect governance has highlighted that ‘governors’ may employ hierarchical or non-hierarchical modes of control for their ‘intermediaries’.Footnote 17 Hierarchical control involves hard, top-down oversight. It works with (threats of) coercion and sanctions. By contrast, non-hierarchical control relies on mutually beneficial collaboration, political, economic, or ideational inducements, soft (self-)regulation,Footnote 18 and/or nudging. It works with positive incentives and aims at intermediaries’ voluntary alignment with the governor’s goals.

Modes of state control may also vary with regard to the policy instrument employed to exercise control. Drawing on concepts from public policy research,Footnote 19 which have recently been introduced to the study of security policy-making,Footnote 20 we distinguish between capacities- and rules-based modes. ‘Positive security states’ build and employ state capacities,Footnote 21 including administrative capacities and coercive capacities,Footnote 22 to ensure state control over military innovation. They spend state funding on building and maintaining internal capacities for military innovation within the state apparatus. Unlike positive security states, ‘regulatory security states’ draw on rules to steer and control military innovation performed by non-state actors.Footnote 23 They spend state funding on buying security-relevant goods and services from private corporate experts. They seek to bring the latter’s behaviour in line with their own interests by means of both general (legislative or administrative) rules, which universally apply within their jurisdiction, and specific contractual rules governing the transactions between state buyers and private providers of goods and services. Contractual funding and the specific rules that come with it work as a mode of control by stipulating the obligations of private corporate experts and making (full and continued) payments contingent on their fulfilling the contractual obligations.

Combining the distinctions between hierarchical vs. non-hierarchical and capacities-based vs. rules-based approaches, we distinguish between four modes of control (see Table 1):

  • Hard regulation designates state reliance on the indirect provision of military innovation by private corporate experts and the promulgation of rules that stipulate, whether in laws, regulations, or contracts, precise and binding obligations as well as sanctions in case of non-compliance. It also implies the build-up and use of hierarchical monitoring and enforcement procedures.

  • Soft regulation encourages and endorses voluntary guidelines or codes of conduct without state enforcement. Specific rules in contractual provisions are limited in number and remain flexible with regard to their implementation.

  • Centralised capacity-building grants state policymakers hierarchical leverage over private corporate experts by making their contributions dispensable. In the extreme, it allows state institutions to autonomously make and implement their policies.Footnote 24

  • Collaborative capacity-building refers to the pooling of state capacities with private corporate experts, complementing rather than substituting for private capacities. It aims to bring private corporate experts’ behaviour in line with state interests by creating incentives for sustained mutually beneficial cooperation.

Table 1. Conceptualising modes of state control.

                      Capacities-based                   Rules-based
  Hierarchical        Centralised capacity-building      Hard regulation
  Non-hierarchical    Collaborative capacity-building    Soft regulation

These are ideal-typical distinctions, and modes may overlap in practice. Nonetheless, we can assess which mode prevails at a particular point in time, within a specific country, and in a particular field.

Explaining modes of control: The competence–control trade-off

Competence–control theory assumes that wherever governors rely on intermediaries, they face a trade-off.Footnote 25 Relying on highly competent intermediaries, who are rich in expertise, operational capacities, or legitimacy, enhances the effectiveness of governance. But highly competent intermediaries are difficult to control. Enhancing control reduces the benefits from relying on intermediaries because it compromises intermediaries’ competence, their independent expertise, their efficient use of operational capacities, or their legitimacy. Incompetent intermediaries pose fewer control problems. But governors want their intermediaries to be competent because this enhances governance performance, and, at least under many circumstances, intermediaries also have incentives to increase their competence, which will inadvertently be hindered by hard, hierarchical state control. Governors have to navigate a competence–control (CC) trade-off. Maximising both competence and control remains elusive.Footnote 26

We expect this trade-off to be pervasive in military innovation driven by private corporate experts.Footnote 27 Where know-how and epistemic authority are concentrated within the private sector, reliance on their unconstrained expertise promises more effective military innovation. But this comes with a loss of state control. Uncontrolled private corporate experts may (ab)use their leeway to refrain from developing desired technologies; they may develop undesired technologies; or they may put technologies to undesired uses, including their diffusion to rivalling states. Lack of control also runs counter to standards of democratic accountability for decisions over the use of force. Thus, policymakers have incentives to foster private competence and impose state control.

But the intensity of the CC trade-off and the prevalence of competence or control imperatives vary across instances of military innovation, suggesting different control modes.

When the CC trade-off is mild (the state’s dependence on private competence is limited, the private sector does not have a clear edge over state institutions, and state concerns about control are moderate), policymakers will engage in collaborative capacity-building. In such settings, collaborative capacity-building promises to enhance overall capacities. Moreover, it endows the state with sufficient non-hierarchical avenues of influence on private actors by giving the latter a stake in continued collaboration, inducing behavioural alignment with state interests.

In dynamic contexts that call for prioritising private competence, state policymakers will seek to nurture the unrestrained competence of private corporate experts and choose soft regulation. To be sure, in some instances, collaborative capacity-building rather than soft regulation may seem most conducive to the development of private competence. This is especially so when the challenge for the state is to get an industry and its competence for innovation going, while market incentives for private innovation are lacking. But, first, for dual-use technologies, including AI, there are usually strong business incentives for market-based private innovation. Second, soft regulation that leaves business unconstrained generally enhances the speed of competence growth as it does not require lengthy public–private coordination. Finally, the more dynamic the environment for military innovation, the greater the comparative advantage of unleashed business innovation and the fewer assets the state would have to offer, through collaborative capacity-building, to industry. Accordingly, in such dynamic settings that call for unrestrained private competence, state policymakers will opt for soft regulation, contenting themselves with broad guidelines or encouraging private self-regulation. Public monitoring of compliance will be weak, and enforcement will be absent, compromising state control.

In settings that lead state policymakers to prioritise state control, they will focus on centralised state capacity-building to substitute for private corporate experts’ capacities. Instead of indirect governance, the state focuses on in-house, direct, and command-and-control-based governance that renders private contributions dispensable. This maximises the state’s hierarchical leverage. Yet, it implies foregoing the benefits of relying on the competence of specialised private corporate experts.

If there is an intense CC trade-off, as the demand for both competence and control is strong, state policymakers will opt for hard regulation. They will promulgate a large number of strict rules for private corporate experts. There will be strong oversight bodies and sanctions imposed for rule violations. While falling short of the control achieved by centralised capacity-building, hard regulation can provide policymakers with significant hierarchical control. The flip side is that (access to) private corporate experts’ competence also suffers – though less than with centralised state capacity-building.

Technological change and geopolitical pressures as drivers of the CC trade-off

CC theory helps us grasp oscillation between different modes of state control because it suggests pervasive dissatisfaction with any chosen mode and, consequently, recurring institutional reforms. But the intensity of the CC trade-off varies, and CC theory does not specify under what conditions policymakers will prioritise either competence or control. To capture the outcomes of recurring dissatisfaction-driven reforms, we draw on research on transformations in security policy-making. We highlight two drivers – technological change and geopolitical competition – that condition the intensity of the CC trade-off and state policymakers’ propensity to prioritise competence or control.

Technological change has historically challenged existing patterns of how to provide security.Footnote 28 We distinguish between strong and moderate technological change in terms of the speed and nature of technological developments. When technological innovation occurs rapidly and in a disruptive way, that is, through major breakthroughs, technological change is strong. In contrast, when technological development occurs slowly and incrementally, that is, through the gradual adaptation of existing technologies, technological change is moderate. We assume general-purpose technologies, such as AI, to be associated with stronger technological change,Footnote 29 making the stalling of technological innovation unlikely. Yet, we still expect variation in the more or less rapid speed and the more or less disruptive nature of innovation in such technologies.

Ceteris paribus, strong technological change contributes to a blurring of the civilian–military divideFootnote 30 and increases the relevance of technical experts for security policy-making.Footnote 31 When technological change is strong and its main sources come from the private sector, generalist bureaucracies will lack the technical knowledge to meet the challenges posed by technological change. While under certain conditions the state may be important to jump-start private competence for innovation, the stronger the technological change, the greater becomes the comparative advantage of unconstrained private innovation. Policymakers’ desire to control private corporate experts may well intensify as the amount of technological innovation produced by the private sector increases. Yet, their ability to control private experts through hierarchical and capacities-based means without compromising their much-needed competence declines. Therefore, state actors will be inclined to opt for rules-based modes of control, and more precisely, the non-hierarchical approach of soft regulation.

Geopolitical competition also shapes how states organise the provision of security policies.Footnote 32 Following a neoclassical realist view, which stresses material power but also includes domestic actors’ perceptions,Footnote 33 we conceive of geopolitical competition as a function of the distribution of military capabilities among leading powers and/or between the leading power and its main challenger(s), while also taking into account rivalling states’ perceptions of threat, shaped by the perceived revisionist or status-quo oriented intentions of their counterparts. Ceteris paribus, strong geopolitical competition suggests a reassertion of state control over military innovation. Under strong geopolitical competition, military innovation becomes a strategic priority. Policymakers will be ready to make vast investments in capacity-building for military innovation, while their tolerance for private corporate experts’ autonomy will be reduced. Private corporate experts good at technological advancement are less proficient in geopolitical strategising. Geopolitical priorities may even be at odds with their interests in maximising business opportunities, including with rivals of ‘their’ home country. Strong geopolitical competition thus incentivises policymakers to pursue a capacities-based and hierarchical mode of control.

There is an intricate relationship between technological change and geopolitical competition. Technological innovation helps states gain an edge over geopolitical rivals. Accordingly, geopolitical competition induces states to promote technological change in an effort to succeed in the international power race.Footnote 34 Yet, with dual-use technologies, the link between technological change and geopolitical competition is quite diffuse. By no means does every change in dual-use technology eventually impact on geopolitical competition, and it is often hard for states to foresee the military implications of change in dual-use technologies. Moreover, private commercial firms’ efforts at technological innovation are contingent primarily on business opportunities rather than on geopolitical pressures. Most importantly, geopolitical competition and technological change suggest different modes of state control: Technological change calls for expertocratic governance relying on unconstrained non-state experts and tech companies, whereas geopolitical competition suggests hierarchical grand strategising by policymakers, militaries, and generalist bureaucrats.

We expect technological change and geopolitical competition to vary across states, fields, and over time. State actors may perceive technological change and geopolitical competition as – absolutely or relatively – more or less salient. We claim that the CC trade-off and the chosen control modes vary, depending on the relative strength of both drivers. If both technological change and geopolitical competition are moderate, the CC trade-off is mild, and state policymakers will engage in collaborative capacity-building. If technological change is strong but geopolitical competition is moderate, state policymakers will opt for soft regulation. If technological change is moderate, while geopolitical competition is strong, state policymakers will pursue centralised capacity-building. If both technological change and geopolitical competition are strong, state policymakers will pursue hard regulation (see Table 2).

Table 2. Explaining modes of control: varying drivers of the CC trade-off.

                                        Technological change:              Technological change:
                                        moderate                           strong
  Geopolitical competition: moderate    Collaborative capacity-building    Soft regulation
  Geopolitical competition: strong      Centralised capacity-building      Hard regulation

Yet, neither of these modes represents a stable institutional equilibrium. As maximising both private competence and state control is appealing but elusive, there are pervasive incentives for institutional re-design. This does not rule out the persistence of some institutions; yet, it implies recurring reforms of control modes, pointing in contradictory directions.

Research design and operationalisation

We apply this theory to explain modes of state control over military AI innovation in the United States. Its democratic procedures for security policy-making, capable military–bureaucratic apparatus, extensive experience in regulating big tech companies, vast investments in the spin-on of private AI innovations, and early AI institution-building efforts render the United States a security state that faces incentives to promote both (private) competence and (state) control while possessing the prerequisites for either of the two strategies. In other words, the United States should be subject to a pronounced CC trade-off; it is a typical case, suitable for empirically demonstrating a new theoretical argument and mechanism.Footnote 35 This case selection limits the generalisability of our findings. Yet, we expect the CC trade-off and our theory to apply to other advanced democratic military powers too, as (long as) they have incentives for both ensuring state control and enlisting private competence while possessing enough potential for capacity-building and regulatory capability so that their choice of control modes is not a foregone conclusion.

We focus on autonomous weapon systems (AWS), which, ‘once activated, can select and engage targets without further intervention by a human operator’.Footnote 36 Again, AWS are a typical case for the CC trade-off, lending themselves to in-depth process-tracing. There should be strong incentives for both (fostering) private competence and (ensuring) state control: Private AWS experts have a competence advantage vis-à-vis the state. But there are also strong incentives to seek state control over AWS innovation due to their relevance and risks.

We test a set of theoretical expectations: First, we expect to observe a correlation over time between constellations of drivers, the prevailing imperative in the CC trade-off, and chosen control modes (see Table 2). If we do not find such a correlation, we would consider our theory falsified. We measure geopolitical competition and technological change through objectivist and perception-based indicators. To assess geopolitical competition, we first focus on the relative distribution of military capacities in the US–China power rivalry: When the gap in military spending in general, as well as on AI specifically, narrows in China’s favour, we take this as an indicator of intensifying geopolitical competition.Footnote 37 Moreover, we take an increasing perception of Chinese revisionist ambitions in the US government discourse as an indication of heightened competition. Similarly, we assess the strength of technological change through AI performance, as measured by computing power used for training,Footnote 38 and perceived technological breakthroughs, captured by global patenting activityFootnote 39 as well as the identification of ‘revolutionary’ technology advancements in policy reports such as the Stanford AI Index Report.Footnote 40 While at the level of individual firms, filing for patents may also reflect a desire to protect innovation in highly competitive contexts, at the macro level, global patenting activity is widely regarded as a useful indicator of technological change in a certain field.Footnote 41 Therefore, an acceleration of patenting activity and prominent references to technological game changers together indicate strong technological change.
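To make the coding scheme concrete, the following minimal sketch encodes the theory’s mapping from driver constellations to control modes (Table 2). The numeric cut-offs used to classify the drivers are hypothetical placeholders for exposition, not values estimated in this article; only the four-cell mapping follows the theory.

    # Minimal sketch of the coding logic. The thresholds below are
    # illustrative assumptions, not estimates from the article; only the
    # mapping from driver strengths to control modes follows Table 2.

    def tech_change(compute_doubling_months: float, patent_growth: float) -> str:
        """Classify technological change; assumed cut-offs: compute doubling
        faster than ~12 months, or patenting growth above ~20% per year."""
        strong = compute_doubling_months < 12 or patent_growth > 0.20
        return "strong" if strong else "moderate"

    def geo_competition(us_to_rival_spending_ratio: float, perceived_revisionism: bool) -> str:
        """Classify geopolitical competition; assumed cut-off: a spending gap
        below ~4:1 combined with perceived revisionist intentions."""
        strong = us_to_rival_spending_ratio < 4 and perceived_revisionism
        return "strong" if strong else "moderate"

    # Table 2: constellations of drivers -> predicted mode of control.
    CONTROL_MODE = {
        ("moderate", "moderate"): "collaborative capacity-building",
        ("strong", "moderate"): "soft regulation",
        ("moderate", "strong"): "centralised capacity-building",
        ("strong", "strong"): "hard regulation",
    }

    # Early-2020s constellation, using figures cited in the case study:
    # compute doubling every 3.4 months, ~62.7% patent growth, ~3.7:1 spending gap.
    tech = tech_change(3.4, 0.627)     # -> "strong"
    geo = geo_competition(3.7, True)   # -> "strong"
    print(CONTROL_MODE[(tech, geo)])   # -> "hard regulation"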

Beyond correlational evidence, we first expect a pattern of recurrent institutional reforms, which point in different, if not contradictory, directions. Relatedly, we expect to see that any reform produces some dissatisfaction, which prepares the ground for another round of reform. Second, we check for process evidence by identifying ‘empirical fingerprints’ in official statements and documents drawing connections between a) the contextual pressure(s) policymakers are facing and the prioritisation of private competence or state control, and b) the expressed prioritisation of competence or control and the design of control modes. If we find such evidence, our confidence in our theory will be considerably strengthened.

Tracing the CC trade-off in state control over autonomous weapons systems

Focusing on the subfield of AWS, we retrace changing modes of control over private corporate experts and military AI innovation in the US national security state from the 1990s to the early 2020s. We distinguish four phases and explain the shifting control modes with the varying prevalence of technological change and geopolitical competition, which gave rise to different manifestations of the CC trade-off.

From collaborative capacity-building (1990s) to soft regulation (2000s–mid 2010s)

From the 1990s to the mid-2010s, innovation in military AI – and specifically AWS – evolved from a nascent field of R&D to a field of high military and commercial relevance. This also entailed a change in the state’s governance of innovation, which can be described as a shift from collaborative capacity-building to soft regulation.

In the 1990s, both geopolitical competition and technological change were moderate, creating a relatively mild CC trade-off. Geopolitical competition was limited compared to later (and earlier) decades, and US defence spending gradually decreased. Nevertheless, the country still outspent its (future) main competitor, China, by a factor of about thirteen.Footnote 42 In the 1990s, there was strong hope among many US policymakers that a rising China could be integrated into a US-led global order.Footnote 43 Technological change was also moderate. The 1990s began just as the last ‘AI winter’,Footnote 44 characterised by technological stagnation and funding cutbacks, neared its end. Despite some important advances,Footnote 45 these developments only laid the groundwork for more dynamic progress in the 2000s. As OECD data indicate, global AI-related patenting would pick up only in the late 1990s and, in particular, the 2000s.Footnote 46

As expected by our theory, the resulting mild CC trade-off translated into a control mode characterised by collaborative capacity-building, where public and private capacity(-building) coexisted and was, frequently, pooled. In the 1990s, policymakers, in particular via the Defense Advanced Research Projects Agency (DARPA), were the drivers of AI-related innovation, in collaboration with academic and private experts. DARPA was critical in stimulating the establishment of AI as a field of academic research, whose beginnings are usually dated back to the Dartmouth Conference of 1956.Footnote 47 Its Strategic Computing Initiative (SCI, 1983–93) entailed big investments in public and private R&D aimed at collaborative capacity-building. However, the SCI was discontinued as it failed to achieve its ambitious goal of creating artificial general intelligence.Footnote 48 Similar to these general developments in AI innovation, policymakers sought to advance AWS through collaboration with the private sector. But again, the track record of these attempts was mixed. One of the first attempts at developing fully autonomous loitering weapons,Footnote 49 the Low-Cost Autonomous Attack System (LOCAAS) project, was abandoned.Footnote 50 Doubts about the effectiveness of collaborative capacity-building increased.

In addition, technological change accelerated in the 2000s. The performance of AI models increased, with the computing power used in training notable AI models doubling approximately every eighteen to twenty-four months.Footnote 51 Moreover, from 2000 to 2010, the average yearly number of global AI patents almost tripled.Footnote 52 The 2000s saw a variety of breakthroughs, including the rise of deep learning,Footnote 53 which increased the commercial and military value of AI. The leadership role of private experts in driving cutting-edge technology development became entrenched. These experts included traditional defence contractors such as Boeing, Lockheed Martin, Northrop Grumman, and Raytheon (now RTX Corporation), as well as big tech companies, such as Google, Microsoft, and Amazon.Footnote 54 Specialised AI firms, such as Databricks, C3 AI, and Palantir, proliferated, and the competence advantage of private (commercial) actors vis-à-vis the public sector became apparent.Footnote 55

While innovation in AWS was largely driven by advances in AI, the boom of the field in the 2000s was also fuelled by the proliferation of drone technology. In the context of US interventions in the global ‘War on Terror’,Footnote 56 US drone arsenals saw a forty-fold increase from 2002 to 2010.Footnote 57 The availability of (still remotely controlled) drone technology raised interest in ‘taking the human out of the loop’.Footnote 58

This incentivised policymakers to foster private competence. Geopolitical competition was still moderate, as military spendingFootnote 59 and the perception of China as a primarily economic competitor rather than a security rivalFootnote 60 suggest. Therefore, the urge to exploit new technological possibilities by spurring growing private ingenuity ‘beyond the traditional performer base’ prevailed.Footnote 61 Competence was temporarily prioritised over control. A mode of governance described as soft regulation ensued.

In the broader military AI field, the state initially acted as an important stimulator of R&D. DARPA-funded private research in different AI subfields facilitated the breakthroughs in deep learning in the 2010s.Footnote 62 Several DARPA programs, such as the SyNAPSE Program on brain-inspired ‘neuromorphic’ computingFootnote 63 and the Autonomous Robotic Manipulation Program,Footnote 64 made large investments in the private sector to help build up AI competence. This stimulation and enlistment of private competence was supported by institutional reforms, such as the establishment of the Defense Innovation Unit Experimental (DIUx, later Defense Innovation Unit, DIU). As ‘the only DoD organisation focused on accelerating the adoption of commercial and dual-use technology to solve operational challenges at speed and scale’,Footnote 65 it served to improve the relations between the Pentagon and non-defence tech companies, reflecting the recognition that AI competence had become concentrated within the private sector.

In line with a soft (self-)regulation approach, state oversight was limited. Setting rules for AI innovation was mostly left to the private experts performing the R&D. Early industry attempts at self-regulation included the establishment of an internal AI ethics board when Google acquired DeepMind in 2014,Footnote 66 and the establishment of OpenAI as a non-profit company, which stimulated a discourse about principles of AI R&D such as openness, transparency, and respect for intellectual property rights.Footnote 67 Overall, the public laissez-faire approach incrementally gave way to initiatives endorsing industry self-governance in the mid-2010s.

Harnessing private competence for the DoD and promoting industry self-regulation were also inscribed in the mandates of new bodies such as the Defense Innovation Board (DIB, est. in 2016), which gave representatives of commercial AI companies a leadership role and was first chaired by Eric Schmidt, former Executive Chairman of Alphabet.Footnote 68 Industry initiatives included the Partnership on AI (2016), cofounded by big US tech companies,Footnote 69 and company-specific guidelines such as Google’s AI Principles.Footnote 70 The DIB’s own Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense were drafted in collaboration with experts from OpenAI, Microsoft, Facebook, and Google, among others.Footnote 71

Efforts to incentivise increasingly competent private corporate experts also intensified in the subfield of AWS, as the benefits of unconstrained private innovation capacity became apparent. State regulation and interference in AWS innovation were limited and soft. Three DARPA Grand Challenges, organised from 2004 to 2007 and offering millions of US dollars in prize money, were particularly influential. They demonstrated the feasibility of autonomous movement in difficult environments and gave a boost to the development and commercialisation of autonomous technologies.Footnote 72 The growth in largely unconstrained private competence for AWS innovation was accompanied by a loss of state control. Despite DARPA’s critical role in stimulating early research, the US security state lost its role as the global driver of innovation in the field to the private sector, and, within a matter of years, the field of autonomous vehicle development would become dominated by private actors from the automotive and IT industries.Footnote 73

Several R&D projects aimed at enlisting private sector competence fell short of their ambitious goals and were abandoned. Prominent examples include the Joint Unmanned Combat Air Systems (J-UCAS) programFootnote 74 and the Army’s modernisation program on Future Combat Systems (FCS), which aimed at developing teams of networked unmanned and manned vehicles by heavily relying on industry subcontractors. The FCS program was restructured multiple times and eventually cancelled. A RAND evaluation of the program identified significant management and planning mistakes and found that it was predicated on unrealistic requirements and ‘significant leaps in technology’. This raised serious doubts about the military’s ability to make effective use of private innovation capacities.Footnote 75 A key point of criticism concerned the non-traditional partner-like relationship of the Army with its main contractor, Boeing, which produced significant oversight problems for DoD leadership, ‘especially when the government is disadvantaged in terms of workforce and skills’.Footnote 76

Thus, towards the end of the second – soft regulation – phase, the opportunity costs of this approach had become obvious. In particular, concerns emerged about a lack of state control over increasingly powerful private experts and about the state’s inability to translate private competence into military capacities of the state. This created incentives for another round of reforms in the second half of the 2010s.

Centralised capacity-building (mid–late 2010s)

As technological change slowed and was increasingly overshadowed, throughout the 2010s, by strong competition between the United States and China,Footnote 77 a pronounced geo-politicisation of AI drove a more hierarchical approach to controlling military AI innovation in the second half of the decade. The mode of control shifted towards attempts at centralised capacity-building to enhance state control over private corporate experts and military AI innovation.

While technological change certainly did not come to a halt, it became more moderate, and its dynamic temporarily slowed compared to previous and subsequent phases: The number of global AI patents, which had notably increased in the previous period, plateaued during most of this period and even dipped around 2016, before a drastic increase would usher in the fourth phase of booming AI R&D in the 2020s.Footnote 78 In computing power usage, the trend of previous decades – steady growth in line with Moore’s law – continued. This stands in stark contrast to the subsequent phase, which would be perceived as a period of historically unprecedented speed and magnitude in AI progress.Footnote 79 These broader technological trends were mirrored in the political discourse, where a strong increase in geopolitical competition moved to the forefront while the excitement about technological change subsided.

In the 2010s, the gap between US and Chinese defence spending diminished due to significant growth in Chinese spending: In 2010, the US spent approximately seven times as much as China, whereas merely five years later it outspent its competitor by a factor of only 3.7.Footnote 80 While reliable data is scarce, estimates suggest that United States and Chinese spending on military AI reached similar levels in the mid to late 2010s.Footnote 81

In the United States, where enthusiasm about the potential of AI had previously dominated, these shifts in the international distribution of material capabilities were accompanied by a growing fear of falling behind an increasingly revisionist China in a new arms and technology race.Footnote 82 These fears were exacerbated when China publicised its ambition to become the world leader in AI by 2030 in its New Generation AI Development Plan.Footnote 83 US policymakers and military officials likened the increasing US–China competition to the Cold War context, criticised the lack of a coordinated approach to military AI innovation, and called for more – and more centralised – capacity-building to avoid a new ‘Sputnik moment’.Footnote 84

Hence, due to heightened geopolitical competition, combined with mounting dissatisfaction with the soft regulation mode of the previous phase, state control over military innovation was given precedence over (fostering) private competence. Centralised capacity-building became the dominant mode of governance. Through several reforms, new strategies, bodies, and policies were created to nurture AI capacities within the security state. Most importantly, the DoD’s Defense Innovation Initiative bluntly cited geopolitical competition as a rationale for increasing state capacities in military AI to ‘sustain and advance America’s military dominance for the 21st century’.Footnote 85 This Third Offset Strategy stood in the tradition of previous offset strategies aimed at maintaining superiority during the Cold War by investing in promising technologies. While it also responded to the diminishing role of the DoD in driving military innovation by suggesting new ways of collaborating with the private sector,Footnote 86 its focus was on strengthening the broader innovation capacity across the DoD and the military services. Hence, internal research agencies were the primary recipients of increased R&D funding.Footnote 87 This was accompanied by the development of new operational concepts and other measures to improve innovation management.Footnote 88 In sum, the initiative represented a top-down effort not only to acquire particular technologies but also to ‘change the way we innovate, operate, and do business’.Footnote 89

At the level of military AI, the expansion of in-house R&D was accompanied by institutional reforms, most importantly the establishment of the Joint Artificial Intelligence Center (JAIC, 2018) to coordinate AI efforts across the DoD, accelerate AI adoption, and educate DoD staff on AI.Footnote 90 The military services also established governance bodies to advance capacity-building, such as the Air Force AI Accelerator or the Army AI Task Force.Footnote 91

Concerning AWS, various strategy papers discussed options for enhancing AWS capabilities ‘to help ensure overmatch against increasingly capable enemies’.Footnote 92 DoD Directive 3000.09, titled ‘Autonomy in Weapon Systems’, laid out policies and assigned responsibilities for their development and use.Footnote 93 Additionally, with the Third Offset Strategy, autonomy-dedicated research budgets of the military services significantly increased in the second half of the 2010s.Footnote 94

The Navy’s UCLASS program illustrates both the aspirations for centralised capacity-building and the challenges of implementing them: Initial plans foresaw the development of a semi-autonomous combat drone that could carry large payloads. However, Navy requirements underwent several significant alterations, particularly a reduction of strike capabilities. This drew strong criticism from Congress, the Government Accountability Office, and suppliers, who objected to the repeated changes of course after they had already made significant investments.Footnote 95 Eventually, efforts were redirected to the development of a drone used for refuelling to expand the range of manned fighters.Footnote 96

As predicted by our theory, the results of this centralised capacity-building approach were widely perceived as unsatisfactory, and a debate about innovation obstacles in the security state unfolded. Commonly cited problems included resistance to change and a lack of expertise among staff, as the private sector was a ‘more effective competitor for talent’.Footnote 97 The mandates of newly created AI bodies were perceived as unclear or limited. The state appeared to have failed at replacing private competence with state capacities, while at the same time struggling to enlist private expertise (beyond that of traditional defence contractors). This was ascribed to a hardware-centred innovation system, complex acquisition processes, limited data availability, and intellectual property concerns.Footnote 98 This dissatisfaction contributed to the momentum for yet another round of reforms.

Hard regulation (early 2020s)

In the early 2020s, geopolitical competition remained strong, while technological change accelerated significantly. The co-presence of these two strong drivers exacerbated the CC trade-off. Coupled with the dissatisfaction about centralised capacity-building efforts, it led to a shift towards hard regulation of heavily enlisted private corporate experts.

On the one hand, the perception of US–China rivalry reached a new urgency. While the gap between US and Chinese military expenditures had barely changed since 2015,Footnote 99 US fears of Chinese revisionism had grown. China was seen as ‘the only competitor with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military, and technological power to do it’.Footnote 100 While, in the previous phase, China had primarily been depicted as a technological rival, it was now perceived as an immediate threat to US domestic security,Footnote 101 for example, through attacks on critical infrastructure. State control over military AI innovation to limit safety and security risks associated with the diffusion of AI technology, therefore, became a critical concern.

On the other hand, from 2021 to 2022 alone, the number of AI patents increased by 62.7 per cent.Footnote 102 By the early 2020s, AI models across all domains had become much more sophisticated, and computing power usage in training was doubling every 3.4 months.Footnote 103 As this development further strengthened the advantage of private industry, with more resources at its disposal, the perceived technological breakthroughs primarily originated from the commercial sector. For instance, in 2023, over 70 per cent of all ‘foundation models’ (trained on large datasets and adaptable to a large variety of tasks) originated from industry, with US big tech companies at the forefront.Footnote 104 In the subfield of AWS, increased sophistication and availability of commercially developed drone technology created additional incentives to enhance collaboration with commercial manufacturers.Footnote 105
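To put these doubling times in perspective, a back-of-the-envelope calculation based on the figures cited above and in the earlier phases contrasts the implied annualised growth factors; a doubling time of d months implies a yearly growth factor of 2^(12/d):

    % Annualised growth factor implied by a doubling time of d months: 2^{12/d}
    \[
    2^{12/24} \approx 1.4 \quad\text{to}\quad 2^{12/18} \approx 1.6
    \qquad \text{(18--24-month doubling, 2000s)}
    \]
    \[
    2^{12/3.4} \approx 11.5
    \qquad \text{(3.4-month doubling, early 2020s)}
    \]

That is, training compute for notable AI models was growing roughly an order of magnitude per year in the early 2020s, compared with roughly 1.4 to 1.6 times per year under the earlier Moore’s-law-like trend.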

The National Security Commission on AIFootnote 106 explicitly highlighted this dual challenge of improving collaboration with the private sector, on the one hand, and, on the other hand, avoiding excessive reliance on uncontrolled private actors. This resulted in the hard regulation of private corporate experts.

AI-related government spending on contractors strongly increased in the early 2020s,Footnote 107 and strategy papers re-emphasised the importance of collaborating with private corporate experts. At the same time, they stressed the necessity of state oversight to ensure that private innovation served the goals of technological modernisation and the maintenance of military superiority.Footnote 108 Justified with a strong national security rationale, efforts to increase state control included improved oversight over US private corporate experts as well as regulatory restrictions imposed on domestic and international technology firms. For that purpose, in ‘one of the most extensive technological blockades ever attempted’, the government restricted Chinese access to critical AI technology and banned the export of AI chips to China, causing considerable criticism from US producers, who feared competitive disadvantages.Footnote 109 This was accompanied by a new focus in the strategic orientation of existing institutions: The DIU’s importance as a ‘gateway’ to commercial innovation grew significantly thanks to a considerable budget increaseFootnote 110 and newly granted authority to award other transaction (OT) agreements,Footnote 111 making it a critical contracting authority. Similarly, DARPA sought partnerships with the private sector in its two major AI investment campaigns, the AI NEXT CampaignFootnote 112 and the AI ForwardFootnote 113 initiative, while shoring up its oversight activities.

Concerning AWS, extensive enlistment of private expertise was accompanied by a growing quantity and quality of regulation. For instance, the DIU’s Blue Unmanned Aerial Systems program developed a vetted roster of drones and relevant components to allow for more efficient acquisition while ensuring that manufacturers met military needs and safety standards. This way, policymakers sought to improve not only the adoption of private expertise but also the control over it, in an attempt to counter risks such as data breaches or unauthorised access.Footnote 114 Additionally, new regulation aimed at increasing the reliability of technology by enhancing validation and verification (V&V) processes for drones operated by the DoD.Footnote 115

Efforts to increase control were not restricted to the domestic industry. While drones produced by Da-Jiang Innovations (DJI) had previously been widely used by government agencies and commercial actors,Footnote 116 their access to US airspace now raised fears of the transmission of sensitive information to China.Footnote 117 Chinese companies, therefore, were the main target of new legislation prohibiting government agencies from buying drone technology manufactured in ‘adversary’ countries.Footnote 118

Thus, driven by strong technological change and strong geopolitical competition, reform efforts served the dual goal of improving the use of private competence, while, at the same time, increasing state control over the (domestic and foreign) actors driving innovation. It is too early to tell whether hard regulation will succeed in balancing competence and control and whether this mode of control will prove more stable than previous ones. Our theory raises significant doubts about this. Figure 1 summarises the predominant drivers and resulting modes of governance in the field of AWS within the four phases discussed above.

Figure 1. Drivers of CC trade-off and modes of control over AWS innovation.

Alternative explanations

While our theory captures shifting modes of control, an alternative institutional field explanation might hold that the frequent institutional change is hardly puzzling. As path-dependence,Footnote 119 vested interests,Footnote 120 and veto pointsFootnote 121 are less pronounced in a young institutional field such as military AI, institutions can change with relative ease. We agree that the unsettled character of the field has facilitated reforms. Yet, we emphasise that there has not only been a lot of institutional change in a relatively short timeframe, which might indeed be expected for an unsettled field. Rather, the field is characterised by recurring dissatisfaction-driven and partly contradictory institutional reforms. Our theory uniquely captures this particular pattern of reforms.

An explanation centred on party politics and leaders’ belief systems might claimFootnote 122 that the chosen modes of control reflect the preferences and ideological inclinations of parties and individuals in government. We agree that party-based and individual preconceptions of the appropriate relationships between the state and the market might shape the politics of military innovation. Yet, empirical patterns of varying control modes, which reflect more or less business-friendly and more or less statist approaches, largely contradict the expectations suggested by these theories. While in the 2000s, stimulating the growth of the private industry, coupled with regulatory laissez-faire, was in line with the market-friendly orientation of the Republican G. W. Bush administration, this approach continued under the first administration of Democrat Barack Obama (2009–13). Only during Obama’s second term (2013–17) did a reorientation towards centralised capacity-building emerge. Against theoretical expectations, it reached its heyday under the Republican Trump administration, before heavy reliance on private industry innovation, coupled with hard regulation, was pursued under Democrat Joe Biden in the early 2020s. Shifts in control modes correspond much better to changes in geopolitical competition and technological change than to electoral cycles and varying political leadership.

Yet another explanation might highlight the impact of business interests. According to this perspective,Footnote 123 private industry actors will lobby for policies that give them key privileges vis-à-vis international competitors. Proponents of this view might hold that governments will seek to support domestic industry by introducing more restrictive control modes in periods when domestic firms are falling behind international competitors. While we agree that some control measures, including import restrictions for Chinese drones, directly benefited US producers, there are several problems with this explanation: First, the competitive position of US firms in the broader AI/AWS field has not been correlated with the variation in control modes. While Chinese competitors have caught up over time, commercial US technology companies have remained at the forefront of R&D. Hence, the introduction of harder control modes hardly seems to be a state response to save a faltering industry. Second, as patenting data indicate, Chinese competitors in the AI/AWS field have, to some extent, caught up in the last decade. Yet, it has been in this very period that the United States has shifted from an approach prioritising control (phase III) to one that sought to re-balance competence and control considerations (phase IV). Third, the introduction of hard regulation has aimed at strengthening state control over international and domestic private corporate experts. The tightening of export control measures to the detriment of US AI firms is a case in point.

Conclusion

Our process-tracing analysis has demonstrated how technological change and geopolitical competition have co-shaped the CC trade-off in military AI innovation. It has further shown that the shifting emphasis on competence or control has led to changing modes of control over private corporate experts and military innovation: from collaborative capacity-building (in the 1990s), via soft regulation (2000s to mid-2010s) and attempts at centralised capacity-building (mid to late 2010s), to hard regulation (from the early 2020s).

Our findings suggest several important lessons for policymakers. Heavy reliance on private corporate experts can produce unintended and undesired effects. In a critical field of security policy-making, democratic control is reduced as non-elected and democratically unaccountable actors are empowered. Moreover, heavy reliance on private corporate experts might entail new vulnerabilities and risks for national security, as there is always some risk of private firms giving away technological or strategic secrets to foreign business partners in rival states (e.g., through the export of technology). At the same time, shored-up state control entails the risk of a militarisation of the respective technologies and industries. A strengthened role of the security state as a funder and controller of private AI innovation may in the mid- to long term negatively affect the innovation potential of private corporate experts, such as IT companies and startups, whose expertise is valued precisely because they differ from traditional defence contractors that innovate for the state as the single customer. Moreover, expansive state regulation of AI technology, especially of supply chains, could lead to economic disadvantages for the domestic industry, as technology might be banned from being sold abroad.

Future research should include further AI subfields beyond AWS, such as technologies for data collection and analysis or AI-enabled decision-making and planning support. It could also shed light on when technological change and geopolitical competition, and the ensuing imperatives of competence and control, induce divergent or convergent state strategies. Furthermore, building on our findings for the AI field, future studies should extend to other areas of military innovation where private corporate actors dominate, including fields with high dual-use potential such as cyber- or biotechnology. Moreover, future research should address how policymakers in other states navigate the CC trade-off in military innovation governance. A cross-country comparison of (varying degrees of) oscillation between more and less hierarchical modes of governance would provide further evidence of the broad applicability of our argument. Generally, we would expect to find similar patterns in other capitalist and democratic powers that lead in military AI innovation, including India, Israel, and the UK. Beyond democratic powers and market economies, a study of military innovation governance in China could provide important insights. Despite China’s state-permeated economy, we would expect that the Chinese national security state also has to navigate a CC trade-off as it seeks to control the (semi-)private and public experts that drive innovation in cutting-edge fields such as AI.Footnote 124

Acknowledgements

We thank the editors of European Journal of International Security and two anonymous reviewers for their excellent comments, which helped to further improve the manuscript. We are also grateful for the feedback and suggestions from Benjamin Daßler, Tim Heinkelmann-Wild, Yagnya Kodaru, Berthold Rittberger, Lorenz Sommer, Sanne Verschuuren, Moritz Weiss, and Bernhard Zangl as well as the participants of a panel at the 2024 European Initiative for Security Studies Conference in Prague and the global politics research colloquium at LMU Munich. We further thank Jonathan van Lovenberg for his very good research assistance. We gratefully acknowledge funding from the Fritz Thyssen Foundation (project ‘The Making of National Security: From Contested Complexity to Types of Security States’, Az. 10.23.2.001.PO).

Competing interests

The authors declare no competing interests.

Data availability statement

The data that support the findings of this study are all publicly available.

Andrea Johansen is a researcher in international relations and global governance at LMU Munich. Her research focuses on arms control of dual-use technologies. She studies the conditions under which the regulation of dual-use technologies, such as biotechnology or artificial intelligence, succeeds.

Andreas Kruck is a senior researcher and lecturer in international relations and global governance at LMU Munich. His research focuses on international institutions and private actors in international security and in the global political economy. He studies transformations in the making of security policies and the institutional dynamics of global power rivalries.

References

1 The US Department of Defense defines AI as ‘the ability of machines to perform tasks that normally require human intelligence – for example, recognising patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems’ (U.S. Department of Defense, ‘Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity’ (2019), available at: {https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF}, accessed 29 February 2024).

2 Jeffrey Ding and Allan Dafoe, ‘Engines of power: Electricity, AI, and general-purpose, military transformations’, European Journal of International Security, 8:3 (2023), pp. 377–394; Michael C. Horowitz et al., ‘Correspondence: Military–technological imitation and rising powers’, International Security, 44:2 (2019), pp. 185–192; Michael Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics (Princeton University Press, 2010); Moritz Weiss, ‘How to become a first mover? Mechanisms of military innovation and the development of drones’, European Journal of International Security, 3:2 (2018), pp. 187–210; Tai M. Cheung, ‘A conceptual framework of defence innovation’, Journal of Strategic Studies, 44:6 (2021), pp. 775–801; Justin Haner and Denise Garcia, ‘The artificial intelligence arms race: Trends and world leaders in autonomous weapons development’, Global Policy, 10:3 (2019), pp. 331–337; Ingvild Bode and Guangyu Qiao-Franco (eds), ‘The algorithmic turn in security and warfare’, special issue, Global Society, 38:1 (2024), pp. 1–155; Jeffrey Ding, Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition (Princeton University Press, 2024).

3 Antonio Calcara, ‘Contractors or robots? Future warfare between privatization and automation’, Small Wars & Insurgencies, 33:1–2 (2021); Antonio Calcara et al., ‘Why drones have not revolutionized war: The enduring hider-finder competition in air warfare’, International Security, 46:4 (2022), pp. 130–171; Andrea Gilli and Mauro Gilli, ‘The diffusion of drone warfare? Industrial, organizational, and infrastructural constraints’, Security Studies, 25:1 (2016), pp. 50–84; Michael C. Horowitz et al., ‘Artificial intelligence and international security’ (Center for a New American Security, July 2018), available at: {https://www.cnas.org/publications/reports/artificial-intelligence-and-international-security}, accessed 6 November 2024; Michael Raska and Richard A. Bitzinger, The AI Wave in Defence Innovation: Assessing Military Artificial Intelligence Strategies, Capabilities, and Trajectories (Routledge, 2023); James Johnson, ‘Inadvertent escalation in the age of intelligence machines: A new model for nuclear risk in the digital age’, European Journal of International Security, 7:3 (2022), pp. 337–359.

4 Vincent Boulanin, Mapping the Innovation Ecosystem Driving the Advance of Autonomy in Weapon Systems, SIPRI Working Paper (Stockholm International Peace Research Institute, 2016), available at: {https://www.sipri.org/sites/default/files/Mapping-innovation-ecosystem-driving-autonomy-in-weapon-systems.pdf}, accessed 6 November 2024; Adam Grissom, ‘The future of military innovation studies’, Journal of Strategic Studies, 29:5 (2006), pp. 905–934; Michael C. Horowitz and Shira Pindyck, ‘What is a military innovation and why it matters’, Journal of Strategic Studies, 46:1 (2023), pp. 85–114; Kendrick Kuo, ‘Dangerous changes: When military innovation harms combat effectiveness’, International Security, 47:2 (2022), pp. 48–87.

5 Ingvild Bode and Hendrik Huelss, ‘Autonomous weapons systems and changing norms in international relations’, Review of International Studies, 44:3 (2018), pp. 393–413; Michael C. Haas and Sophie-Charlotte Fischer, ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’, Contemporary Security Policy, 38:2 (2017), pp. 281–306; Benjamin M. Jensen, Christopher Whyte, and Scott Cuomo, ‘Algorithms at war: The promise, peril, and limits of artificial intelligence’, International Studies Review, 22:3 (2020), pp. 526–550; Ingvild Bode and Hendrik Huelss, ‘Constructing expertise: The front- and back-door regulation of AI’s military applications in the European Union’, Journal of European Public Policy, 30:7 (2023), pp. 1230–1254.

6 Gilli and Gilli, ‘The diffusion of drone warfare?’; Horowitz, The Diffusion of Military Power; Stephen P. Rosen, Winning the Next War: Innovation and the Modern Military (Cornell University Press, 1991); Thazha V. Paul and Norrin M. Ripsman, Globalization and the National Security State (Oxford University Press, 2010).

7 Weiss, ‘How to become a first mover?’; Linda Weiss, America Inc.? Innovation and Enterprise in the National Security State (Cornell University Press, 2014); Barry R. Posen, The Sources of Military Doctrine: France, Britain, and Germany Between the World Wars (Cornell University Press, 1984).

8 Kaija Schilde, ‘Weaponising Europe? Rule-makers and rule-takers in the EU regulatory security state’, Journal of European Public Policy, 30:7 (2023), pp. 1255–1280; Todd Sandler and Keith Hartley, The Economics of Defense, Cambridge Surveys of Economic Literature (Cambridge University Press, 1995); Moritz Weiss and Vytautas Jankauskas, ‘Securing cyberspace: How states design governance arrangements’, Governance, 32:2 (2019), pp. 259–275.

9 Peter A. Hall and David W. Soskice (eds), Varieties of Capitalism: The Institutional Foundations of Comparative Advantage (Oxford University Press, 2013); Antonio Calcara and Raffaele Marchetti, ‘State–industry relations and cybersecurity governance in Europe’, Review of International Political Economy, 29:4 (2022), pp. 1237–1262.

10 Kenneth W. Abbott et al., ‘Competence versus control: The governor’s dilemma’, Regulation & Governance, 14:4 (2020), pp. 619–636.

11 Sophie-Charlotte Fischer, Andrea Gilli, and Mauro Gilli, ‘Technological change and grand strategy’, in Thierry Balzacq and Ronald R. Krebs (eds), The Oxford Handbook of Grand Strategy (Oxford University Press, 2021), pp. 221–38; Philipp Genschel and Markus Jachtenfuchs, ‘The security state in Europe: Regulatory or positive?’, Journal of European Public Policy, 30:7 (2023), pp. 1447–1457; Andreas Kruck and Moritz Weiss, ‘The regulatory security state in Europe’, Journal of European Public Policy, 30:7 (2023), pp. 1205–1229; Andreas Kruck and Moritz Weiss, ‘Disentangling Leviathan on its home turf: Authority foundations, policy instruments, and the making of security’, Regulation & Governance (2024), pp. 146–160; Weiss, ‘How to become a first mover?’

12 Calcara, ‘Contractors or robots?’; Calcara et al., ‘Why drones have not revolutionized’; Gilli and Gilli, ‘The diffusion of drone warfare?’; Horowitz, The Diffusion of Military Power; Michael C. Horowitz, ‘When speed kills: Lethal autonomous weapon systems, deterrence and stability’, Journal of Strategic Studies, 42:6 (2019), pp. 764–788; Antonio Calcara et al., ‘Will the drone always get through? Offensive myths and defensive realities’, Security Studies, 31:5 (2022), pp. 791–825.

13 Kruck and Weiss, ‘The regulatory security state in Europe’; Kruck and Weiss, ‘Disentangling Leviathan on its home turf’.

14 Kenneth W. Abbott, David Levi-Faur, and Duncan Snidal, ‘Theorizing regulatory intermediaries’, The Annals of the American Academy of Political and Social Science, 670:1 (2017), pp. 14–35; Abbott et al., ‘Competence versus control’; Kenneth W. Abbott, Philipp Genschel, Duncan Snidal, and Bernhard Zangl, ‘Two logics of indirect governance: Delegation and orchestration’, British Journal of Political Science, 46:4 (2016), pp. 719–729.

15 Alexander L. George and Andrew Bennett, Case Studies and Theory Development in the Social Sciences (MIT Press, 2005); Derek Beach and Rasmus B. Pedersen, Process-Tracing Methods: Foundations and Guidelines (University of Michigan Press, 2019).

16 Ingvild Bode, ‘Practice-based and public-deliberative normativity: Retaining human control over the use of force’, European Journal of International Relations, 29:4 (2023), pp. 990–1016; Horowitz, ‘When speed kills’; Ingvild Bode and Anna Nadibaidze, ‘Autonomous drones’, in James P. Rogers (ed.), De Gruyter Handbook of Drone Warfare (De Gruyter, 2024), pp. 369–84; Caroline Kennedy-Pipe, James I. Rogers, and Tom Waldman, Drone Chic (Oxford Research Group, 2016).

17 Abbott et al., ‘Competence versus control’.

18 Elke Krahmann, ‘Choice, voice, and exit: Consumer power and the self-regulation of the private security industry’, European Journal of International Security, 1:1 (2016), pp. 27–48.

19 Giandomenico Majone, ‘From the positive to the regulatory state: Causes and consequences of changes in the mode of governance’, Journal of Public Policy, 17:2 (1997), pp. 139–167; Giandomenico Majone, ‘The rise of the regulatory state in Europe’, West European Politics, 17:3 (1994), pp. 77–101; David Levi-Faur, ‘The global diffusion of regulatory capitalism’, The Annals of the American Academy of Political and Social Science, 598:1 (2005), pp. 12–32; Philipp Genschel and Bernhard Zangl, ‘State transformations in OECD countries’, Annual Review of Political Science, 17:1 (2014), pp. 337–354.

20 Kruck and Weiss, ‘The regulatory security state in Europe’; Kruck and Weiss, ‘Disentangling Leviathan on its home turf’.

21 Samuel P. Huntington, The Soldier and the State: The Theory and Politics of Civil–Military Relations (Belknap Press, 2002); Charles Tilly (ed.), The Formation of National States in Western Europe (Princeton University Press, 1975).

22 Genschel and Jachtenfuchs, ‘The security state in Europe’.

23 Weiss and Jankauskas, ‘Securing cyberspace’; Moritz Weiss, ‘Varieties of privatization: Informal networks, trust and state control of the commanding heights’, Review of International Political Economy, 28:3 (2021), pp. 662–689.

24 Erica de Bruin, ‘Mapping coercive institutions: The state security forces dataset, 1960–2010’, Journal of Peace Research, 58:2 (2021), pp. 315–325; Paul and Ripsman, Globalization and the National Security State.

25 Abbott et al., ‘Competence versus control’.

26 Ibid.

27 Andreas Kruck, ‘Governing private security companies’, in Kenneth W. Abbott et al. (eds), The Governor’s Dilemma (Oxford University Press, 2020), pp. 137–56; Weiss, ‘Varieties of privatization’.

28 Bode and Huelss, ‘Autonomous weapons systems and changing norms in international relations’; Fischer, Gilli, and Gilli, ‘Technological change and grand strategy’; Horowitz, The Diffusion of Military Power; Weiss, ‘How to become a first mover?’; Cheung, ‘A conceptual framework of defence innovation’.

29 Ding, Technology and the Rise of Great Powers; Ding and Dafoe, ‘Engines of power’.

30 Ding and Dafoe, ‘Engines of power’.

31 Myriam Dunn Cavelty and Andreas Wenger, ‘Cyber security meets security politics: Complex technology, fragmented politics, and networked science’, Contemporary Security Policy, 41:1 (2020), pp. 5–32; Vicky Karyoti, Olivier Schmitt, and Amelie Theussen, ‘Great powers and war in the 21st century: Blast from the past’, in Artur Gruszczak and Sebastian Kaempf (eds), Routledge Handbook of the Future of Warfare (Routledge, 2024).

32 Genschel and Jachtenfuchs, ‘The security state in Europe’; R. D. Kelemen and Kathleen R. McNamara, ‘State-building and the European Union: Markets, war, and Europe’s uneven political development’, Comparative Political Studies, 55:6 (2022), pp. 963–991; Cheung, ‘A conceptual framework of defence innovation’.

33 Paul and Ripsman, Globalization and the National Security State; Steven E. Lobell, Norrin M. Ripsman, and Jeffrey W. Taliaferro, Neoclassical Realism, the State, and Foreign Policy (Cambridge University Press, 2009); on external threat perceptions as drivers of innovation see Mark Z. Taylor, The Politics of Innovation: Why Some Countries Are Better than Others at Science & Technology (Oxford University Press, 2016).

34 Horowitz, The Diffusion of Military Power; Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Anthem Press, 2013).

35 Beach and Pedersen, Process-Tracing Methods; John Gerring, Case Study Research, 2nd ed. (Cambridge University Press, 2017).

36 U.S. Department of Defense, Directive 3000.09: Autonomy in Weapon Systems (Department of Defense, 2012).

37 We rely on military expenditure data from SIPRI and on patchier data on AI-specific spending provided by policy analysts (Margarita Konaev et al., ‘U.S. military investments in autonomy and AI: A budgetary assessment’, CSET Policy Brief [Center for Security and Emerging Technology, October 2020], available at: {https://cset.georgetown.edu/publication/u-s-military-investments-in-autonomy-and-ai-a-budgetary-assessment/}, accessed 6 November 2024; Danielle C. Tarraf et al., ‘The Department of Defense posture for artificial intelligence: Assessment and recommendations’ [RAND Corporation, 2019], available at: {https://www.rand.org/pubs/research_reports/RR4229.html}).

38 See Avi Goldfarb and Jon R. Lindsay, ‘Prediction and judgment: Why artificial intelligence increases the importance of humans in war’, International Security, 46:3 (2022), pp. 7–50; Andrew J. Lohn and Micah Musser, ‘AI and compute’, CSET Issue Brief (January 2022), pp. 5–6, available at: {https://cset.georgetown.edu/publication/ai-and-compute/}, accessed 6 November 2024.

39 Patrick Thomas and Dewey Murdick, ‘Patents and artificial intelligence: A primer’, Center for Security and Emerging Technology, CSET Data Brief (September 2020), available at: {https://cset.georgetown.edu/wp-content/uploads/CSET-Patents-and-Artificial-Intelligence-1.pdf}, accessed 6 November 2024.

40 Nestor Maslej et al., ‘The AI index 2024 annual report’ (2024), available at: {https://aiindex.stanford.edu/report/}, accessed 12 July 2024.

41 Thomas and Murdick, ‘Patents and artificial intelligence: A primer’; Daniel Zhang et al., ‘The AI index 2021 annual report’ (2021), available at: {https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf}, accessed 9 August 2024, p. 181.

42 SIPRI, ‘SIPRI military expenditure database’, available at: {https://milex.sipri.org/sipri}, accessed 17 September 2024.

43 Benjamin Miller, Grand Strategy from Truman to Trump, with the assistance of Ziv Rubinovitz (University of Chicago Press, 2020), chs 7 and 8.

44 Historical overviews (e.g., Jade Leung, ‘Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies’, PhD thesis, University of Oxford, 2019; Stephan de Spiegeleire and Matthijs Maas, ‘Artificial intelligence and the future of defense: Strategic implications for small- and medium-sized force providers’, The Hague Centre for Strategic Studies, available at: {https://hcss.nl/report/artificial-intelligence-and-the-future-of-defense/}, accessed 6 November 2024) often depict the early history of the AI field as a series of ‘summers’ and ‘winters’, that is, an evolution in which significant breakthroughs, in both software and hardware development, alternated with phases of stagnation and declining investment.

45 Leung, ‘Who will govern artificial intelligence?’; Spiegeleire and Maas, ‘Artificial intelligence and the future of defense’; Weiss, ‘How to become a first mover?’

46 OECD, ‘Measuring the digital transformation: A roadmap for the future’ (Organisation for Economic Co-operation and Development, 2019), p. 32.

47 National Research Council, Funding a Revolution: Government Support for Computing Research (National Academy Press, 1999).

48 Richard Fikes and Tom Garvey, ‘Knowledge representation and reasoning – A history of DARPA leadership’, AI Magazine, 41:2 (2020), pp. 9–21; Vincent Boulanin and Maaike Verbruggen, ‘Mapping the development of autonomy in weapon systems’, SIPRI Report (Stockholm International Peace Research Institute, November 2017), available at: {https://www.sipri.org/publications/2017/policy-reports/mapping-development-autonomy-weapon-systems}, accessed 6 November 2024; Leung, ‘Who will govern artificial intelligence?’

49 Loitering weapons can be considered a hybrid between guided munitions and unmanned combat aerial systems. They are capable of loitering over an extended period of time and can autonomously find and attack targets.

50 Barry D. Watts, ‘Six decades of guided munitions and battle networks: Progress and prospects’, Center for Strategic and Budgetary Assessments (March 2007), pp. 280–3, available at: {https://csbaonline.org/uploads/documents/2007.03.01-Six-Decades-Of-Guided-Weapons.pdf}, accessed 4 June 2024.

51 Lohn and Musser, ‘AI and compute’, p. 4; OpenAI, ‘AI and compute’, available at: {https://openai.com/index/ai-and-compute/}, accessed 9 August 2024.

52 Zhang et al., ‘The AI index 2021 annual report’, p. 31; OECD, ‘Measuring the digital transformation’.

53 Leung, ‘Who will govern artificial intelligence?’, p. 255.

54 J. Brustein and M. Bergen, ‘Google wants to do business with the military – Many of its employees don’t’, Bloomberg.com (21 November 2019), available at: {https://www.bloomberg.com/features/2019-google-military-contract-dilemma/}, accessed 6 November 2024.

55 Shaleen Khanal, Hongzhou Zhang, and Araz Taeihagh, ‘Why and how is the power of big tech increasing in the policy process? The case of generative AI’, Policy and Society, 44:1 (2024), pp. 52–69; Araz Taeihagh, ‘Governance of artificial intelligence’, Policy and Society, 40:2 (2021), pp. 137–157.

56 James M. Page and John Williams, ‘Drones, Afghanistan, and beyond: Towards analysis and assessment in context’, European Journal of International Security, 7:3 (2022), pp. 283–303; Christian Enemark, ‘The enduring problem of “grey” drone violence’, European Journal of International Security, 7:3 (2022), pp. 304–321.

57 Abigail R. Hall and Christopher J. Coyne, ‘The political economy of drones’, Defence and Peace Economics, 25:5 (2013), p. 453.

58 U.S. Joint Forces Command, ‘Unmanned effects (UFX): Taking the human out of the loop’, Rapid Assessment Process (RAP) Report #03-10 (2003), available at: {http://edocs.nps.edu/dodpubs/org/JFC/RAPno.03-10.pdf}, accessed 6 November 2024.

59 SIPRI, ‘SIPRI military expenditure database’.

60 Miller, Grand Strategy from Truman to Trump, ch. 8.

61 DARPA, ‘The DARPA grand challenge: Ten years later’, DARPA Press Release (13 March 2014), available at: {https://www.darpa.mil/news/2014/grand-challenge-ten-years-later}, accessed 6 November 2024.

62 Joshua Alspector and Thomas G. Dietterich, ‘DARPA’s role in machine learning’, AI Magazine, 41:2 (2020), pp. 36–48.

63 DARPA, ‘SyNAPSE program develops advanced brain-inspired chip’, DARPA Press Release (7 August 2014), available at: {https://www.darpa.mil/news/2014/synapse-program-brain-inspired-chip}, accessed 6 November 2024.

64 Nancy Owano, ‘DARPA robotic hand prototype shows advanced moves’, Phys.org (2 May 2013), available at: {https://phys.org/news/2013-05-darpa-robotic-prototype-advanced-video.html}, accessed 12 September 2024.

65 Defense Innovation Unit, ‘About DIU’, available at: {https://www.diu.mil/}, accessed 18 September 2024.

66 Patrick Lin and Evan Selinger, ‘Inside Google’s mysterious ethics board’, Forbes (3 February 2014), available at: {https://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/}, accessed 9 October 2024.

67 OpenAI, ‘Introducing OpenAI’, available at: {https://openai.com/index/introducing-openai/}, accessed 9 October 2024.

68 Satoru Mori, ‘US defense innovation and artificial intelligence’, Asia-Pacific Review, 25:2 (2018), pp. 16–44.

69 Alex Hern, ‘“Partnership on AI” formed by Google, Facebook, Amazon, IBM and Microsoft’, Guardian (28 September 2016), available at: {https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms}, accessed 9 October 2024.

70 Sundar Pichai, ‘AI at Google: Our principles’, Google (7 June 2018), available at: {https://blog.google/technology/ai/ai-principles/}, accessed 9 October 2024; Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, 1:9 (2019), pp. 389–399.

71 Defense Innovation Board, ‘AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense’ (2019), available at: {https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF}, accessed 27 September 2024.

72 Weiss, America Inc.?

73 Boulanin and Verbruggen, ‘Mapping the development of autonomy in weapon systems’.

74 Boulanin and Verbruggen, ‘Mapping the development of autonomy in weapon systems’; Jan Tegler, ‘GIANT STEPS: DARPA’s X-planes and the quest to redefine the boundaries of flight’, in Defense Advanced Research Projects Agency 1958–2018 (Washington, D.C., 2018), pp. 38–45, available at: {https://www.darpa.mil/sites/default/files/attachment/2025-02/magazine-darpa-60th-anniversary.pdf}, accessed 25 October 2025.

75 Christopher G. Pernin et al., ‘Lessons from the Army’s future combat systems program’ (RAND Corporation, 2012), prepared for the US Army, available at: {https://www.rand.org/content/dam/rand/pubs/monographs/2012/RAND_MG1206.sum.pdf}, accessed 6 November 2024.

76 U.S. Government Accountability Office, ‘Defense acquisitions: Issues to be considered for Army’s modernization of combat systems’, Statement of Paul L. Francis, Managing Director Acquisition and Sourcing Management (GAO-09-793T) (Washington, DC, 16 June 2009), p. 7, available at: {https://www.gao.gov/assets/gao-09-793t.pdf}, accessed 12 September 2024.

77 Patrick Porter, ‘Advice for a dark age: Managing great power competition’, The Washington Quarterly, 42:1 (2019), pp. 7–25; Xinbo Wu, ‘On Sino–U.S. strategic competition’, World Economics and Politics, 5:1 (2020), pp. 96–130.

78 Zhang et al., ‘The AI index 2021 annual report’, p. 31.

79 Lohn and Musser, ‘AI and compute’.

80 SIPRI, ‘SIPRI military expenditure database’.

81 Konaev et al., ‘U.S. military investments in autonomy and AI’; Ashwin Acharya and Zachary Arnold, ‘Chinese public AI R&D spending: Provisional findings’, CSET Issue Brief (Center for Security and Emerging Technology, December 2019), available at: {https://cset.georgetown.edu/wp-content/uploads/Chinese-Public-AI-RD-Spending-Provisional-Findings-1.pdf}, accessed 12 July 2024.

82 U.S. Department of Defense, ‘Military and security developments involving the People’s Republic of China 2018: Annual report to Congress’ (2018), available at: {https://media.defense.gov/2018/Aug/16/2001955282/-1/-1/1/2018-CHINA-MILITARY-POWER-REPORT.PDF}, accessed 3 April 2023; U.S. Department of Defense, ‘Military and security developments involving the People’s Republic of China 2019: Annual report to Congress’ (2019), available at: {https://media.defense.gov/2019/May/02/2002127082/-1/-1/1/2019_CHINA_MILITARY_POWER_REPORT.pdf}, accessed 3 April 2023.

83 Jeffrey Ding, Deciphering China’s AI Dream: The Context, Components, Capabilities, and Consequences of China’s Strategy to Lead the World in AI (Future of Humanity Institute, University of Oxford, 2018), available at: {https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf}, accessed 24 March 2023; Tai M. Cheung, Eric Anderson, and Fan Yang, ‘Chinese defense industry reforms and their implications for US–China military technological competition’, Study of Innovation and Technology in China (SITC) Research Brief (4 January 2017), available at: {https://escholarship.org/uc/item/84v3d66k}, accessed 6 November 2024.

84 Ashton Carter, ‘The path to the innovative future of defense, keynote address at the Center for Strategic and International Studies’, Center for Strategic and International Studies (28 October 2016), available at: {https://csis-website-prod.s3.amazonaws.com/s3fs-public/event/161028_Secretary_Ashton_Carter_Keynote_Address_The_Path_to_the_Innovative_Future_of_Defense.pdf}, accessed 27 March 2023; Colin Clark, ‘Our artificial intelligence “Sputnik moment” is now: Eric Schmidt & Bob Work’, Breaking Defense (1 November 2017), available at: {https://breakingdefense.com/2017/11/our-artificial-intelligence-sputnik-moment-is-now-eric-schmidt-bob-work/}, accessed 30 March 2023.

85 Chuck Hagel, ‘Reagan National Defense Forum keynote’ (15 November 2014), available at: {https://www.defense.gov/News/Speeches/Speech/Article/606635/}, accessed 6 November 2024.

86 Lauren A. Kahn, ‘Risky incrementalism: Defense AI in the United States’, in Heiko Borchert, Torben Schütz, and Joseph Verbovszky (eds), The Very Long Game (Springer Nature Switzerland, 2024), pp. 39–61 (p. 46).

87 Boulanin and Verbruggen, ‘Mapping the development of autonomy in weapon systems’.

88 Gian P. Gentile et al., ‘A history of the third offset, 2014–2018’, Research Report (RAND Corporation, 2021), available at: {https://www.rand.org/pubs/research_reports/RRA454-1.html}, accessed 19 March 2025.

89 Hagel, ‘Reagan National Defense Forum keynote’.

90 U.S. Department of Defense, ‘Memo on JAIC Establishment’ (2018), available at: {https://admin.govexec.com/media/establishment_of_the_joint_artificial_intelligence_center_osd008412-18_r…pdf}, accessed 6 November 2024.

91 Tarraf et al., ‘The Department of Defense posture for artificial intelligence’.

92 U.S. Army, ‘Robotic and autonomous systems strategy’ (2017), available at: {https://mronline.org/wp-content/uploads/2018/02/RAS_Strategy.pdf}, accessed 8 June 2024.

93 U.S. Department of Defense, ‘Autonomy in weapon systems: DoD Directive 3000.09’ (2012), available at: {https://ogc.osd.mil/Portals/99/autonomy_in_weapon_systems_dodd_3000_09.pdf}, accessed 6 November 2024.

94 Boulanin and Verbruggen, ‘Mapping the development of autonomy in weapon systems’; Jesse Ellman et al., ‘Defense acquisition trends, 2016: The end of the contracting drawdown’, Report of the CSIS Defense360 Series on Strategy, Budget, Forces, and Acquisition (Washington, D.C., Center for Strategic and International Studies, March 2017), available at: {https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/170309_Ellman_AcquisitionTrends2016_Web.pdf}, accessed 6 November 2024.

95 Jeremiah Gertler, ‘History of the Navy UCLASS program requirements: In brief’, Congressional Research Service (3 August 2015), available at: {https://sgp.fas.org/crs/weapons/R44131.pdf}, accessed 7 June 2024; U.S. Government Accountability Office, ‘Unmanned carrier-based aircraft system: Navy needs to demonstrate match between its requirements and available resources’, Report to Congressional Committees (May 2015), available at: {https://www.gao.gov/assets/gao-15-374.pdf}, accessed 7 June 2024.

96 Sydney J. Freedberg, ‘Good-bye, UCLASS; Hello, unmanned tanker, more F-35Cs in 2017 budget’, Breaking Defense (1 February 2016), available at: {https://breakingdefense.com/2016/02/good-bye-uclass-hello-unmanned-tanker-more-f-35cs-in-2017-budget/}, accessed 7 June 2024.

97 Defense Science Board, ‘Summer study on autonomy’, U.S. Department of Defense (June 2016), p. 37, available at: {https://www.hsdl.org/?abstract&did=794641}, accessed 6 November 2024.

98 Tarraf et al., ‘The Department of Defense posture for artificial intelligence’; National Security Commission on AI, ‘Final report’; U.S. Government Accountability Office, ‘Military acquisitions’.

99 SIPRI, ‘SIPRI military expenditure database’.

100 The White House, ‘National security strategy’ (2022), p. 23, available at: {https://www.whitehouse.gov/wp-content/uploads/2022/10/Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf}, accessed 17 September 2024.

101 U.S. Department of Defense, ‘Report on the military and security developments involving the People’s Republic of China (CMPR)’, Report to Congress (2023), pp. 22, 95, 170–1, available at: {https://media.defense.gov/2023/Oct/19/2003323409/-1/-1/1/2023-MILITARY-AND-SECURITY-DEVELOPMENTS-INVOLVING-THE-PEOPLES-REPUBLIC-OF-CHINA.PDF}, accessed 17 September 2024.

102 Maslej et al., ‘The AI index 2024 annual report’, p. 12.

103 Lohn and Musser, ‘AI and compute’, p. 4; OpenAI, ‘AI and compute’.

104 Maslej et al., ‘The AI index 2024 annual report’, pp. 30–4.

105 Paula Höhrová, Jakub Soviar, and Włodzimierz Sroka, ‘Market analysis of drones for civil use’, LOGI – Scientific Journal on Transport and Logistics, 14:1 (2023), p. 55.

106 National Security Commission on AI, ‘Final report’.

107 Zhang et al., ‘The AI index 2021 annual report’, pp. 168–70.

108 U.S. Department of Defense, ‘2023 data, analytics and artificial intelligence adoption strategy: Accelerating decision advantage’ (November 2023), available at: {https://media.defense.gov/2023/Nov/02/2003333301/-1/-1/1/DAAIS_FACTSHEET.PDF}, accessed 7 February 2024.

109 Ana Swanson and Claire Fu, ‘With smugglers and front companies, China is skirting American A.I. bans’, New York Times (4 August 2024), available at: {https://www.nytimes.com/2024/08/04/technology/china-ai-microchips.html}, accessed 18 September 2024.

110 U.S. Department of Defense, ‘Executive summary: DoD data strategy: Unleashing data to advance the national defense strategy’ (2020), available at: {https://media.defense.gov/2020/Oct/08/2002514180/-1/-1/0/DOD-DATA-STRATEGY.PDF}, accessed 6 November 2024.

111 Other transactions (OTs) are designed to circumvent the traditional Federal Acquisition Regulation (FAR) and its (significantly higher) bureaucratic hurdles. They are intended to incentivise commercial companies’ participation in defence-related projects; Marijn Hoijtink, ‘“Prototype warfare”: Innovation, optimisation, and the experimental way of warfare’, European Journal of International Security, 7:3 (2022), pp. 322–336.

112 DARPA, ‘DARPA announces $2 billion campaign to develop next wave of AI technologies’, available at: {https://www.darpa.mil/news-events/2018-09-07}, accessed 18 September 2024.

113 DARPA, ‘AI forward: Reimagining the future of artificial intelligence for national security’, available at: {https://www.darpa.mil/work-with-us/ai-forward}, accessed 18 September 2024.

114 Defense Innovation Unit, ‘UAS solutions for the U.S. DoD’, available at: {https://www.diu.mil/blue-uas}, accessed 8 June 2024.

115 Congressional Research Service, ‘Defense primer: U.S. policy on lethal autonomous weapon systems’ (1 February 2024), available at: {https://crsreports.congress.gov/product/pdf/IF/IF11150}, accessed 12 June 2024.

116 Höhrová, Soviar, and Sroka, ‘Market analysis of drones for civil use’, p. 61.

117 U.S. Congress, ‘Unmanned Aerial Security Act’, available at: {https://www.congress.gov/congressional-report/118th-congress/house-report/151/1?outputFormat=pdf}, accessed 18 September 2024.

118 Donald J. Trump, Executive Order 13981 – Protecting the United States from Certain Unmanned Aircraft Systems (2021), available at: {https://www.govinfo.gov/content/pkg/DCPD-202100037/pdf/DCPD-202100037.pdf}, accessed 8 June 2024.

119 Paul Pierson, ‘Increasing returns, path dependence, and the study of politics’, American Political Science Review, 94:2 (2000), pp. 251–267.

120 Terry M. Moe, The Politics of Institutional Reform: Katrina, Education, and the Second Face of Power (Cambridge University Press, 2019).

121 George Tsebelis, Veto Players: How Political Institutions Work (Princeton University Press, 2011).

122 See Michael H. Armacost, The Politics of Weapons Innovation: The Thor-Jupiter Controversy (Columbia University Press, 1969); Daniel L. Byman and Kenneth M. Pollack, ‘Let us now praise great men: Bringing the statesman back in’, International Security, 25:4 (2001), pp. 107–146; Margaret G. Hermann et al., ‘Who leads matters: The effects of powerful individuals’, International Studies Review, 3:2 (2001), pp. 83–131.

123 Kaija Schilde, The Political Economy of European Security (Cambridge University Press, 2017); Moritz Weiss and Nicolas Krieger, ‘The political economy of cybersecurity: Governments, firms and opportunity structures for business power’, Contemporary Security Policy, 46:3 (2025), pp. 403–428.

124 Hoijtink, ‘Prototype warfare’.

Table 1. Conceptualising modes of state control.

Table 2. Explaining modes of control: varying drivers of the CC trade-off.

Figure 1. Drivers of CC trade-off and modes of control over AWS innovation.