Introduction
In March 2024, the United States (US) Army awarded approximately $178 million to software company Palantir for developing and building the Tactical Intelligence Targeting Access Node, or TITAN. TITAN is a ground system integrating artificial intelligence (AI) and machine learning (ML) technologies for processing intelligence and supporting targeting.Footnote 1 Palantir subsequently released a promotional video which visually demonstrates how TITAN would work, claiming that the system ‘delivers AI at the edge that can be tailored to the mission, ensuring soldiers have the latest and most relevant models wherever the mission takes them, even as the threat landscape evolves’.Footnote 2 One year earlier, in May 2023, another US technology company named after a Lord of the Rings prop – a sword, not an all-seeing orb – had unveiled its own military operating system in an eight-minute video. Anduril promotes its software program Lattice as ‘a paradigm shift’ in the way militaries fight, pointing out that ‘affordable mass’ should be the new goal of the US military: ‘massive amounts of lower-cost, more intelligent, more attritable military systems’ under the command of ‘a single human operator’ through software integrating AI and ML technologies.Footnote 3
In recent years, the production of sensational audiovisual material has become an important practice for commercial developers of AI- and ML-based military decision-support systems to promote their products.Footnote 4 But these promotional materials are more than simple advertisements through which technology companies attempt to convince those in power to buy their products. In this article, we show that private actors construct claims of epistemic authority on the future of war through digitally constructed product presentations, such as the videos introducing Palantir’s TITAN and Anduril’s Lattice, which we coin ‘virtual military demonstrations’. We argue that these demonstrations entrench particular regimes of truth and knowledge production that shape contemporary understandings of algorithmic warfare as a strategic and moral imperative for the survival of Western democracies.
We develop our argument in two ways. First, we contend that virtual military demonstrations provide defence tech companies with the technological capabilities to claim epistemic authority on the role of AI and ML in coming wars. By displaying technical and military expertise in a digitally enhanced audiovisual spectacle, commercial actors present themselves as deeply knowledgeable on the conduct of future conflicts, which they suggest will be defined by large quantities of data, AI- and ML-based data analysis and decision-support systems, and massive amounts of expendable military systems – what they call ‘attritable mass’ – to conduct military operations. Virtual demonstrations allow companies to claim expertise while simultaneously pursuing commercial and political goals. By portraying the rapid adoption of AI and ML technologies in the military as a strategic necessity, tech corporations persuade governments and militaries to invest large amounts of financial, technological and human resources in the development and acquisition of their algorithmic tools of war.
Second, we argue that virtual military demonstrations allow commercial actors to produce knowledge that reinforces an image of war as a clean and efficient phenomenon. The visual representations of algorithmic warfare in the companies’ promotional materials obscure some of the harshest realities of armed conflict: civilian injuries and deaths, infrastructural destruction, technical malfunctions, and decision-making errors. Moreover, the demonstrations, and the military decision-support systems that are promoted in them, are discursively contextualised as part of an ongoing Manichean struggle between Western democracies and their geopolitical rivals. Consequently, private technology companies’ virtual demonstrations transform perceptions of (algorithmic) warfare as a complex political phenomenon and inherently cruel practice of organised violence into a clean, controllable, and precise business that is framed not only as a strategic but also as a moral imperative to ‘safeguard the legitimacy of the democratic project itself’.Footnote 5 To illustrate our argument, we conduct in-depth case studies of virtual demonstrations featuring two AI- and ML-enabled military decision-support systems: Palantir’s TITAN and Anduril’s Lattice.
We make two unique contributions: first, we strengthen the existing scholarship at the intersection of International Relations (IR), Critical Security Studies, and Science and Technology Studies (STS) that focuses on the rising influence of non-traditional defence companies in the development of military software and hardware.Footnote 6 Against the backdrop of contemporary wars in Ukraine and Gaza, both Big Tech and smaller defence tech companiesFootnote 7 have taken up a central position in the advance of security and military technologies, especially in the sphere of AI and ML development.Footnote 8 Technology firms such as Palantir, Anduril, Shield AI, and Helsing, but also established multinational corporations such as Google and Microsoft, are developing and promoting their products in ways that embed certain discourses and materialise long-lasting visions of future warfare, effectively reformatting these imaginations into the realities of contemporary warfighting.Footnote 9
We conceptualise this evolution as a novel iteration of James Der Derian’s notion of ‘virtuous war’, which he describes as ‘the technical capability and ethical imperative to threaten and, if necessary, actualize violence from a distance—with no or minimal casualties’.Footnote 10 However, whereas Der Derian argued that the US government and military were leading the ‘virtual revolution’,Footnote 11 we now increasingly see private companies in the driver’s seat.Footnote 12 By exposing how virtual demonstrations provide these companies with the technical means to claim epistemic authority, allowing them to produce knowledge and propagate their AI- and ML-based decision-support systems as strategic and ethical imperatives for Western militaries, this article not only expands the conceptualisation of the role of private technology corporations in warfareFootnote 13 but also contributes to the Critical Security Studies literature that focuses on the role of visuals in the normalisation and legitimation of martial violence.Footnote 14
Second, our empirical analysis provides original insights into the visions, discourses, and practices of two prominent US-based technology companies, Palantir and Anduril. Both companies are archetypal cases of non-traditional defence firms that have developed and demonstrated military decision-support software. They are quickly becoming central actors within the international defence ecosystem,Footnote 15 by claiming that their AI-based software programs enable and facilitate ‘better’, ‘faster’, and ‘more accurate’ methods of war, necessary to ‘safeguard’ Western democracies in the coming years.Footnote 16 Both cases therefore accurately illustrate how private tech companies are assuming an increasingly important role in the way knowledge about algorithmic warfare is produced and perceived.
The paper proceeds as follows. The first section links the rise of the military-industrial-commercial complex and the technological evolution towards the incorporation of AI and ML on modern battlefields with Der Derian’s notion of ‘virtuous war’ and Roberto Gonzalez’s concept of ‘virtual war’, to understand the increasing military role of technology companies. We then develop a conceptualisation of virtual military demonstrations and explain how this international practice can shape perceptions of (algorithmic) warfare by allowing companies to claim epistemic authority and fix knowledge production regimes. In the third section, we clarify our methodological choices, after which we empirically illustrate our main arguments through an assessment of how Palantir and Anduril virtually advertise their AI- and ML-based military decision-support systems TITAN and Lattice. We conclude by reflecting on how future research on the role of technology companies in war can be conducted.
Digital technologies, virtual/virtuous war, and the role of tech companies
As the information age advances, our societies are increasingly organised around digital data and systems to analyse them.Footnote 17 Militaries partake in this societal evolution. Ongoing armed conflicts highlight militaries’ drive to integrate AI and ML technologies in a variety of applications.Footnote 18 Among them are AI- and ML-based decision-support systems that military personnel employ in decision-making, including for decisions related to targeting.Footnote 19 The technological evolution that materialises on the battlefields of Ukraine and Gaza today borders on what Gonzalez has described as the ‘technological fantasy’ of virtual war:
war conducted by robotic systems, some of which are being programmed for ethical decision-making; the emergence of Silicon Valley as a major center for defense and intelligence work; algorithmically driven propaganda campaigns and psychological operations (psyops) developed and deployed through social media platforms; next-generation social science models aimed at discovering what drives human cooperation and social instability; and predictive modelling and simulation programs, including some ostensibly designed to foresee future conflict.Footnote 20
The vision of virtual war is not new. It is rooted in the assemblage of subjects, resources, technologies, and practices enabling the instantaneous collection, processing, and analysis of large amounts of digital data across geographical and temporal spaces and infrastructures for military purposes. It enables armies and audiences to physically disconnect from the fields of battle.Footnote 21 But most of all, virtual war signifies a long-standing techno-military fetish: waging war with little to no human intervention in order to eliminate human suffering and death (of soldiers, not civilians). Two and a half decades ago, Der Derian already suggested that this virtual conception of war constitutes the claim of a ‘virtuous’ war, at the basis of which lies a ‘technical capability’ to use ‘networked information and virtual technologies to bring “there” here in near-real time and with near-verisimilitude’ in order to conduct a ‘bloodless, humanitarian, hygienic’ war from a distance, thus rendering it a ‘strategic advantage’ and an ‘ethical imperative’.Footnote 22
Over the past three decades, the US government and its ‘traditional’ military-industrial complex have led the way in the global shift to remote warfare, enabled by advances in digital technologies.Footnote 23 Today, the ‘virtualization of violence’Footnote 24 increasingly relies on commercial actors providing the technological tools – algorithms, ML models, training facilities, data centres – needed to maintain and improve the flow and analyses of data. As a consequence, Silicon Valley is rapidly turning into a third leg of the US military-industrial complex.Footnote 25 As Gould et al. have illustrated, it is now more accurate to speak of a military-industrial-commercial complex, where ‘militaries, the “traditional” defense industry and the commercial technology sector engage in new partnerships’, following the private sector’s lead in terms of AI and ML innovation.Footnote 26
This evolution has spurred what Hoijtink and Planqué-van Hardeveld have coined the ‘platformisation’ of the military.Footnote 27 Big Tech companies such as Google are providing Western militaries with platforms acting as digital infrastructures ‘for doing, facilitating, and experimenting with ML for [military] decision-making’.Footnote 28 This platformisation of the military is also driven by smaller-scale technology firms, often founded by ‘visionary’ technologists who are guided through the unpredictable and tricky business and defence landscape by politically well-connected venture capitalists.Footnote 29 Companies such as Palantir, Anduril, and Shield AI are building comprehensive software programs – in addition to military hardware like autonomous or uncrewed armed vehicles – for the purposes of collecting, processing, and analysing large amounts of data via a network of sensors and systems. Advertised and sold as AI-driven decision-support systems that reportedly increase the efficiency, speed, and accuracy of military operations, these software programs seemingly materialise the long-standing dream of virtual warfare.Footnote 30
States and militaries seek to integrate such systems into targeting decision-making, which is normally a complex process involving several actors and steps. AI- and ML-based systems can be used for various purposes as part of this process, including for processing large volumes of data, detecting individuals or objects of interest, getting real-time information on adversary positions, or suggesting potential courses of action.Footnote 31 Ukrainian and Israeli armed forces are known to rely on different types of decision-support systems, while the US Department of Defense (DoD) pursues its development of the Project Maven programme.Footnote 32 So far, decision-support systems have not featured as prominently in the debate on military applications of AI, which has mainly focused on (lethal) autonomous weapon systems (LAWS).
However, the integration of AI technologies and software programs into military decision-making and operation planning ‘may be much more influential’ than LAWS.Footnote 33 As military and commercial interests in decision-support systems become increasingly entangled, developments in this area require more scholarly attention. This article paves the way for future analyses of such systems, as well as the interests and normative roles of the privately owned companies that develop them. Our starting point is the introduction of these systems in the public domain via audiovisual demonstrations in and through digitally enhanced ‘virtual’ environments – or, as we call them, virtual military demonstrations.
Virtual military demonstrations and claims of epistemic authority
States, militaries, and weapon manufacturers have a long-standing tradition of demonstrating and advertising novel military capabilities and weapons systems through exercises, weapons tests, experiments, exhibitions, or employment.Footnote 34 Existing studies indicate that actors showcase military capabilities to deter or coerce others by signalling resolve or intent,Footnote 35 to enhance cooperation between allies and multilateral networks,Footnote 36 to convince potential buyers,Footnote 37 or to elevate status.Footnote 38
But military demonstrations have more than just a rational and linear strategic or commercial impact. As public spectacles recurrently performed and communicated through audiovisual means, such displays of military capabilities are international practices that visually impress spectators and so construct what is considered to be reality.Footnote 39 They filter what is (un)seen and therefore enable or restrain what is ‘said, thought and done’.Footnote 40 In other words, observers are compelled, through the visual patterns, extrapolated visual themes, and contextual practices of visuality evoking affective connections, to consider what they see in visual demonstrations of weapons systems as logical, truthful, and commonsensical representations of military force.Footnote 41 As such, actors engaging in demonstrations visually claim epistemic authority in the domain of warfare. They present themselves as knowledgeable and trustworthy experts by visually showing technical (how weapons systems are prepared, operated, and controlled in a practical manner) and military (how weapons systems are used to attain specific strategic, tactical, or operational goals) expertise over the weapons systems on display.Footnote 42
The evolution of warfare based on ‘networked information and virtual technologies’Footnote 43 – in the US well underway since the Cold War, as Edwards shows in The Closed World Footnote 44 – has spurred the emergence of computerised military command and control software programs. These new military capabilities, of which AI- and ML-based decision-support systems are now ‘advanced as the promissory solution to automating data analysis and reclosing the world’,Footnote 45 are inherently invisible and abstract. They are virtual. Engaging in demonstrations to profess the possibilities and workings of such products thus becomes more difficult. Virtual military capabilities can only be visualised and concretised by virtual means, that is, through digitally built virtual environments in which the practical functioning of AI- and ML-based decision-support systems can be shown. Concretely, this means that real-life video images are interspersed with computer animations and digital renders, as we see in the virtual demonstrations of TITAN and Lattice.Footnote 46
This virtual method of demonstration bolsters the epistemic authority claims that the developers of these products – predominantly commercial actors, as we showed in the previous section – make. Through ‘high tech’ visualisation techniques, digital enhancement, and discursive contextualisation, demonstrating actors construct a virtual environment that reflects their vision of the future of war. It is a vision
of utopian war, identifying a future in which advanced technology makes the processes of military decision-making akin to bouncing a few requests for intelligence or courses of action off an AI-enabled chat system. It envisions complete knowledge of the enemy, the capacity for friendly forces to act unburdened by opposition, and the ability to rapidly generate a list of reliable plans of attack in only seconds.Footnote 47
By centrally placing their own AI- and ML-based decision-support systems in this virtual representation of future war, and by visually demonstrating their products in a technically impressive and militarily useful manner, these companies claim definitive knowledge about winning future conflicts: in the next war, quick and precise military decision-making driven by machine intelligence will be the only way to defeat one’s geopolitical rivals – or so they suggest.Footnote 48 Algorithmic warfare is thus transformed into a strategic imperative for survival.
Moreover, virtual military demonstrations of AI-based decision-support systems and their discursive contextualisation allow commercial actors to produce knowledge that is framed as technical expertise but in reality misrepresents the complexities of warfare in the service of commercial and political objectives. The construction of a virtual environment in which AI- and ML-based programs steer the hostilities of a distant war fought by ‘massive amounts of attritable systems’ instead of human soldiersFootnote 49 reflects Der Derian’s notion of ‘virtuous war’. By promoting visions of sanitised, precise, and ‘bloodless’ violence, the virtually constructed promise of virtuous war ‘cleans up the political discourse as well as the battlefield’.Footnote 50 The technology companies that construct AI and ML technologies and demonstrate them in a digitally enhanced, virtual environment aim to fulfil the ‘resilient fantasy’Footnote 51 of algorithmically steered command and control, with little human intervention.Footnote 52 Warfighting as a clean and efficient practice, objectified by the datafication and mechanisation that AI and ML technologies promise, is thus also framed as an ethical necessity to protect Western democracies.Footnote 53
Technology companies have long constructed broader discourses on military AI development as the strategically and morally right thing to do, which are further reinforced by these virtual demonstrations. As Palantir’s CEO Alex Karp and his colleague Nicholas Zamiska suggest in The Technological Republic, ‘this new era of advanced AI … provides our geopolitical opponents with the most compelling opportunity since the last world war to challenge our global standing’, adding that the pursuit of innovation, experimentation, and collaboration between industry and the military would ‘safeguard the legitimacy of the democratic project itself’.Footnote 54 Similarly, Anduril’s founder Palmer Luckey argued in a TED talk that ‘there is no moral high ground’ in refusing to develop and use AI, as it would be irresponsible ‘to write off an entire category of technology, and in doing so, tie our hands behind our backs and hope we can still win’.Footnote 55 In another interview, Luckey explained that Anduril is ‘not just a neutral company’, but that it would ‘take sides … and fight for the things that our country values, that our allies around the world have in common’.Footnote 56
In the following section, we conduct a detailed analysis of the virtual demonstrations of Palantir’s TITAN and Anduril’s Lattice, showing how these companies claim epistemic authority to frame algorithmically mediated violence as a strategic and moral imperative for Western democracies’ survival.
Virtual demonstrations in practice: Palantir’s TITAN and Anduril’s Lattice
Methodological reflections: Knowledge production about AI through the virtual
Following STS scholarship, we treat technologies as inherently political and social, which involves exercising reflexivity about the role of various actors, including ourselves, in producing knowledge about technological developments. This is especially crucial in the sphere of AI, as current debates are often based on incorrect or incomplete assumptions yet still generate a high level of political, societal, and media interest.Footnote 57 Uncertainties are exacerbated by the low degree of observability of AI and ML technologies, ongoing definitional discussions, and, in a military context, secrecy surrounding AI-related projects as a result of the strategic importance attributed to them. Given that secrecy is a baseline condition for studies on security and military technology, especially in a developing field like military applications of AI, publicly available demonstrations, whether from defence ministries or tech companies, serve as a vital source of knowledge production.Footnote 58
Therefore, reflecting on our roles as researchers analysing virtual military demonstrations, we recognise the challenges in verifying the information presented by demonstrating actors. Actors can stretch the meaning of the term ‘AI’ and make exaggerated claims about their products or capabilities. As Suchman argues, ‘while technologists understand “AI” as a convenient (and highly saleable) shorthand for a suite of statistically based techniques and technologies for automating data analysis, the term falsely implies something singular and unprecedented’.Footnote 59 We therefore highlight the importance of studying virtual military demonstrations from a critical and reflexive perspective. This is also because presenting the information communicated via virtual demonstrations as ‘truths’ without critical reflection risks reinforcing certain types of expertise in the debate on military AI. As noted by STS scholars, what often appears as technical knowledge can be shaped by ideological discourses and political or commercial interests of certain actors, such as tech companies.Footnote 60 Meanwhile, it is common for policymakers to rely on companies’ participation in various initiatives or debates to gain an understanding of the topic, considering the complexity of AI development. For instance, Bode and Huelss find that the European Union ‘grants expertise on military AI selectively to a small group of tech company representatives who then shape regulation formally … and informally’.Footnote 61
This does not imply that corporate actors necessarily have a direct influence on policies or debates in the sphere of AI, but rather that they ‘construct and perform particular discourses and practices of military AI’ and warfare – including via virtual demonstrations.Footnote 62 These discourses and practices contribute to reinforcing certain perceptions of what is considered ‘appropriate’ or ‘normal’ in warfare.Footnote 63 However, they are often treated as part of ‘objective’ technical expertise. Researchers using virtual demonstrations without critically reflecting upon the types of discourses and practices in these performances risk amplifying this type of ‘expert’ authority in the debate on AI. Thus, while arguing for the need to examine companies’ virtual demonstrations, we also encourage other researchers studying this phenomenon to consider how AI technologies are advertised, especially regarding the performative aspect of demonstrations and the types of security perceptions that these performances can reinforce or contribute to.
Data collection and analysis
Our analysis is based on audiovisual documents available via open-access sources. We collected videos featuring demonstrations of TITAN and Lattice via Palantir’s and Anduril’s official websites and social media channels.Footnote 64 We conducted a thematic visual analysis following an interpretive research design, where we focused on ‘meanings and meaning-making practices of actors in a given setting’ rather than arriving at generalised statements based on two companies’ demonstrations.Footnote 65 For each video we assessed the following guiding questions: what is being demonstrated, and how? Who appears in the video? How are humans (if there are any) portrayed? How are software and hardware elements portrayed? How is the environment visualised? How is war visualised? Does the video mention adversaries, and if yes, who and how?
By responding to these questions, we observed the objects, actors, and practices depicted in the virtual environment created in the demonstrations.Footnote 66 This approach is appropriate for our analysis because we do not focus on the quantitative aspect (e.g. how many times something appears in the videos). Rather, we are interested in broad themes revealed by our observations. We therefore created a ‘recursive and iterative back-and-forth’ between our theoretical assumptions and the empirical analysis.Footnote 67 To complement our visual thematic analysis, we collected videos of other systems promoted by Palantir and Anduril.Footnote 68 Our analysis of these videos followed the same process, as we aimed to uncover common themes between demonstrations of various products.
Moreover, to provide context for the videos, we collected official media reports, opinion pieces, and interviews written or given by these companies’ representatives. Alex Karp and Palmer Luckey, the respective founders of Palantir and Anduril, actively participate in public debates on military uses of AI. Representatives of both companies often attend exhibitions and forums, setting up booths to visually demonstrate and sell their products. We therefore also refer to reports from such events to analyse Palantir’s and Anduril’s demonstrations within these settings. While we treat virtual demonstrations as primary sources of our analysis, these other documents provide complementary discourses which are helpful to make sense of the perceptions shaped via virtual military demonstrations.
Palantir’s TITAN
Palantir Technologies is a US-based public company specialising in software platforms and supplying its products to various institutions and companies across areas such as health, financial services, and commerce. Named after the future-seeing stones featured in The Lord of the Rings mythology, the company also provides surveillance and intelligence technologies to various agencies of the US government and police departments.Footnote 69 In the sphere of defence, Palantir has held contracts with the US and UK armed forces for many years, and since Russia’s full-scale invasion of Ukraine in February 2022, the company has been particularly active in supplying AI-based software tools to the Ukrainian armed forces.Footnote 70 In 2024, the firm’s co-founders Peter Thiel and Alex Karp signed a strategic partnership with Israel’s Defence Ministry, without revealing the exact technologies that are part of the agreement.Footnote 71
In 2022, the US Army awarded Palantir and RTX Corporation $36 million each to build prototypes of their respective plans for the TITAN intelligence ground system.Footnote 72 Subsequently, Palantir was selected as the provider for the system, with a team involving several partners: Anduril Industries, Northrop Grumman, L3Harris, Pacific Defense, Sierra Nevada Corporation, Strategic Technology Consulting, and World Wide Technology. In July 2024, the first prototype was delivered to the US Army’s Joint Base Lewis–McChord. As the Army’s website states, the main driver of this project is ‘the need for a next generation intelligence, surveillance and reconnaissance system that rapidly processes sensor data … to provide real-time intelligence support for targeting and situational awareness’, as well as to enhance the capability ‘to support multi-domain operations’.Footnote 73
In Palantir’s main video advertising TITAN, the platform appears as a black/grey ground vehicle resembling a truck, visualised in front of an abstract black background.Footnote 74 Much of the video is dedicated to a technical expert, Palantir Forward Deployed Engineer Claire Reimer, explaining and describing TITAN’s features, accompanied by digital animations. For instance, the claim that the system ‘delivers AI at the edge that can be tailored to the mission ensuring soldiers have the latest and most relevant models wherever the mission takes them even as the threat landscape evolves’Footnote 75 is illustrated with animations and shots of various terrains: desert, arboreal, tundra, and urban. These are all empty, without any humans or buildings except for the urban environment, which features destroyed buildings and roads. Similarly, when Reimer says that TITAN ‘has access to and can manage data from space, high altitude, aerial and terrestrial sensors’,Footnote 76 the audience can see shots of a desert-like environment, with only nature and no humans present. Through the omissions of both combatants and civilians, warfare is portrayed as something that can be controlled and classified into numerical categories, and analysed via the algorithms integrated into TITAN.
Viewers also see the TITAN vehicle in front of a black background, surrounded by white interconnected dots, as Reimer says that what sets TITAN apart is ‘the decisive advantage that the software-enabled capabilities will give to soldiers’, such as ‘faster, deeper sensing’ and ‘more accurate targeting enabled by cutting-edge AI and machine learning’.Footnote 77 Presumably, the white network of dots surrounding TITAN is a visual representation of the AI and ML capabilities. This visual strategy obscures the experience of warfare as an inherently chaotic, complex, fast-changing, and deeply harmful human practice. War is deliberately represented as a clean, controllable, and precise business, in which humans and their experiences are moved to the background.
Around its halfway point, the video shifts to a software demonstration of how the backend of the system would function ‘in action’. This demo shows what happens on the user’s end, especially the interface, which allows configuration as well as target identification, tracking, and selection. It displays screens with running data, maps, and satellite imagery. Viewers can see tabs such as ‘overview’, ‘data connections’, ‘power management’, ‘hardware management’, and ‘software management’ and the cursor switching between them, scrolling through different pages, and opening various other tabs. Reimer’s voice, in the meantime, contextualises the images, noting that ‘the system uses AI to reduce the sensor to shooter timeline by automatically refining tracks and target locations’ and that the data fused from multiple sources ‘creates more complete, higher confidence targets for users to review’.Footnote 78
Overall, the physical, material appearance of the TITAN vehicle does not feature as much as the software part. This corresponds to how the system is described by Palantir’s representatives in other contexts. Chief Technology Officer Shyam Sankar said that the capabilities of this vehicle were ‘derived and designed around the software’, emphasising the AI and software side, not the hardware.Footnote 79 In another interview, he noted that while some may think of TITAN as a ‘ground station on wheels, we [Palantir] think of it as the first AI-defined vehicle’, adding, ‘If TESLA made your car software design, we’re making your weapon system AI-defined’.Footnote 80 Palantir’s booth at the 2023 Consumer Electronics Show, where visitors could find a dark-grey demo version of TITAN, prominently featured the slogan: ‘the future of hardware is software’.Footnote 81
These demonstrations also reflect the broader discourse promoted by Palantir: that current armed conflicts show the superiority of data and software over hardware. In an op-ed for the Washington Post, Karp and Zamiska wrote: ‘Now software is at the helm, with hardware – the drones in Ukraine and elsewhere – increasingly serving as the means by which the recommendations of AI are carried out’.Footnote 82 Karp also often connects the importance of developing software to broader political statements, especially in relation to what he perceives to be the main foreign policy objectives of the US and its allies. Writing for the New York Times in 2023, he argued that ‘the ability of free and democratic societies to prevail requires something more than moral appeal. It requires hard power, and hard power in this century will be built on software.’Footnote 83 Meanwhile, Sankar mentioned that ‘software is the most important weapon system’, a slogan that also appears on Palantir’s website page advertising its Gotham platform.Footnote 84
Palantir has also been advertising its other decision-support systems, such as Gotham, Gaia, and Artificial Intelligence Platform (AIP).Footnote 85 Similarly to the virtual demonstration of TITAN, the videos promoting these systems typically include a simulation of the software, screens with running data, maps, satellite imagery, and interfaces resembling generative AI software or chatbots, such as OpenAI’s ChatGPT. In addition to this emphasis on software, there is a clear focus on the perspective of the user(s) of these systems, i.e. the humans who would employ them as part of military operations. Everything is advertised as making the various steps of targeting decision-making, especially intelligence analysis, more efficient, comprehensive, and well-organised, and thereby reducing security threats.
One video demonstrating Gotham, for instance, features a fictional scenario of potential tensions in the South China Sea, where the US and its allies use Palantir software to ‘see the full picture and make tactical, operational, and strategic decisions’.Footnote 86 In the scenario, a military incident or escalation between China and the US is avoided, with Palantir’s representative claiming that the technology helps ‘to make decisions at speed, and in the process, make the world a safer place’.Footnote 87 Meanwhile, the Gotham: Europa demonstration begins with short videos associated with various threats (e.g. a fire, destroyed buildings, crime, police troops, and lights on police cars) and claims that the software is ‘built to deal with the chaotic nature of the modern threat landscape’.Footnote 88 Such examples support and reinforce narratives of tech solutionism that portray AI as a ‘fix’ to solve broader societal issues, whether in the sphere of war or law enforcement.Footnote 89 These visual narratives even promote the idea that AI and ML adoption by the military is necessary to safeguard Western democracies. This is, for instance, clearly illustrated in a short video of TITAN released in April 2025, which mimics the trailer of a film by featuring dramatic music, images of military hardware without any human presence, and titles such as ‘TITAN enables the warfighter to defend the West’.Footnote 90
Further, Palantir’s AI-based decision-support systems are portrayed as isolated from contexts of actual battlefields – a common theme in visual representations of AI in the military domain.Footnote 91 The realities of warfare, which is a complex and messy phenomenon, are obscured with digitally enhanced imagery which is not necessarily unrealistic (i.e. it does not recall science fiction), but which is abstract. The difficulties of military decision-making on the ground are overshadowed by visuals of order, organisation, efficiency, and ‘accuracy’.Footnote 92 The demonstrations represent targets as numbers and symbols, a collection of data in the backend of a system where the user clicks on some icons in a simple and streamlined process. Potential errors or malfunctions, as well as humans and infrastructure, whether civilian or military, affected by the targeting process are not visually (or discursively) acknowledged. As aptly reported by journalist Caroline Haskins describing Palantir’s booth at the 2024 AI Expo for National Competitiveness in the US:
They also used countless euphemisms for bombing and death. The woman described how Palantir’s Gaia map tool lets users ‘nominate targets of interest’ for ‘the target nomination process’. She meant it helps people choose which places get bombed … So, Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment.Footnote 93
In sum, the virtual demonstration of TITAN, as well as Palantir’s other demonstrations and broader discourses, contributes to reinforcing the perception that AI- and ML-based decision-support systems would make war more clean, controllable, and precise, and thereby give a ‘decisive advantage’ to the US as part of asserting its dominance on the world stage.Footnote 94 By featuring technical experts such as engineers in the demonstrations, Palantir claims epistemic authority, knowledge, and expertise over future wars and how to win them. Moreover, Palantir’s virtual demonstrations of military decision-support systems reinforce the company’s claim to moral authority and the contention that the pursuit of superiority and deterrence through military applications of AI is the ethically right thing to do because it is the key to stability and peace. As Karp claimed, ‘We are the peace activists’, referring to Palantir and other producers of military applications of AI.Footnote 95
Anduril’s Lattice
Our second illustration assesses Anduril’s Lattice program. Palmer Luckey, who sold his virtual reality (VR) company Oculus to Facebook for $2 billion in 2014, founded the defence tech startup in 2017. After initially selling surveillance technology – a combination of VR and AI software programs and hardware gear – to Donald Trump’s Department of Homeland Security with the objective of constructing a ‘virtual border wall’ on the southern US border,Footnote 96 Anduril shifted its focus to the US Department of Defense (DoD) to sell military products. The defence tech startup has since been chosen by the US Special Operations Command as its Systems Integration Partner, and was also selected to build the US Air Force and Navy autonomous fighter jet, the so-called Collaborative Combat Aircraft, or CCA.Footnote 97 It also supplies several uncrewed and autonomous systems to the US military and its allies.Footnote 98
But Anduril’s flagship product is its software. The digital border wall constructed by Luckey and his team was heralded as a breakthrough by border guards and the US government, because it not only provided hardware such as autonomous surveillance drones, VR headsets, and sentry towers with an intricate ecosystem of sensors, but also a software ‘system of systems’ that could gather, process, and analyse all of the available data.Footnote 99 This software program is called Lattice, which, according to Anduril, is ‘an open software platform capable of being used for a variety of missions and industries’ in both the commercial and military domain, ‘designed to be sensor, network, and system agnostic’. It ‘takes data from disparate and distributed sensors, feeds, and systems and moves this data into a single integration layer’, which is then processed through AI and ML techniques that ‘filter high-value information to users’.Footnote 100
Today, Lattice for Mission Autonomy is the operating system for Anduril’s military products. It serves as an ‘AI-enabled software integration and network layer across a variety of legacy systems in use by our customers [which] now automates hundreds of deployed robotic systems’.Footnote 101 On the Anduril website, Lattice is promoted as a useful software product for ‘securing land and maritime borders’, ‘inspecting and securing critical infrastructure’, ‘detecting and responding to wildfires’, and ‘search and rescue’.Footnote 102 Its most controversial application, however, is as a military decision-support system that is part of the US DoD Joint All-Domain Command and Control (JADC2).Footnote 103 Described as ‘simple, scalable, [and] extensible’, Lattice is said to be the crucial operating system to keep up with ‘one of the most important modernization priorities for the DoD to confront challenges posed by strategic competition’ and promises to leverage ‘machine intelligence to accelerate the closing of complex kill chains’.Footnote 104 Anduril has produced several audiovisual, digital spectacles that showcase the professed capacities of Lattice, which, we argue, portray practices of algorithmic warfare as a strategic and moral imperative for Western democracies.
The main demonstration showcasing Lattice for Mission Autonomy is an eight-minute-long YouTube video, uploaded by the Anduril Industries channel and shared on social media and their own website.Footnote 105 By way of introduction, Christian Brose, Anduril’s Chief Strategy Officer (CSO), warns the viewer that ‘we’ are ‘losing our ability to deter great power conflict’, because ‘our adversaries’ are building large numbers of advanced weaponry that can ‘find, target, and destroy our traditional forces’. While he is talking, the video shows aerial shots of empty high seas, rocky landscapes, and cloudy skies. The stern-looking CSO asserts that the solution is simple: ‘our goal must be affordable mass: the ability to produce, operate, and sustain massive amounts of lower-cost, more-intelligent, more attritable military systems’. He stresses that ‘this is all about autonomy, and that will be delivered more than anything else by software’.Footnote 106 The camera then pans to a laptop, presumably running the Lattice software program, where the viewer sees a digitally augmented map of the UK, with red and blue pins in the form of warplanes and tanks, manoeuvring in south-west England. Brian Schimpf, the company’s CEO, explains the program’s capabilities of ‘sensor fusion, target identification, intelligent networking and command and control’, all integrated into one. While the video shows interfaces of Lattice tracking and identifying various individuals and vehicles with mid-wave infrared cameras from a bird’s-eye perspective, Schimpf boasts about testing ‘the concept’ earlier on other missions ‘like border security and air defence’. Now, however, ‘Lattice is automating the operations of hundreds of robotic systems deployed in tactical environments around the world’.Footnote 107
But Schimpf claims this is only the beginning. Over the past four years, Anduril developed an update to Lattice, called Lattice for Mission Autonomy, which enables ‘human operators to actually interact and fight with teams of robotic vehicles to conduct dynamic and distributed operations in a highly contested environment’. Viewers see images of soldiers carrying and mounting uncrewed aerial vehicles, preparing them for their flights, while close-ups of the program’s interface show multiple lines and dots flickering in all sorts of directions on the screens. Some seconds later, the uncrewed vehicles have taken off and are manoeuvring above a barren wasteland. Schimpf’s voice-over points out that Lattice for Mission Autonomy is essential to ‘make sense of the battlespace: identify threats and objects of interest, enhance survivability, orchestrate complex manoeuvres, and synchronise the delivery of effects’.Footnote 108 As he is pointing out these crucial capabilities, the video again shows the Lattice interface, with tabs and pop-up windows showing up, and small red and blue dots, representative of missiles and airplanes, briefly entering on screen.
The second part of the video features Kevin ‘Shaka’ Chlan, a former US Navy fighter pilot, who works for Anduril as a business developer. The demonstration zooms in on the CCA and autonomous vehicles, which can be powered and supported by Lattice for Mission Autonomy. ‘In military operations’, Chlan starts off, ‘there is no extra credit for autonomy. Outcomes matter most.’ As the video shows a black-and-white computer render of aircraft connected by a white dotted line, flying between unpopulated mountain peaks and hunting a red-dotted target, Chlan argues that a ‘team’ of crewed aircraft and an autonomous CCA can provide ‘affordable, distributed mass’ to compete with ‘near-peer adversaries’. But, according to Chlan, this is only workable if ‘the complexity of such operations [is driven] down for the human operators’. The ideal solution is clear. Lattice for Mission Autonomy brings together ‘the platforms, the piloting, and the payloads, so the groups of robotic systems can deliver mission outcomes autonomously under human supervision’.Footnote 109 What this ‘human supervision’ precisely entails is – apart from a military officer behind a desk working on a laptop – not visualised in the video.
Towards the end of the demonstration, viewers see more computer renders of different operator teams working in close cooperation with augmented and virtual reality projections. Meanwhile, Chlan contends that Lattice ‘fundamentally changes how humans and machines will work together’, as it ‘provides an adaptive digital platform for warfighters to engage with autonomous systems’ across the entire mission cycle. Through Chlan’s presence and interventions, Anduril aims to reflect military experience and know-how. For the grand finale of the demonstration, the droning electro soundtrack picks up steam and Brose’s voice-over returns, while small groups of uncrewed vehicles manoeuvre across the familiar empty seas, skies, and landscapes. Anduril is driving ‘a major shift in defence capability’, Brose emphasises: a shift ‘from a manpower intensive, hardware-defined military, to one that is software-centric and enhanced by mission autonomy’. According to Brose, ‘this change is not optional, it is essential. It could mean the difference in the future between deterrence and conflict, winning and losing, and we don’t have much time.’Footnote 110 The final shot of the demonstration shows Lattice’s interface again, where the digitally enhanced map of the UK zooms out to reveal a view of the entire globe.
The virtual demonstration illustrates Anduril’s efforts to depict algorithmically mediated military violence as a technical possibility that is militarily efficient and necessary to ensure the survival of the US and its democratic allies in the coming years. The pop-up windows, tables, calculations, lines and dots, brought together on screen within a neatly organised user interface, strengthen perceptions of control and calculability. Reinforced by the employees’ voice-overs, evoking a sense of technical expertise and military experience, the virtual demonstration of Lattice represents the use of algorithmically mediated violence as a precise and limited action. But the true realities and experiences of war are absent.
The Lattice demonstration, like TITAN’s, does not show actual battlespaces or incoming threats. It does not show how the software system would ‘enhance survivability’ of the uncrewed vehicles, or how complex manoeuvres are conducted. In the video, the battlefield is obfuscated and portrayed as an empty space, a virtual environment in which militaries can go about their business in whatever way they like, which contributes to the perception of war as a clean, controllable, and precise endeavour that has no unexpected twists and turns, no mistakes or misjudgements. Moreover, in Lattice’s virtual demonstration, targets are not discursively or visually acknowledged. There seem to be no harmful consequences to infrastructures or individuals, whether military or civilian. However, this is not representative of contemporary warfare. As the current armed conflicts in Ukraine and Gaza demonstrate, fighting occurs in densely populated areas, with civilians and civilian infrastructures closely entangled with military targets. The perception of waging combat coordinated by Lattice (or TITAN) as clear and clinical, which is also reinforced in other Anduril demonstrations,Footnote 111 is thus distant from the lived experiences and realities of warfare.
Following Brose’s introductory statements, viewers would expect Lattice foremost to provide a solution to reinstate American deterrence. The core message of the virtual demonstration, however, centres on the idea that ‘massive amounts of lower-cost, more-intelligent, more attritable military systems’ will provide a clear battlefield advantage, making sure that Anduril’s customers ‘never face a fair fight’.Footnote 112 The virtual demonstration of Lattice’s capabilities shows its alleged effectiveness in a limited and precise fight. It remains unacknowledged, both discursively and visually, how ‘affordable mass’ would deter ‘near-peer adversaries’. Anduril’s demonstration constructs the perception that the next war – allegedly right around the corner – should be fought and won, rather than deterred.
This seems to be a broader narrative within the startup. Anduril, known in Tolkien’s mythology as ‘the Flame of the West’, published an elaborate mission statement in 2022 titled ‘Rebooting the Arsenal of Democracy’, a not-so-subtle nod to Roosevelt’s famous speech in a 1940 radio broadcast.Footnote 113 The 50-page booklet, which begins ominously with the question ‘Xi Jinping believes he can out-innovate American defense. Is he right?’, explains why software is the strategic key to enable ‘affordable mass’, and how the US government should radically alter its defence procurement procedure to allow innovation in the defence sector to flourish.Footnote 114 It features several telling visuals, such as a computer screen displaying the words ‘ > Hello, War!’. These visuals reinforce the perception that Western democracies are under immediate threat, presumably from China. Anduril’s AI- and ML-enabled decision-support systems are portrayed as the strategic solution to win the coming wars, thereby framing investments in these capabilities as a moral responsibility for Western governments.
Luckey has mentioned in several public interventions that he ‘specifically got into this business because [he] wanted to change the way that military buys technology’.Footnote 115 When a reporter asked him whether he was ‘building products that the government does not even know it needs yet’, Luckey answered:
Very often. It’s pretty rare that we work on something that is consensus in the government, where there’s widespread belief that what we’re doing is the right solution to the problem. Often we’re building things that they’ve written off as not feasible or not viable. There was a lot of scepticism about applying artificial intelligence to defence … a lot of scepticism about artificial intelligence in general. ChatGPT was one of the most helpful technologies to us because it helped convince people that AI can do things they didn’t believe computers could do.Footnote 116
Such statements demonstrate Anduril’s objective and ability to influence the kind of products the US DoD invests in, and thus also how decision-makers perceive algorithmic warfare as a strategic and moral imperative. Anduril’s focus on low-cost, high-intelligence, attritable systems, and on machine speed and autonomy in the operating software systems such as Lattice, seems to have convinced (at least part of) the US defence tech establishment, as several successful funding rounds and lucrative government contracts demonstrate.Footnote 117 But this means that critical questions on why the US and its allies should have ‘cost-effective’ means of killing and targeting in the first place will move further into the background.
Anduril’s virtual demonstrations of Lattice, like Palantir’s virtual demonstrations of TITAN, depict contemporary warfighting foremost as a bloodless ‘virtuous’ battle between uncrewed vehicles and unidentified red dots on a software program’s interface. In reality, combat still involves people. The detrimental consequences of machine-speed killing and ‘cost-efficiency’ in the targeting process are clearly visible in Gaza, where over 50,000 people have been killed by Israel’s indiscriminate bombing campaign in less than two years.Footnote 118 The surge in targets – whether legitimate or in compliance with the rules of engagement of the mission or not – leads to a dramatic increase in civilian deaths and destroyed infrastructure. Russia’s ongoing invasion of Ukraine equally epitomises the catastrophic effects of war for the populations and lands of the involved parties. By disconnecting the software programs that will increasingly oversee and coordinate contemporary military operations from the devastating consequences of the actions they propose, the videos demonstrating TITAN and Lattice represent algorithmically mediated military violence as a strategic and ethical necessity, but also as a phenomenon detached from the lived experiences and realities of war – a disturbing evolution that warrants further public and academic reflection.
Conclusion
Demonstrations of military capabilities have been a long-standing feature of international politics. Apart from their direct strategic and commercial effects, these public performances also shape how war and security are understood and perceived. The recent embrace of algorithmically mediated violence by states, militaries, and companies is a novel iteration of humanity’s long-standing technological fetishisation of the virtual.Footnote 119 This evolution necessitates critical investigations of how virtual demonstrations of military decision-support systems – enabled by advances in AI and ML – construct and entrench certain perceptions of security and war. It is crucial to investigate the normative roles of private technology companies, given their rising influence on defence innovation in the sphere of AI and ML.
In this article, we argued that virtual demonstrations conducted by commercial actors are international practices that allow tech companies to claim epistemic authority on the future of warfare by emphasising their technical and military expertise. They allow them to entrench knowledge production regimes by portraying ‘virtual war’ as ‘virtuous war’ – a clean, precise, and efficient enterprise that obscures the realities of warfare while reinforcing the political message that the pursuit of AI is the strategically and ethically right thing to do for Western democracies. Focusing on the empirical illustrations of demonstrations of Palantir’s TITAN and Anduril’s Lattice, we assessed how private companies are becoming increasingly influential in shaping these security perceptions.
Our analysis centred on four main observations. First, the virtual demonstrations we analysed, together with their accompanying discourses, obscure the lived realities and experiences of war and violence, representing algorithmically mediated combat as an ethically unproblematic business. Second, the demonstrations emphasise the primacy of software over hardware in enabling militaries to win future wars. In this way, they enable companies’ claims to expertise and knowledge about the future of war and how it should be conducted. Third, there is a discrepancy between the virtual demonstration, which represents war as something to be fought and won, and the accompanying discourse, which represents war as something to be deterred. Finally, we see an overarching narrative in which these companies promote themselves and their AI- and ML-based decision-support systems as the silver bullet for the survival of Western democracies, presenting it as a moral imperative in the global struggle against competitors. Based on these observations, we argued in this article that military violence, enabled by AI and ML, is framed by the virtual and discursive representations of comprehensive military decision-support systems as a clean, controllable, and precise business, strategically and ethically necessary to ensure the enduring primacy of the US and its Western allies in the international system.
The increasingly prominent role of private tech companies, their virtual demonstrations, and their claims to epistemic authority, have important social and political consequences. As more citizens of ‘Western’ democracies observe such virtual demonstrations – which is not unlikely given their widespread circulation on social media – the thresholds decision-makers face before engaging in violent military practices might be severely lowered. Given the broad (online) circulation of these demonstrations and the narratives constructed by technology companies, the visual depiction of war as a virtuous business is also likely to numb a significant part of the public when it comes to the horrifying consequences of organised violence. As Der Derian notes: ‘unlike other forms of warfare, virtuous war has an unsurpassed power to commute death, to keep it out of sight, out of mind. In simulated preparations and virtual executions of war, there is a high risk that one learns how to kill but not to take responsibility for it. In virtuous war we now face not just the confusion but the pixillation of war and game on the same screen.’Footnote 120 Future research could further uncover the affective responses of different audiences to these virtual demonstrations.
Corporate actors are becoming increasingly influential in the military domain. These companies present themselves as expert actors in national security and defence. The demonstrations analysed in this article feature individuals such as engineers or former military personnel now working for Palantir or Anduril, reinforcing the companies’ claims to epistemic authority in the sphere of war. Displaying the involvement of former US Navy fighter pilots in the virtual demonstrations can contribute to the perception that the companies know what the military needs. However, the involvement of individuals such as engineers also reveals a growing entanglement between the technology industry, with its economic and commercial interests, and the military domain and warfare. As part of this increased ‘platformisation’ of warfare, more research is needed to unpack the relationship between companies’ commercial interests in developing and supplying AI decision-support platforms and their broader political role, including their claims to authority and expertise in the military domain.Footnote 121 The increased involvement of the technology industry in defence and security matters raises critical questions with regard to democratic oversight, transparency, and accountability, which need to be scrutinised further.
Finally, there are also severe ethical implications to these virtual demonstrations performed by private tech companies. Depictions of tech solutionism, the categorisation of the complexity of warfare into codes and numbers, and the primacy of software, among others, obscure the impact of warfare on humans, especially civilians. Such reflections have been explored in scholarship and analysis surrounding drone warfare, for example.Footnote 122 However, the increased scope and scale of integrating AI and ML technologies into military targeting processes, including as part of decision-support systems, are raising additional ethical, humanitarian, and security concerns. Reported uses of AI-based decision-support systems, particularly as part of Israel’s indiscriminate bombing campaign in Gaza, highlight that AI and ML technologies are part of the realities of warfare affecting civilians who bear the brunt of death and destruction.Footnote 123 By reproducing visions of algorithmic warfare where AI- and ML-based decision-support systems are morally acceptable technical solutions that empower military personnel in their tactical decision-making, companies such as Palantir and Anduril cast a shadow not only on the suffering and destruction, but also on the human aspects of war.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/eis.2025.10015.
Acknowledgements
The authors want to thank the members of the International Politics research group at the University of Antwerp, and specifically the PLATFORM WARS team for their valuable suggestions on earlier drafts of this article. We are also grateful for the comments we received during the presentation of our manuscript at the 2024 EISA PEC conference in Lille. A final word of thanks goes to the two anonymous reviewers and the editors of the European Journal of International Security for their generous and constructive feedback, which helped develop the presented argument more clearly.
Funding
Anna Nadibaidze’s contribution to this article was supported by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852123, the AutoNorms project).
Disclosure statement
No potential conflict of interest was reported by the author(s).
Robin Vanderborght is a doctoral researcher at the University of Antwerp, Belgium. His PhD research focuses on the concept of strategic stability and how it is made meaningful through different international practices. His research domains and interests are critical security studies, science & technology studies, visuality and pop culture in IR. His work has been published in journals such as Critical Studies on Security and Peace Review.
Dr Anna Nadibaidze is a postdoctoral researcher at the Center for War Studies, University of Southern Denmark, working on the Independent Research Fund Denmark-funded HuMach project as well as the European Research Council-funded AutoNorms and AutoPractices projects. She holds a PhD in Political Science from the University of Southern Denmark. Her research explores AI technologies in international security and global governance of AI in the military domain. Her work has been published in journals such as Contemporary Security Policy, Ethics and Information Technology, and Journal of International Relations and Development.