Demonstrating the Future of War: Tech Companies and Claims of Epistemic Authority on Military AI
[Dr Robin Vanderborght is a researcher in international politics at the University of Antwerp, Belgium
Dr Anna Nadibaidze is a postdoctoral researcher at the Center for War Studies, University of Southern Denmark]
On April 8, 2025, Palmer Luckey, the founder of defence technology company Anduril Industries, steps onto the stage of TED2025 to deliver a talk on the military use of artificial intelligence (AI). Luckey asks the audience to imagine a full-scale Chinese surprise attack on Taiwan. He paints a bleak picture: ballistic missiles rain down on the island while large numbers of amphibious assault ships and aircraft carriers are deployed, cyber-attacks cripple Taiwan’s digital and physical infrastructure, and long-range missiles shred defences. In the fictional scenario conjured up by the founder of one of the major defence startups focused on military autonomy and weaponized AI, the US attempts to respond but falls short because it cannot field capabilities in sufficient numbers.
Luckey continues his presentation by imagining an alternative vision of a future war between China and Taiwan, one in which his startup’s products play a central role. He envisions fleets of AI-driven systems, coordinated through Anduril’s Lattice platform—AI-enabled software that allegedly allows its users to “deploy millions of weapons without risking millions of lives”—fending off a Chinese invasion. He suggests that “by deploying autonomous systems at scale […] we prove to our adversaries that we have the capacity to win. That is how we reclaim our deterrence”.
This is not merely a rhetorical strategy or a form of advertisement for Anduril’s products. As we demonstrate in our research on Anduril and Palantir (another influential US-based tech corporation), defence tech companies employ virtual demonstrations—digitally constructed military product presentations—to portray themselves as deeply knowledgeable on, and familiar with, the role of AI in the future of war. Through neatly visualised, digitally enhanced spectacles, often accompanied by scenarios such as the one narrated by Luckey, companies construct a virtual environment that reflects their own vision of AI in war.
It is a vision, as others have pointedly described elsewhere, of ‘utopian war’, where ubiquitous data sources provide omniscient comprehension of the adversary’s motives, moves, and objectives, and software platforms integrate algorithmic decision-making and large language models to provide clean and efficient courses of action for their users in seconds. We argue that this vision, promoted by tech companies’ virtual demonstrations of AI-based decision-support systems (AI DSS) used in military targeting, involves political, legal, ethical, and societal implications that deserve to be explored and debated, especially given the increasing influence of tech companies in defence in both the US and the European contexts.
Tech Companies Claiming Epistemic Authority via Virtual Demonstrations
Today, defence tech companies regularly produce and circulate sleek virtual military demonstrations of their AI-based products. Anduril and Palantir are perhaps the most visible in the online environment, due to their marketing campaigns on LinkedIn and other social media platforms. But other military-oriented startups follow in their footsteps. US-based companies working on weaponized applications of AI, such as Shield AI, Saronic Technologies, and Rebellion Defense, but also European startups like Helsing, Tekever, or Quantum Systems, produce comparable promotional material. These videos and images feature similar glossy visuals, droning electro soundtracks, and experts—both technical and military—explaining or performing combat practices with weapons systems or software platforms such as AI DSS employed as part of military targeting.
We identify two mechanisms employed by defence tech companies when they claim epistemic authority—‘truthful’ knowledge and expertise—over AI in the future of war. First, tech companies’ virtual demonstrations portray algorithmic warfare as a strategic imperative for Western militaries. The visual narratives and discursive contextualisation that these companies—and the experts featured in the videos—develop suggest that the only way to defeat adversaries in the next war will be quick and precise military decision-making, in which masses of expendable, autonomous weapons systems are operated by command-and-control platforms that integrate machine intelligence, speed, and accuracy.
For example, Palmer Luckey claimed in a recent op-ed that he “saw the future of warfare” when visiting Ukraine just after Russia’s full-scale invasion. He claimed to have experienced “in practice” how militaries will win the wars of the future, asserting that “success relies on the ability to apply new technologies in the largest numbers”. Via demonstrations of their AI-based products, startups present what Elke Schwarz describes as “a fantasy of omniscience and omnipotence”: the possibility of obtaining perfect knowledge and combatting the so-called ‘fog of war’ via extensive data analysis. The integration of AI in military operations is thereby represented as a strategic necessity for the security—and ultimately even the very survival—of the US and its allies.
Second, startups’ virtual demonstrations and narratives frame the integration of AI systems as a moral imperative for the US and its allies to lead in the global AI competition against authoritarian states. This framing is explicit in Luckey’s TED talk, where he states that “the ethical implications of AI in warfare are serious. But here’s the truth. If the United States doesn’t lead in this space, authoritarian regimes will. And they won’t be concerned with our ethical norms. AI enhances decision-making. It increases precision. It reduces collateral damage. Hopefully, it can eliminate some conflicts altogether”.
Not only through his words, but also through his whole performance—acting in front of the audience as an eccentric, no-nonsense arms dealer—Luckey aims to radiate prophetic credibility and privileged knowledge of the future of warfare. He not only suggests that winning the next war depends on applying AI technologies in the military in the most far-reaching way, but also presents this as the morally right and responsible thing to do. Similarly, Alexander Karp, the CEO of Palantir, claimed in a conference talk that “we are the peace activists”, referring to Palantir and other producers of military AI applications, and suggesting that they hold moral authority because they pursue military AI for the sake of deterrence and stability.
The Implications of Misrepresenting AI in War
The virtual demonstrations of AI DSS, such as Anduril’s Lattice, do more than reproduce defence tech companies’ claims to expertise on AI in war. Importantly, they also misrepresent the complexities and realities of warfare by promoting visions of sanitised, precise, and bloodless violence. These videos do not include civilians or the destruction of civilian objects, for example. Waging war through AI-enabled platforms, weapons systems, and AI DSS is portrayed as a clean and efficient practice, rendered seemingly objective by the datafication and mechanisation that algorithmic decision-making promises. Leaders of companies such as Palantir and Anduril are eager to point to the strategic and moral obligations of the US and its allies to innovate and to integrate AI capabilities (supplied by tech companies) into military practices.
Elke Schwarz argued on these pages that those able to claim “secret knowledge about humanity’s inevitable future” wield substantial political power and influence. The increasing efforts of defence tech companies to position themselves as possessing superior knowledge of, and experience with, the future conduct of war, and the success they appear to have with government and military organisations—not only in the US, but also across Europe in the UK, the Netherlands, Denmark, and Belgium—lead to several political, legal, ethical, and security implications.
First, these virtual military demonstrations seek to convince decision-makers to invest further in military ‘blitzscaling’ strategies and technologies, in which speed and experimentation are preferred over safety and usefulness. Evidence from the field now shows that such ‘advanced technologies’ often fall short of expectations, resulting in critical security flaws and personnel frustration. Waging war is not as clean, efficient, and precise as it is sometimes made out to be. It is messy, deadly, and unpredictable. More data does not automatically lead to clearer perceptions of the battlefield or a better understanding of the adversary. For instance, Karp stated that the Israeli Defense Forces (IDF) integrate Palantir’s software capabilities into their targeting practices. Yet the mass destruction and genocidal violence in Gaza show that AI technologies have provided the IDF not with tools of precision to execute limited military operations, but rather with more potent means of surveillance and lethal violence.
Second, by supporting the narrative that data-driven military targeting is automatically more efficient, tech companies’ demonstrations have severe legal implications. As pointed out by legal experts (including in previous Opinio Juris posts), decisions related to international humanitarian law obligations, such as the principles of distinction, proportionality, and precaution, involve not only quantitative data analysis; they require the exercise of cognitive agency by humans. While the use of AI DSS in military targeting officially involves humans, several dynamics of human-machine interaction may affect the way these humans exercise their cognitive agency, such as automation bias or the offloading of judgement to AI DSS, especially in situations of intense speed and pressure to act. An increased use of AI DSS brings a higher risk that humans such as intelligence analysts, commanders, and operators will not exercise the level of judgement and agency required by the legal obligations to which States have committed.
Third, such limitations to the exercise of human agency also have ethical implications in a context of warfare, which involves destruction, death, and harm to civilians and civilian objects, among others. Reported uses of AI DSS by the IDF in Gaza, especially in 2023-2024, illustrate the potential for exacerbating the systematisation of violence and the lack of humanity ‘in the loop’, despite the presence of humans. Unsurprisingly, Anduril’s and Palantir’s virtual demonstrations of their AI DSS do not display or acknowledge such risks.
Finally, these companies often claim to develop AI-enabled military products in the name of defending democracy. In a recently published book, Alex Karp and Palantir employee Nicholas Zamiska suggest that a “combination of a pursuit of innovation with the objectives of the nation” will “safeguard the legitimacy of the democratic project itself”. Anduril’s objective is to “reboot the arsenal of democracy” by building the technology necessary to maintain technological primacy over the adversaries of the US and its allies. But these companies’ strategies risk severely weakening the very democratic project they are so adamant to defend, especially in the US context. By upsetting rigid military acquisition and procurement processes to spur ‘innovation’, cosying up to the increasingly autocratic Trump administration, and undermining transparency and public scrutiny, these companies challenge the core principles of democratic government and oversight.
With President Trump’s agenda of winning the ‘AI race’, tech companies specializing in defence AI are likely to gain further influence and use it to claim epistemic authority over AI in war, including via cinematic, Hollywood-style videos promoting AI DSS. But beyond the shiny demonstrations, the perceived necessity to integrate AI everywhere involves significant political, security, legal, and ethical implications, especially in armed conflict.
