From Infinite Scroll to Command and Control: What Big Tech’s Courtroom Reckoning Means for Military AI Governance
[Alexander Blanchard is Senior Researcher in the Governance of AI Programme at the Stockholm International Peace Research Institute (SIPRI), Sweden]
In recent weeks, there has been a good deal of commentary about military applications of artificial intelligence (AI), prompted by the US military’s public spat with the AI company Anthropic and the use of AI in its war on Iran. But another set of headlines also merits attention from those concerned with the global governance of military AI. Last month, juries in US courts found two of the most powerful tech companies – Meta and Google – legally responsible for the harms caused by their platforms to young people. These cases offer rare insight into the design choices made by companies that also provide software products for military use, and with them important lessons about the role of the tech industry in realising aspirations for the responsible governance of AI in the military domain. These include lessons about the practical challenges of accountability, as well as the need for governance debate to recognise that problematic human-machine interactions may be deliberately engineered.
The rise of the military-tech complex and the role of platform companies
Many states that see the military adoption of AI as a strategic priority are unable to develop such capabilities in-house due to a lack of capital and expertise. Increasingly, these states are turning to technology firms to provide data services and expertise. This is driving the emergence of the so-called military-tech complex – a network of close partnerships between armed forces, governments, and technology firms to integrate AI and data analytics into military operations.
The rise of the military-tech complex has transformed the defence industrial landscape significantly over the last few years, with new commercial relationships being formed and many new defence AI startups appearing on the scene. But a major centre for defence industrial work remains Silicon Valley, which has long been wooed by the defence establishment. This includes the large, globally dominant platform companies – Microsoft, Alphabet (Google), Amazon, and Meta (Facebook) – often referred to as ‘big tech’. Google, for instance, was associated early on with Project Maven, the US Department of Defense’s flagship AI-enabled targeting support system, providing tools to support the integration of machine learning processes into the military’s organisational practices; Meta made its Llama AI models available for defence applications in 2024, and recently announced a partnership with the defence neo-prime Anduril to provide virtual and augmented reality devices to the US armed forces.
What makes these platform companies significant is not just their long-term involvement in providing products and services to armed forces, but their infrastructural power: they often own and operate the foundational layers of hardware and cloud infrastructure that support cutting-edge applications of AI. This was recently illustrated by the Israeli military’s use of Microsoft’s cloud infrastructure to support its mass surveillance programme targeting Palestinians until, following journalistic scrutiny, the programme was moved to Amazon’s cloud infrastructure.
In the military AI governance debate there is uniform recognition that design choices made throughout the lifecycle of AI development and implementation impact the ability of armed forces to use these technologies in line with relevant legal and ethical frameworks. Recognising this, states have been keen to involve industry in governance efforts. Given the infrastructural power of platform companies, and the potentially diffuse use of their technologies in military settings, these companies will have a significant influence on whether the aspirations and obligations entailed by those frameworks are realised.
A big tobacco moment for big tech: from content to design
In the final week of March, we were granted a glimpse into how two of these companies make their design choices. In the span of just two days, juries in two separate US trials found Meta and Google legally responsible for the harms caused by their platforms to young people. In a case heard in Los Angeles (LA), the plaintiff argued that the social media sites Instagram (Meta) and YouTube (Google/Alphabet) had been intentionally designed with addictive features to get users hooked. The LA verdict came a day after a jury in New Mexico found Meta liable for the way its platforms endangered children and exposed them to sexually explicit material and contact with sexual predators.
The judgment in the LA case is something of a landmark because it represents a significant shift in how courts assess the responsibility of platform companies. Traditionally, it has been difficult to sue online platforms in the US over harmful content because of the protections provided by Section 230 of the Communications Decency Act, which shields them from liability for user-generated content by treating them as neutral intermediaries. The act was signed into law in 1996, and there have been calls to repeal it, with critics arguing it is poorly suited to an internet era dominated by big data and algorithmically-generated content.
However, by focusing on the design features of Instagram and YouTube rather than their content, the plaintiff in the LA trial was able to side-step the protections offered by Section 230. The argument the plaintiff’s lawyers made was that Instagram and YouTube are defective products. Features such as infinite scroll, algorithmic recommender systems, vanishing time-sensitive content, and autoplay were highlighted as deliberate mechanisms designed to keep users on these platforms for extended periods. It was alleged that the companies borrowed heavily from the behavioural and neurobiological techniques used by poker machines to get young people hooked and to drive advertising revenues. Meta’s internal communications compared the platform’s effects to pushing drugs and gambling, whilst an internal memo written by YouTube staff reportedly described “viewer addiction” as the goal. The jury accepted that these design choices encourage compulsive behaviour. It also upheld the claim of negligence: the companies knew that their products were harmful yet failed to warn users or mitigate those risks. Google and Meta have said they plan to appeal.
These verdicts are the first of their kind and could mark a ‘big tobacco moment’ for big tech (akin to the moment in the 1990s when public opinion turned against tobacco companies), with thousands more similar cases waiting to go to trial. They are part of a broader shift in public opinion about the role of technology companies in our lives, and they challenge a long-standing narrative that big tech is too big to regulate. As one of the lawyers for the plaintiff in the Los Angeles case put it: “accountability has arrived.”
Lessons for military AI governance: accountability and engineered dependency
Seemingly so far from the world of drones, bombs, and tanks, what can these two verdicts tell us about the significance of platform companies for the global governance of military AI? Two lessons particularly stand out.
The first is obvious but no less important for being so. It concerns the practical challenges of accountability. Accountability has emerged as a key principle across national and international governance initiatives, underscoring the importance of delineating clear lines of responsibility for the use of military AI systems. Central to achieving accountability is understanding how different decision-makers contribute to the development, implementation, and use of particular systems. This is because accountability is foremost a relation of answerability, involving an obligation to inform about and justify one’s conduct to an appropriate authority. Such a relation presupposes, amongst other things, a condition of interrogation: one actor must be exposed to the scrutiny of another, because accountability only to oneself is no accountability at all. Meeting this condition requires many things, including access to documentation about the development and procurement of AI systems and about the actors involved.
These trials are not the first time that evidence has come to light of Meta and Google concealing dubious activities rather than accounting for them, and they are unlikely to be the last. In 2024, The New York Times claimed that Google had spent 15 years creating a culture of concealment. Any accountability regime worth the name that results from military AI governance efforts ought to be based on a realistic understanding of the motivations of commercial actors and of how a sufficient degree of scrutiny can be achieved. This is particularly important since concealing knowledge of harmful practices in the military domain could have severe consequences. This is not the place to discuss the character such a regime should have, only to note that what has so far stymied efforts in that direction is a general belief in the broader governance debate that these companies are neutral intermediaries and that the digital technologies they provide are mere tools for channelling a state’s intent. Indeed, if there is a silver lining to the two court cases discussed above and the recent public spat between Anthropic and the US Department of War, it is that they mark the beginning of the end for this belief.
The second lesson concerns the way debate on the governance of military AI tends to psychologise the sort of issues that came up in the LA trial. In seeking to understand and explain the apparent dependency of humans on digital systems when interfacing with them, scholars and policymakers often reach for concepts like automation bias, cognitive offloading, and over-trust. Doubtless, this captures something about the very human attempt to apprehend a technology that, in the case of AI, has something more than a tool-like quality, including by falling back on modes of thought habituated through person-to-person interaction. But chalking up the shortcomings of human-machine interaction to the frailties of human cognition glosses over the role of the technology’s creators, including the fact that this dependency is, evidently, sometimes intentionally engineered. What the LA verdict underscores, therefore, is the need for a more nuanced account of the role of commercial product providers when it comes to the challenges of human-machine interaction in military settings.
Frictionless by design: the business of engagement
But how much can we extrapolate from two court cases concerned with two specific products, offered by just two companies that themselves provide a huge variety of services and products?
What is noteworthy is how much of the commentary around these trials identifies the addictive properties of platform products as structural characteristics. If the issue of algorithmic bias has taught us anything, it is that digital technologies are social artefacts. They are the products of human minds and human hands, and it is difficult, if not impossible, to dissociate them from the ambitions of their creators. A longstanding ambition of platform companies is to maximise user engagement: it is intrinsic to a business model that, at its core, is about capturing and monetising user attention. It has a cultural corollary in Silicon Valley’s preoccupation with the idea of friction. As Anna Wiener discusses in her memoir about her years working for a tech startup, ‘friction’ was the tech industry’s term for anything that impeded a user’s adoption or use of a product. Originally a design principle for making products easier to use, the elimination of friction morphed into something of a philosophy of life:
“The endgame was the same for everyone: Growth at any cost. Scale above all. […] A world of actionable metrics, in which developers would never stop optimizing and users would never stop looking at their screens. A world freed of decision-making, the unnecessary friction of human behaviour, where everything – whittled down to the fastest, simplest, sleekest version of itself – could be optimized, prioritized, monetized, and controlled.”
‘Optimisation’ and ‘prioritisation’, words long associated with internet search engines and ad management, will be recognisable to anyone who takes even a cursory interest in the use of algorithmic techniques in military targeting. Of course, much depends on how a product is provided, including how it is configured once supplied to a military organisation. But organisational theory tells us that, once a business alights on a successful set of practices, those practices become highly entrenched. There is good reason to think that not only the language of big tech, but also its approach to product design – the approach that led to last month’s verdict in LA – has carried over into the military setting.
In many ways, the two lessons described above are related. A condition of accountability is the capacity of governance frameworks to discern the traces of human decision-making in military AI systems. That requires a vocabulary that can speak to more than just the propensities of humans when interacting with these machines. The ongoing integration of AI into military targeting practices means these systems will increasingly shape how commanders see and interpret the battlefield, bringing a whole range of risks. When the conditions of their design remain opaque, and the aims of their developers go unexamined, accountability is not secured but displaced. It is imperative that governance debate moves beyond a focus on human operators to engage more directly with the socio-technical conditions of system design.
Photo attribution: “Responsible AI in the Military Domain – REAIM 2023” by Ministerie van Buitenlandse Zaken is licensed under CC BY-SA 2.0
