Do AI Decision Support Systems ‘Support’ Humans in Military Decision-Making on the Use of Force?
[Dr Anna Nadibaidze is a researcher for the European Research Council-funded AutoNorms and AutoPractices projects, based at the Center for War Studies, University of Southern Denmark]
The United Nations Summit of the Future, held in New York in September 2024, resulted in the adoption of the Pact for the Future. Among many other issues, the Pact includes a commitment to “advance with urgency” debates on lethal autonomous weapon systems within the framework of the respective UN Group of Governmental Experts (GGE). Discussions on autonomous weapon systems (AWS)—defined as weapon systems which, upon activation, select and engage targets without human intervention—have been ongoing at the UN for more than ten years.
In recent years, the GGE has made little substantial progress toward agreeing on possible new instruments or measures to address the challenges surrounding weapon systems integrating AI and autonomous technologies. Moreover, the dominant focus on AWS has overshadowed multiple other uses of AI technologies in the military domain.
Armed forces around the world are not only developing AI technologies as part of weapon systems such as drones or loitering munitions, but also integrating them into targeting decision-making processes in the form of AI-based decision support systems (AI DSS), which can be used to recognize patterns in substantial amounts of data, predict scenarios, or recommend possible courses of action to military commanders. The complex and multi-dimensional process of military targeting can integrate AI systems at several stages that directly or indirectly inform the use of force.
Reports about uses of AI DSS are emerging from conflicts around the globe, including the latest Israel-Hamas war (2023-) and the Russia-Ukraine war (2022-). As a recently published report by the AutoNorms project at the University of Southern Denmark highlights, these developments raise questions about the roles of humans and machines in decision-making on the use of force and require further consideration.
The employment of AI DSS is not a formal topic in the UN debate on AWS because, in theory, it involves humans making targeting decisions, with AI-based systems ‘supporting’ humans in the complex and multi-layered decision-making process on the use of force.
However, the mere presence of a human does not guarantee a high level of human involvement and context-appropriate judgement in targeting decisions. Dynamics of human-machine interaction—the roles of humans and AI DSS in decision-making on the use of force—raise legal, ethical, humanitarian, and security concerns which lead to the question: do AI systems ‘support’ human decision-making in a positive manner?
Uses of AI in Military Decision-making
The topic of AI in military decision-making is not new. The United States Department of Defense has a long-standing interest in using automated and AI technologies to make the intelligence analysis informing targeting decisions more efficient. One prominent illustration of this trend is the “Algorithmic Warfare Cross-Functional Team”, also called Project Maven. Launched in 2017, Project Maven aimed to use machine learning algorithms to analyze the substantial volumes of video footage collected by US drones. Currently run by the National Geospatial-Intelligence Agency (NGA), it now integrates various types of data presented in the Maven Smart System interface, which highlights potential targets based on Maven’s data analysis and extrapolation.
The Ukrainian Armed Forces are employing several AI DSS in their battlefield decision-making to defend Ukraine against Russia’s illegal full-scale invasion. Some domestically developed systems, such as Kropyva or GIS Arta, have been nicknamed ‘artillery Uber’ because they integrate data from radars, drone footage and other sources to compute and share information about Russian forces’ positions with Ukrainian artillery units in real time. Other types of AI-based decision-making software are supplied by foreign companies such as Palantir Technologies, which, according to its CEO Alex Karp, is “responsible for most of the targeting in Ukraine”.
While the Israel Defense Forces (IDF) had employed AI decision support systems before Hamas’ attacks on October 7, 2023, the publication of investigative reports on the IDF’s uses of several AI-based systems in Gaza has drawn considerable media and scholarly attention. The uses of AI DSS such as Gospel and Lavender—often intertwined in a complex network of systems and sensors—have allowed the IDF to generate an unprecedented number of targets at unprecedented speed. While Gospel and Lavender might have been intended as repositories or databases for intelligence analysts, in practice they appear to have been used as target ‘validation tools’ as part of Israel’s military operations in Gaza. These trends raise concerns over the role of humans in verifying and vetting targets in an already escalating humanitarian crisis.
The development of AI DSS appears to be a worldwide and long-lasting trend, as militaries (in partnership with some private companies) plan to integrate AI to process considerable amounts of data collected via surveillance, satellite imagery, and other sources at greater speed and scale. NGA Director Frank Whitworth said at a conference organized by Palantir that the Maven Smart System is a “tool for decision-making, not a decision-maker”. This ‘support’ of humans in decision-making is associated with efficiency and with gaining strategic advantage over adversaries in some battlefield contexts. However, the perceived ‘need for speed’ in military decision-making should be examined alongside various concerns surrounding how humans interact with AI DSS.
Exercising Agency in Human-machine Interactions
Both humans and AI-based systems bring their own biases and assumptions to decision-making. It is important to look at how these interact as part of a socio-technical system, rather than at a dichotomy between the human and “the machine [that] did it coldly”. This includes considering the variety of cognitive, data, and system biases involved; issues of trust, whether over-trust (automation bias) or under-trust (algorithmic aversion); and the institutional and political contexts surrounding the development and use of AI DSS, such as targeting doctrines and respective rules of engagement.
All these aspects risk affecting how humans exercise agency, that is, the capacity to understand and reasonably foresee the AI system’s effects within a given context, make decisions, and act upon these decisions in a way that ensures responsibility and accountability.
Exercising human agency in decisions on the use of force matters for compliance with international humanitarian law, which requires the attribution of conduct in warfare to humans. It is also needed to maintain humanity—not only humans—in the complex targeting decision-making process, especially in contexts of urban warfare with a high risk of affecting civilians. As Charli Carpenter writes, the concern is not about AI technologies replacing humans in decision-making, but about making “human decision-making too robotic, essentially transforming human operators themselves into ‘killer robots’”.
While AI DSS developed around the world are not inherently unlawful, the ways in which they have reportedly been used suggest that humans risk not having the opportunity to exercise the necessary level of agency. For instance, accounts of the IDF using AI DSS in ways that prioritize the quantity of targets, or where humans appear to ‘rubber stamp’ targets within seconds, suggest that in many contexts of use human decision-making is not positively ‘supported’ by AI systems.
The debate at the UN remains focused on AWS, but sustaining and strengthening the human role is a concern across military applications of AI, including in cases where humans formally take the decision. Humans involved in the use of AI DSS, including analysts and operators, require the necessary time and space to deliberate on an AI system’s effects in a specific context. Practical ways of ensuring this exercise of human agency in human-machine interaction should therefore be at the center of the global discussion on ‘responsible’ uses of AI in the military domain.