Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework

[Dr Ingvild Bode is Associate Professor at the Center for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council-funded AutoNorms project, based at the Center for War Studies, University of Southern Denmark.]

Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations, especially notable popular culture depictions of ‘sentient’ humanoid machines willing to destroy humanity or ‘robot wars’ between machines, integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, likely with increasing frequency, as part of their day-to-day activities, including the targeting process. Some militaries have adopted the language of human-machine teaming to describe these instances of human-machine interaction. The term can refer to humans interacting with uncrewed, (semi-)autonomous platforms or with AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability. 

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kinds of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including in but not limited to the targeting process. There is therefore a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, as well as the principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies, whose use is chiefly associated with the speedy processing of large amounts of data and information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Instead of producing zero-sum outcomes, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Understanding this distributed agency requires, in the first place, considering the particularities of how ‘human’ and ‘AI’ agents make choices and act, and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes programmers who create the basic algorithmic parameters, workers who prepare the data required to train machine learning algorithms through a series of iterative micro-tasks often subsumed under ‘labelling data’, and the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances, or action potentials, and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature on human factors analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. The stakes are particularly high in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.

2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024. 

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN Convention on Certain Conventional Weapons (CCW) in Geneva over the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction occur, and need to be addressed, at various lifecycle stages: from research & development, procurement & acquisition, and testing, evaluation, verification, and validation (TEVV) to potential deployment and retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that the challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Common themes across the REAIM and US initiatives include commitment to international law, accountability, and responsibility; ensuring global security and stability; human oversight over military AI capabilities; and appropriate training of the personnel who interact with these capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law, which itself offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system, which includes not only technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent approaches on the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on the responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are taking steps in this direction: Austria, for instance, took the initiative by co-sponsoring a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or set of obligations, but at this stage they are insufficient to address the already existing challenges of human-machine interaction in a security and military context, not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.
