Symposium on Military AI and the Law of Armed Conflict: Responsible Deployments of Militarised AI – The Power of Information to Prevent Unintended Engagements

[Tsvetelina van Benthem is a postdoctoral research fellow at the Oxford Institute for Ethics, Law and Armed Conflict, lecturer in international law for the Oxford University Diplomatic Studies Programme and senior legal adviser at The Reckoning Project. She is a member of the core team of the Oxford Process on International Law Protections in Cyberspace, and regularly advises states and organisations on the international law regulation of artificial intelligence and information and communications technologies.]

Autonomous weapons systems (‘AWS’) make many of us worry about loss of control. These systems, which, once activated, can ‘select and apply force to targets without human intervention’, seem to build a distance – temporal, geographical, causal, and even moral – between human decision-making and battlefield outcomes. A particular concern is whether, due to unpredicted (or unpredictable) activity of the weapon system, the intent of the party to conflict will fail to be adequately translated into battlefield outcomes. This can, in turn, lead to unintended consequences, including harm to civilians.

While emerging technologies, and in particular developments in the field of militarised AI, do pose these questions with particular force, the potential disconnect between party intent and battlefield outcome is not unique to these technologies. Once a rifle is fired, the bullet is also very much outside the operator’s control. A person or object may cross its trajectory, or environmental conditions may deflect it. What differentiates the case of AWS from that of the rifle and the bullet is something else: with the rifle and bullet, we have a good understanding of the factors that may affect the implementation of our intent. Air resistance, pressure and temperature, bullet shape, muzzle velocity and drag coefficient all affect the accuracy of the shot. Expectations of desired outcomes are built on knowledge of relevant information on the weapon, the handler, and the environment. In contrast, due to the perceived ‘black-box’ nature of AI, there is a concern that the factors causal to an outcome can only be understood to a limited extent (Brazil, p. 14). Responsible deployments of AWS will necessitate the generation of robust data on expected performance and on interaction with operators and the environment. The generation and careful assessment of such information can minimise the risk of unintended engagements.
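To make the contrast concrete, the factors that shape a rifle shot can be captured in a short, well-understood physical model. The sketch below is purely illustrative – the parameter names and values are assumptions chosen for the example, not figures from any particular weapon – but it shows how knowledge of muzzle velocity, drag, air density and bullet shape translates into a predictable outcome.

```python
# Illustrative point-mass model of a rifle shot: every factor that shapes the
# outcome (muzzle velocity, drag coefficient, air density, bullet mass and
# shape) is a known, measurable input, so the expected point of impact can be
# predicted in advance. All parameter values are hypothetical.
import math

def predicted_drop(muzzle_velocity=850.0,   # m/s
                   distance=400.0,          # m to target
                   mass=0.0095,             # kg, bullet mass
                   drag_coefficient=0.29,   # dimensionless, depends on bullet shape
                   air_density=1.225,       # kg/m^3, depends on pressure and temperature
                   cross_section=4.9e-5,    # m^2, frontal area of the bullet
                   dt=0.0005):              # integration time step, s
    """Return the vertical drop (in metres) at the given horizontal distance."""
    g = 9.81
    x, y = 0.0, 0.0
    vx, vy = muzzle_velocity, 0.0
    while x < distance and vx > 0:
        speed = math.hypot(vx, vy)
        # Quadratic drag opposes the direction of motion.
        drag = 0.5 * air_density * drag_coefficient * cross_section * speed ** 2
        ax = -drag * (vx / speed) / mass
        ay = -g - drag * (vy / speed) / mass
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return -y  # metres of drop below the line of departure

print(f"Expected drop at 400 m: {predicted_drop():.2f} m")
```

No comparable closed-form model exists for a learned targeting function, which is precisely why the deploying party must generate equivalent performance data by other means.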

This post identifies a problem – the risk of unintended engagements – and proposes a way to address it: the continuous generation and accumulation of information to inform targeting decisions. Section one briefly outlines areas of progress in the inter-governmental discussions on AWS, in particular those occurring at the Group of Governmental Experts on Lethal AWS (‘GGE on LAWS’). It then suggests that more attention must be paid to the key concern, and sharp-end legal question: the regulation of unintended engagements under IHL. Section two identifies four obligations under IHL whose application either depends on or generates relevant information on weapons, deployers and their interaction, targets of attack, and battlefield environments. Ensuring the uninterrupted flow of information in the feedback loop of positive and negative IHL obligations can decrease the risk of unintended engagements.

1. From Legal ‘Givens’ to the Sharp-End Legal Question on Unintended Engagements 

As with any maturing legal discussion, the discussion on the application of international law to AWS has, over time, acquired its own set of ‘givens’. These ‘givens’ are premises that seem to be universally accepted by states. While such common understandings have emerged among states, the sharp-end questions under IHL – those concerned with the regulation of unintended engagements – remain only superficially examined in the inter-governmental discussions. 

The first ‘given’, and the first and necessary step in the legal analysis, is that IHL applies to AWS (GGE on LAWS, Guiding Principle (a)). The automatic application of IHL to new weapons was already affirmed by the International Court of Justice in its 1996 Nuclear Weapons Advisory Opinion (para. 86).

The second ‘given’ is that IHL imposes obligations on parties to conflict, not on weapons systems (Estonia, p. 17). It is by now clear that the right question to ask is whether parties to conflict will be able to comply with their obligations when they deploy AWS in particular battlefield environments (see here, p. 198), and not whether the weapon can comply with the law.

The third ‘given’ is that AWS must not be designed to target civilians or civilian objects (art. 1(1)(a)). This means that parties to conflict must not deploy systems developed with the purpose of targeting civilians or civilian objects. 

There is hardly any doubt that the purposeful targeting of civilians, with AWS or any other means, violates IHL. But focusing on the purposeful targeting of civilians is, in a way, tilting at windmills. The key concern raised by states, academics and civil society organisations is not that some states or non-state actors will use AWS to intentionally harm civilians. Rather, it is a concern over the legal regulation of the unintended: unintended harms, unintended biases, unintended loss of control over the weapon (Working paper submitted by Bulgaria, Denmark, France, Germany, Italy, Luxembourg and Norway, para 12). Unintended military engagements are those engagements the modality or consequences of which are not intended (that is, desired or foreseen as virtually certain) by the party to conflict. As a category, unintended engagements capture a wide array of scenarios, including, most frequently, malfunctioning weapons and the mistaken targeting of protected persons and objects.  

What the discussions on AWS and the related risk of unintended engagements expose are areas where IHL remains unclear and under-specified. This is not a ‘new’ autonomy-related worry. Rather, the lack of clarity in the law was there to begin with. In the literature, unintended harms are often treated summarily – according to Dinstein, for instance, ‘many things can go wrong in the execution of attacks, and, as a result, civilians are frequently harmed by accident’ (para. 398). Of concern is the apparent aim of this narrative of accidents: to procure a ‘blanket safe harbor from liability’ (p. 139). There is a ring of Ambrose Bierce to this ‘accidental’ narrative. In The Devil’s Dictionary, he defined responsibility as ‘a detachable burden easily shifted to the shoulders of God, Fate, Fortune, Luck or one’s neighbour. In the days of astrology it was customary to unload it upon a star.’ History tells us, however, that the most widely publicised instances of unintended engagements, such as the 1999 bombing of the Chinese Embassy in Belgrade, the 2015 attack on the Médecins Sans Frontières hospital in Kunduz, and the August 2021 drone strike that mistakenly targeted civilians in Kabul, were all the result of faulty intelligence, or flawed procedures, or cognitive biases, or a combination thereof. It was less about things going wrong, and more about parties to conflict getting things wrong.

What, then, is the legal significance of the ‘accidental’ framing? To say that something was an accident does not mean much under IHL – it carries no legal implications. An accident is not a justification under IHL. Nor is it a circumstance precluding wrongfulness under the law of state responsibility. How we think about the legality of particular targeting conduct involving AWS depends on (1) the interpretation of specific obligations under IHL and (2) their application to specific fact patterns. Whether particular conduct leading to an unintended engagement is lawful or unlawful is, therefore, contingent on how we interpret the elements of relevant obligations. As I have argued previously, some unintended engagements involving AWS will be categorised as lawful, and others as unlawful. Lack of intent does not equal lack of breach and, consequently, lack of responsibility. To make the legal evaluation, one needs to engage in a granular analysis of specific IHL obligations.

2. IHL-Mandated Information Loops and the Prevention of Unintended Engagements

IHL contains a wide range of obligations that bear on the regulation of AWS. Despite the absence of AI-specific obligations under IHL, the law constrains parties to conflict in important ways. A system of positive and negative obligations ensures the protection of civilians from the dangers arising from military operations. Importantly, this is not a system imposing strict liability on parties to conflict for the causation of civilian harm. It is a system geared towards the decision-making of parties. The decisions reached are not assessed in hindsight, but in light of the circumstances prevailing at the time, including ‘on the basis of […] assessment of the information from all sources which is reasonably available to [the decision-makers] at the relevant time’ (British declaration, p. 14).

This information is unlikely to be a perfect mirror of reality. Battlefields can be complex, dynamic, and continuously evolving. People move from one location to another. Behaviours change. Adversaries seek to deceive. Sources lie, or misremember. Forming an accurate representation of reality in times of armed conflict is notoriously difficult. The introduction of AWS brings further complications. How AWS interact with their users and environment injects an additional risk of unpredictability, which may be particularly hard to grasp, unpack and manage. In a December 2023 report, the UK House of Lords AI in Weapon Systems Committee dedicated an entire section to the risks of unreliability and unpredictability of AWS (paras. 57–67).

And yet, despite all difficulties, parties to conflict are bound under IHL to gather information to build, to the best of their abilities, a causal universe of facts and interactions. In this causal universe, the reliability of weapons systems, their expected margins of error, warning signs of malfunction and the modalities of human-machine interaction will be key pieces of the information puzzle that allow parties to conflict to make informed decisions on the deployment of AWS.

IHL places particular demands on parties to conflict to gather information relevant to targeting. Positive obligations – that is, obligations that require parties to conflict to take certain steps – have an important information-generating function.

For instance, the obligation to carry out legal reviews of new weapons (AP I, art. 36) requires a process which examines the expected performance of weapons against applicable obligations under international law. According to Bulgaria’s input to the GGE on LAWS, legal reviews ‘should examine weapons systems against biases’, and ‘[a]ny potential alteration attributable to infield/machine learning and/or self-learning could require conducting an additional/new legal review procedure in order to guarantee IHL compliance’ (p. 16). 

Another positive obligation that plays a significant information-generating role is the precautionary obligation to verify targets of attack (art. 57(2)(a)(i) AP I; ICRC, Customary IHL, Rule 16). To comply with this obligation, those who plan or decide upon an attack must do everything feasible to determine the status of the persons and objects they plan to target. The goal is to ensure that only lawful military objectives will be made the object of attack. What is feasible will depend on the particular circumstances, including the information-gathering means available to the attacker. In discharging this obligation, the attacking party seeks to construct a picture of the battlefield and gathers information relevant to determinations of status (civilian/military). 

Compliance with positive obligations will ideally supply the party to conflict with a pool of information on the performance of a particular AWS, its potential risks, the modalities of its interaction with human operators, the battlefield environment and the identity, behaviour or characteristics of persons and objects of interest. Based on this information, the party will either be able to lawfully engage a target or be required to abstain from attack. The circumstances where parties are required to abstain from attack are regulated by negative obligations. 

Two particularly relevant negative obligations are the prohibition of making civilians the object of attack (art. 51(2) AP I; ICRC Customary IHL, Rule 1) and the prohibition of indiscriminate attacks (art. 51(4) AP I; ICRC Customary IHL, Rule 12). Starting with the prohibition of attacking civilians, I have argued in previous work that it covers not only attacks carried out with intention vis-à-vis the civilian status of the person(s), but also those where the attacker was reckless (narrow interpretation) or negligent (broad interpretation) towards that status. Under the narrow interpretation, an attacker who has doubt about the status of a person because of, for example, conflicting intelligence or the unreliability of their source, and yet proceeds with the attack, will violate the prohibition – they act recklessly, conscious of the risk that the person is in fact a civilian. Under the broad interpretation, the question is not whether the attacker subjectively experienced doubt, but whether they should have experienced it, given the information available at the time. In both the recklessness and negligence scenarios, the key question will turn on what information was available at the time of deciding on the attack.

Consider a situation similar to the 2022 Russian bombing of the Mariupol Theatre in Ukraine. At the front and back entrances of the Theatre, Ukrainians had written ‘ДЕТИ’ (children) to dissuade attackers from targeting the building. Suppose the attack was carried out by an AWS. The relevant questions would be about (1) sensors – is the sign detected?; (2) assessment of data – what can be inferred from the sign? Could it be a ruse?; (3) standard of confidence for attack – what is a tolerable threshold for risk of error?; and (4) additional measures – if the standard of confidence is not met, can additional information be generated? Ultimately, whether the party to conflict attacked a civilian object will depend on the standards programmed into the AWS for selecting objects of attack.
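To illustrate how these four questions might map onto a machine decision procedure, consider the following simplified sketch. Everything in it – the names, the thresholds, the weight given to a detected protective marking – is hypothetical and serves only to show where the legally significant choices, in particular the standard of confidence, would sit in the system’s logic.

```python
# Hypothetical, highly simplified sketch of the four questions above as an
# engagement-decision pipeline. All names and thresholds are illustrative;
# they are not drawn from any fielded system.
from dataclasses import dataclass

@dataclass
class SensorReading:
    protected_marking_detected: bool   # e.g. a 'ДЕТИ' / 'children' sign
    civilian_presence_score: float     # 0.0 (no indication) to 1.0 (certain)

def engagement_decision(reading: SensorReading,
                        confidence_threshold: float = 0.95,
                        can_gather_more_information: bool = True) -> str:
    # (1) Sensors: is the sign detected at all?
    # (2) Assessment: a detected protective marking raises the estimated
    #     likelihood that the object is civilian (it could be a ruse, so the
    #     estimate is weighted rather than treated as conclusive).
    civilian_likelihood = reading.civilian_presence_score
    if reading.protected_marking_detected:
        civilian_likelihood = max(civilian_likelihood, 0.8)

    # (3) Standard of confidence: engage only if confidence that the object
    #     is a lawful military objective meets the programmed threshold.
    military_confidence = 1.0 - civilian_likelihood
    if military_confidence >= confidence_threshold:
        return "engage"

    # (4) Additional measures: otherwise abstain, or loop back to generate
    #     further information before any decision is taken.
    return "gather more information" if can_gather_more_information else "abstain"

print(engagement_decision(SensorReading(True, 0.4)))   # -> gather more information
```

The choice of the confidence threshold, and of what the system does when that threshold is not met, is exactly the kind of parameter that a legal review and the verification obligation would need to interrogate.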

And finally, under the prohibition of indiscriminate attacks, it is prohibited both to launch attacks with weapons that are inherently indiscriminate and to launch attacks with otherwise lawful weapons that, in a particular battlefield environment, cannot be directed at a specific military objective. The prohibition does not conclusively establish whether the test is an objective one (that is, breached by the mere fact of deploying a weapon objectively characterised as indiscriminate, per se or in a given environment) or a subjective one (that is, one taking into account the decision-makers’ cognition of the indiscriminate nature of the weapon). While it may be natural to expect experienced military commanders to understand the effects of modified air bombs or Orkan rockets (para. 30), it may be more difficult to establish such expectations of commanders and operators deploying AWS. That said, if the process of legal reviews succeeds in generating an actionable pool of information on permissible and impermissible deployment contexts, risks and warning signs, then it will be easier to build expectations for commanders and other decision-makers. Developing understandable systems will be particularly important. As suggested in the Ethical Principles of AI in Defence of the UK Ministry of Defence, ‘AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design’ (Principle 3).

What this overview shows is that the positive and negative obligations under IHL operate as a system, and that this system is self-reinforcing. Positive obligations generate information against which negative obligations are assessed; and the experience of attacking targets with AWS feeds into the overall repository of knowledge, potentially triggering additional positive obligations, such as further legal reviews or verification measures. Ensuring this continuous information loop may be one of the most robust safeguards against unintended engagements, and thereby against the risk of harming civilians in the use of AWS.
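A conceptual sketch of that loop, with entirely hypothetical names and records, might look as follows: each positive obligation writes into a shared pool of information, and the negative obligations are assessed against that pool before any engagement.

```python
# Conceptual sketch of the information loop described above. The class and
# method names are illustrative only; the point is that each positive
# obligation adds to a shared pool of information against which the negative
# obligations (may this target be attacked at all?) are then assessed.
class InformationRepository:
    def __init__(self):
        self.records = []

    def add(self, source: str, finding: str) -> None:
        self.records.append((source, finding))

    def supports_engagement(self) -> bool:
        # Placeholder assessment: abstain if any record flags doubt about
        # status or flags the weapon as unreliable in this environment.
        return not any("doubt" in finding or "unreliable" in finding
                       for _, finding in self.records)

repo = InformationRepository()
# Positive obligations feed the repository...
repo.add("art. 36 legal review", "system unreliable in dense urban environments")
repo.add("art. 57(2)(a)(i) verification", "doubt about status of observed vehicle")
# ...and negative obligations are then assessed against it.
print(repo.supports_engagement())  # -> False: abstain
# Post-engagement findings feed back in, closing the loop.
repo.add("post-strike review", "sensor confusion observed in low-light conditions")
```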
