New Technologies Symposium: Autonomous Weapons Systems – Why Keeping a ‘Human On the Loop’ Is Not Enough

[Alejandro Chehtman is Professor of Law at Universidad Torcuato Di Tella. This post is part of our New Technologies and the Law in War and Peace Symposium.]

In New Technologies and the Law in War and Peace (CUP, 2018), Bill Boothby and his colleagues have written an important collection of essays exploring the regulation of new weapons systems under the laws of both war and peace. The book concentrates on a number of pressing issues, including cyber capabilities, autonomous weapons systems, military human enhancement, non-lethal weapons (which they call human degradation technologies), nanomaterials, and biometrics, among others, with contributions from outstanding scholars. It also provides a useful introductory section outlining the legal frameworks that regulate these new technologies, both inside and outside armed conflict. Needless to say, this book embarks on a phenomenally ambitious enterprise, which the authors largely honour. The book is informative, tightly argued, and highly topical. It is a wonderful resource for researchers and advanced students, and it should probably be required reading in any IHL course.

I am grateful to the editors of Opinio Juris for the opportunity to comment on some of the book’s findings. I will concentrate on a particular aspect of Boothby’s insightful treatment of autonomous weapons systems, provided for the most part in Chapter 6. Against much of the current literature, he does not support a strict ban on this technology. A more rational approach, he argues, would be to carefully assess the potential that, if properly developed, it can offer in terms of reducing harm during armed conflict. In effect, Boothby draws useful comparisons between autonomous weapons systems and equivalent technologies in non-military contexts, particularly autonomous planes in civil aviation, automated and autonomous medical surgery, and driverless cars. In all of these domains, the development of autonomous systems is usually considered not only acceptable, but a significant improvement over the status quo. Analogously, given their greater speed in decision-making and their capacity to analyse vast amounts of data and perform improved calculations, autonomous weapons systems would in theory be capable of reducing harm to a belligerent’s own forces, as well as improving its chances of winning the war. Arguably, they would also reduce incidental harm to non-combatants. After all, their targeting decisions would not ‘be distorted by fear, anger, vengeance, amnesia, panic, tiredness or other peculiarly human fallibilities’. (160) Ultimately, Boothby argues that, subject to certain conditions, most notably keeping a human ‘on the loop’, these technologies could likely be considered lawful and indeed acceptable, albeit subject to close scrutiny and rigorous regulation.

In this short reaction I will argue that there are important challenges to this particular safeguard that require further consideration before the use of these weapons should be considered permissible, under both law and morality. But before substantiating this claim, let me first clarify which types of technologies are under analysis here. Boothby distinguishes between ‘highly automated’ and ‘autonomous’ weapons systems. While the former are constrained ‘by algorithms that determine [their] responses by imposing rules of engagement and setting mission parameters’, (142) the latter identify and decide to engage targets of their own choosing by applying ‘artificial reasoning as part of the decision-making process.’ (143) This reasoning, Boothby adds, will be applied to make two types of decisions. The first is whether the object or person of attack is a lawful target; the second is whether it would be lawful and appropriate to engage that target. (144) The critical feature for present purposes is that such a platform has the capacity to select and engage a target through artificial intelligence (AI) reasoning, that is, without direct human intervention.

Boothby readily accepts that, at the current stage of technological progress, it is unlikely that any such platform would itself satisfy the customary principles of distinction and discrimination, as well as the rules on proportionality and precautions in attack. However, he recognizes that the more interesting question is whether these weapons would be lawful if their decision-making procedures could satisfy these IHL requirements. In effect, this type of capability may be attainable in the not-too-distant future. Critically, Boothby argues, the use of this type of weapon would ‘be potentially lawful if … a person will be placed “on the loop”’. (145) By this, he means that a human operator will have the capability to intervene and override an attack decision that the highly automated or autonomous platform has made. Furthermore, this ‘human on the loop’ would satisfy the political requirement of ‘meaningful human control’ advocated, inter alia, at a 2016 CCW meeting of experts, even for offensive attack operations (T. Marauhn, ‘Meaningful Human Control – and the Politics of International Law’, in H. von Heinegg et al. (eds.), Dehumanization of Warfare (2018), at 217, cited in Boothby, at 161). Admittedly, Boothby acknowledges that having a human on the loop is not the only method of satisfying this requirement. Alternatives include ‘decisions made in advance of the mission, coupled with technical limitations imposed on the kinds of lesson that the technology is permitted to learn’. (161) Notably, insofar as these measures would curtail some of the specific advantages that AI provides in this type of scenario, I think it is unlikely that they would be accepted as plausible constraints on the use of autonomous weapons systems during armed conflict. It is on this particular aspect of the analysis that I shall concentrate.

Interestingly, the reason why Boothby defends this requirement is not, as Christof Heyns and others have argued, that a machine deciding whether an individual should be killed is inherently wrong, or that it would somehow violate that individual’s dignity. Nor does it rest on the (also widespread) claim that if an autonomous weapon targets an individual in violation of IHL rules, there will be no accountability for such a violation. As Boothby argues, there will be individuals responsible for developing, acquiring, testing, and managing the operation of these machines, just as there are officials responsible for the functioning of existing weapons systems that are sophisticated and often include automated features. (152) Rather, he believes that the human on the loop is crucial to prevent (clear) mistakes by the algorithm: ‘it may be clear that the attack is headed towards a civilian taking no part in the hostilities, or a civilian object, or because the attack will cause disproportionate harm to civilians or civilian objects, or if the person or object have special protection under the LOAC (such as medical personnel, or cultural property), or the attack would be contrary to the commander’s intent.’ (140)

Boothby compares the situation of this operator with that of a surgeon overseeing a highly automated or autonomous surgical system. Much like the human on the loop in an autonomous weapons system, the surgeon needs to make evaluative assessments which do not amount to mere ‘arithmetic’ calculations. Like the surgeon, the human on the loop in autonomous weapons systems ought to have ‘sufficient visibility of what the machine is deciding’, ‘sufficient time to intervene’, and ‘sufficient skill and expertise’. (139 and 178, respectively)

I have significant concerns with the proposition that having a human on the loop in this type of context provides a viable (sufficient) safeguard. Rather, I believe more research is needed before we can share Boothby’s conclusion. A first serious problem is AI’s lack of transparency. Algorithms are not public and usually constitute highly protected secrets. They are also capable of taking into consideration an enormous amount of data and of processing it in ways that are entirely beyond the reach of a human brain. In fact, AI is moving away from explicitly programmed algorithms and towards neural networks, deep learning systems which by their very nature cannot be transparent. It is therefore often impossible, even for the users of these systems, to understand the basis of a particular decision.

Second, cognitive biases and institutional incentives would make human intervention inefficacious. For one, humans on the loop would be affected by so-called automation bias, that is, the tendency of human beings to rely on results generated by automated systems over their own judgment. Humans ‘have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct’, a tendency that can be exacerbated in time-critical domains such as armed conflict. This is precisely what happened when U.S. Army Patriot missile batteries engaged a British and an American plane during the war in Iraq in 2003. Furthermore, as Bostrom and Yudkowsky have put it, even ‘if an AI system is designed with a user override, one must consider the career incentive of a bureaucrat who will be personally blamed if the override goes wrong, and who would much prefer to blame the AI for any difficult decision with a negative outcome.’ (See N. Bostrom and E. Yudkowsky, “The ethics of artificial intelligence”, in K. Frankish and W.M. Ramsey (eds.), The Cambridge Handbook of Artificial Intelligence (2014), at 317.)

Finally, as Boothby recognizes, the introduction of highly automated or autonomous weapons systems would make fast decision-making all the more crucial. AI can reach this type of decision much faster than a human brain, and delays would therefore end up costing lives (see, e.g., Elke Schwarz, “The (im)possibility of meaningful human control for lethal autonomous weapon systems” (29 August 2018)). Accordingly, it is unlikely that a human on the loop would be afforded ‘sufficient time to intervene’ without losing a critical advantage provided by these systems. In Boothby’s words, ‘rapidly materializing mass attack threats may necessitate decision-making processes that operate at a computer’s, as opposed to at human, speed’. (159)

In sum, Boothby provides a highly interesting conceptual framework within which autonomous weapons systems should be regulated. His argument pushes contemporary debates forward by finding a plausible middle ground between those advocating a strict ban on ‘killer robots’ and those (few) who seem to have unlimited confidence in this type of technology. Nevertheless, I believe there is still more work to do before we can conclude that a human on the loop will make the employment of autonomous weapons systems safe enough for use both in armed conflict and in times of peace. This is of course not per se an argument against autonomous weapons systems, but rather an important challenge to defending their acceptability on the basis of this particular safeguard.

Print Friendly, PDF & Email
Topics
Books, Courts & Tribunals, Environmental Law, Featured, Foreign Relations Law, General, International Criminal Law, International Human Rights Law, International Humanitarian Law, Law of the Sea, National Security Law, Symposia, Trade & Economic Law, Use of Force
No Comments

Sorry, the comment form is closed at this time.