Some Thoughts on Negotiating a Treaty on Autonomous Weapon Systems

[Maziar Homayounnejad is currently a PhD researcher at the Dickson Poon School of Law, King’s College London. His research primarily focuses on law of armed conflict aspects of autonomous weapon systems, with a secondary focus on arms control and non-proliferation.]

On November 13-17, 2017, the UN, acting under the auspices of the Convention on Certain Conventional Weapons (CCW), convened its first Group of Governmental Experts (GGE) meeting on lethal autonomous weapons systems (LAWS). After three detailed but informal meetings in 2014, 2015, and 2016, there was a strong sentiment that purely informal discussion had run its course, and that the time was right to proceed under a more formal mandate: to “explore and agree on possible recommendations on options related to emerging technologies in the area of LAWS”.

Once the mandate was confirmed, expectations rose amongst some non-governmental organizations (NGOs) that a ban on ‘killer robots’ might follow. However, as other commentators noted at the time, the formal mandate made no reference to negotiating a LAWS treaty; this was clearly a result of divergent views within the CCW’s membership over how best to deal with LAWS. It may therefore have been over-optimistic to expect anything other than continued talks to come of the move.

Fast-forward to the first GGE last November, and it remains clear that sharp divisions between States still bedevil the diplomatic process. As Denise Garcia explained in a recent piece, there are at least three groups of States with divergent positions.

  • Those, like China, Russia, and the US, which oppose a ban or any specific regulation in the near term, and instead want continued talks on more basic issues, such as arriving at a proper definition of LAWS.
  • A group consisting mainly of EU States, which advocates a path towards a politically binding agreement where the concepts of ‘autonomy’ and ‘human control’ serve as a foundation for future discussion.
  • The so-called Non-Aligned Movement, which consists of a large number of diverse States, and which tends towards either a ban treaty or, at the least, moratoria on the production and use of LAWS.

With militarily significant States taking one position, and a group of smaller but far more numerous States going the opposite way, it is difficult to imagine any binding solution in the near future.

Making matters worse is the rapid and unpredictable pace of technological change, which makes conventional attempts to regulate weapons particularly problematic when applied to LAWS; akin to trying to pin down a moving target. As Paul Scharre writes, much of the technology of AI and autonomy has gone from science fiction to viable concept in just the three years since the CCW began informal talks. On the one hand, this should not be a complete surprise, as the “perfect storm of parallel computation, bigger data, and deeper algorithms”, which is giving rise to stronger AI, was apparent even in 2014. Yet, the precise level of success and its sheer speed of arrival were not easily foreseeable back then. Now, neural networks can beat humans at poker and Go, and a genetic algorithm has triumphed over a human fighter pilot in a simulated aerial dogfight. The Report of the GGE also acknowledged this instability (see paragraph 16e of the main Report, and paragraph 29 of Annex II), which formed part of its reasoning for extending formal talks into 2018 (paragraph 17a).

Looking ahead, there are still significant weaknesses in AI, but the pace and unpredictability of technological progress will very likely accelerate, and this may or may not resolve those shortcomings. In particular, developments in neuroevolution and newer applications of quantum physics in the defense and national security sphere seem set to create major technological disruption. Not surprisingly, this has raised legitimate questions about how best to approach a lagging (and somewhat divided) diplomatic process for regulating LAWS. The question is all the more pressing now that the first week of the 2018 GGE has been confirmed and brought forward to February.

One solution – suggested by Scharre, and also by several State delegations at the November GGE – is to move the focus away from the technology, and back to the one constant in war: the human. Namely, even if the technology were able to perform every task in the targeting process, what decisions do we believe still require uniquely human judgment, and why? And what role would we still want humans to play in the application of lethal force? This argument is not new as such; it was advanced in various forms throughout the earlier informal meetings, not least by the London-based NGO Article 36 through its concept of ‘meaningful human control’. The difference now is that it carries greater weight and urgency, because the bewildering pace of technological change will likely render any tech-specific instrument obsolete by the time it comes to be ratified.

On the other hand, Rebecca Crootof is less sanguine about a purely tech-neutral approach to LAWS regulation. In a recent Twitter discussion, she noted that there are pros and cons to both tech-specific and tech-neutral approaches, and that a robust regime should incorporate both, to address known and unknown issues alike. If we restrict our focus to tech-neutral questions, Crootof argued, we lose the opportunity to address the specific problems we already know about.

Accordingly, the dual LAWS problem at the CCW would seem to be: a) a deeply divided membership; and b) rapid technological change, which creates uncertainty both over the continued viability of any negotiations and over how technology-focused any resulting instrument should be. This arguably calls for a departure from the standard CCW approach to weapons regulation.

In a recent paper published by the Transnational Law Institute at King’s College London, I examine various ways to ensure that LAWS can be developed, deployed, and used in compliance with international humanitarian law (IHL). Specifically in relation to development (at pages 40-48), I argue in favor of an approach modelled on the Convention on Cluster Munitions (CCM). The CCM imposes a strict and unambiguous ban in Article 1, with a very wide scope of application that appears to sound the death knell for ‘cluster munitions’. Interestingly, however, Article 2(2)(c) allows for technical developments, which the chapeau to the Sub-Paragraph presumes will “avoid indiscriminate area effects and the risks posed by unexploded submunitions”. It does this by excluding from the definition of the (prohibited) ‘cluster munition’ any weapon that cumulatively possesses five specific technical characteristics aimed at improving reliability and accuracy (the five criteria are sketched below). According to the CCM Commentary, these criteria should avoid, or sufficiently reduce the likelihood of, (sub)munitions creating significant humanitarian problems.

The dual humanitarian problem of cluster munitions (as also gleaned from the second preambular clause) is understood to be “indiscriminate area effects” at the time of use, as well as the “risks posed by unexploded submunitions” when they fail to function as intended or are left abandoned. By articulating these two problems that the subsequent technical characteristics are intended to avoid, the chapeau to Sub-Paragraph (c) serves an important dual role. It provides:

  • A justification for the exclusion of weapons that meet the five technical criteria; and
  • A potential mechanism for determining if these technical criteria function as intended (paragraph 2.120, CCM Commentary).

In this way, the chapeau links the definition of what is prohibited to the humanitarian effects that are the basis for prohibition and, as such, is an important legal innovation. While cluster munitions are not designed to create these humanitarian problems, Sub-Paragraph (c) stipulates that (sub)munitions must be deliberately designed to avoid such effects if they are to escape prohibition. Accordingly, the Sub-Paragraph as a whole takes both a design-led and an effects-based approach, via the technical criteria and the chapeau, respectively.
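To make that cumulative, design-led logic concrete, here is a minimal sketch in Python. The field and function names are my own invention, but the five criteria mirror those listed in Article 2(2)(c); the point is simply that the exclusion operates as a conjunction, so failing any single criterion leaves the weapon within the prohibited definition.

```python
from dataclasses import dataclass

@dataclass
class MunitionDesign:
    """Design attributes tracking Article 2(2)(c), CCM (illustrative only)."""
    submunition_count: int          # (i) fewer than ten explosive submunitions
    submunition_weight_kg: float    # (ii) each weighing more than four kilograms
    single_target_detection: bool   # (iii) designed to detect and engage a single target object
    self_destruction: bool          # (iv) electronic self-destruction mechanism
    self_deactivation: bool         # (v) electronic self-deactivating feature

def excluded_from_prohibition(design: MunitionDesign) -> bool:
    """The five criteria apply cumulatively: a weapon escapes the
    'cluster munition' definition only if all of them are met."""
    return (
        design.submunition_count < 10
        and design.submunition_weight_kg > 4.0
        and design.single_target_detection
        and design.self_destruction
        and design.self_deactivation
    )
```

This design-led test says nothing, of course, about real-world effects; that is the chapeau’s effects-based role, to which the second sketch below returns.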

Importantly for the 2018 GGE, this turns out to be the most ‘LAWS-relevant’ part of the CCM. Transplanting it into a LAWS treaty would enable lawyers to define the legal and humanitarian standards that autonomous technologies must reach to fully comply with IHL, leaving programmers and engineers to try to build systems that meet them. Should the state of technology fail to reach the prevailing legal standards, there would be a de facto ban on LAWS. Conversely, if and when the relevant technologies are able to perform to those standards, they would potentially be lawful. This approach may therefore help to allay some of the fears of ban proponents, while affording the more hesitant States an opportunity to demonstrate which specific technologies may be consistent with humanitarian standards yet offer genuine military utility; all consistent with the well-established precautionary principle.

Thus, by drafting a rule similar to Article 2(2)(c), CCM, a LAWS regulation treaty could bring clarity in several ways.

  • Firstly, it can articulate the humanitarian risks posed by LAWS that are poorly designed or otherwise not fit for purpose (similar to the CCM’s chapeau). These might include the ‘risk of indiscriminate attack’, ‘distinction failure’, and ‘insufficient civilian risk mitigation’, amongst others. In turn, this would provide a legal basis for the presumed permissibility of LAWS that are deliberately designed not to pose such risks.
  • Secondly, the rule can set specific technical criteria. These will consist mainly of baseline technical requirements for sensory, processing, and computational capabilities deemed necessary to obviate the humanitarian risks identified (similar to the CCM’s technical design criteria). However, the rule can also lay down specific context-based programming requirements (such as ‘conservative use of lethal force’); stipulate suitable shut-off capabilities; and mandate intelligent reversion to remote piloting, where appropriate.
  • Finally, as LAWS have yet to be used in battle, the technical requirements and capabilities can be periodically compared against the statement of humanitarian risks, to ensure that they function as intended (similar to the second role of the Article 2(2)(c) chapeau). If they do not, it may be possible to amend the technical criteria at regular intervals, for example using evidence-based data presented to a Meeting of State Parties or a Review Conference (paragraph 2.38, CCM Commentary). Arguably, even in the intervening periods, State Parties could be placed under a duty to do everything feasible to gauge the humanitarian effects of a given LAWS (using onboard sensors), and to refrain from continuing deployments in the face of clear evidence of humanitarian harm (see the sketch after this list).
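As a purely illustrative companion to the first sketch, the following Python fragment shows how the effects-based side of such a rule might be operationalized: a statement of humanitarian risks is paired with deployment evidence (gathered, say, from onboard sensors), and the technical criteria are flagged for amendment where the evidence shows harm. Every name and threshold here is a hypothetical placeholder of my own, not a proposed legal standard.

```python
from dataclasses import dataclass

# Hypothetical 'statement of humanitarian risks' (the chapeau role).
# Zero-tolerance thresholds are placeholders, not proposed standards.
RISK_THRESHOLDS = {
    "indiscriminate_attack_rate": 0.0,  # engagements of unverified targets
    "distinction_failure_rate": 0.0,    # civilians misclassified as combatants
}

@dataclass
class DeploymentEvidence:
    """Effects data on a given LAWS, e.g. from onboard sensors."""
    indiscriminate_attack_rate: float
    distinction_failure_rate: float

def criteria_function_as_intended(evidence: DeploymentEvidence) -> bool:
    """Effects-based review: do the technical criteria actually avoid
    the articulated humanitarian risks in practice?"""
    return all(
        getattr(evidence, risk) <= threshold
        for risk, threshold in RISK_THRESHOLDS.items()
    )

def review_outcome(evidence: DeploymentEvidence) -> str:
    """Periodic review, e.g. at a Meeting of State Parties: retain the
    technical criteria, or table them for amendment and pause deployment."""
    if criteria_function_as_intended(evidence):
        return "retain current technical criteria"
    return "table amendment of technical criteria; suspend affected deployments"
```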

Of course, another compelling reason for periodic review and amendment of the technical criteria is the rapid and unpredictable pace of technological progress, outlined above. It is not inconceivable that the current state of the art in LAWS-relevant technologies will appear relatively basic in five years’ time. It would therefore benefit the continuous improvement of humanitarian standards to keep the state of technology under review, and to update the technical criteria accordingly; notwithstanding the possibility that the extant criteria may already meet the chapeau’s humanitarian standards.
