Law and Ethics for Robot Soldiers

Lethal autonomous weapons can be approached from two directions.  One is to look from the front end – starting from where the technology stands today and moving forward across its evolution, focusing on the incremental changes as they occur, and especially on how they are occurring now.  The other is to imagine the end-state – the necessarily speculative and sometimes purely sci-fi “robot soldiers” of this post’s title – and look backwards to the present.  Starting with the hypothetical technological end-point – a genuinely “autonomous,” decision-making robot weapon, rather than merely a highly “automated” one – the basic regulatory question is: what tests of law and ethics would an autonomous weapon have to pass in order to be a lawful system, starting with the fundamental law-of-war principles of distinction and proportionality?  What would such a weapon be, and how would it have to operate, to satisfy those tests?

This is an important conceptual exercise as technological innovators imagine and work toward autonomy in many different robotic applications, of which weapons technology is only one line of inquiry.  Imagining the technological end-point in terms of law and ethics means, more or less, hypothesizing what we might call the “ethical Turing Test” for a robot soldier:  What must it be able to do, and how must it be able to behave, in order to be indistinguishable from its morally ideal human counterpart?  The idealized conceptualization of the ethically defensible autonomous weapon forces us to ask fundamental questions today – who or what is accountable, for example, or how does one turn proportionality judgments into an algorithm?  Might a system in which lethal decisions are made entirely by machine, with no human in the firing loop, violate some fundamental moral principle?  All of these are important questions.  The problem with starting from them, however, is that the technology driving toward autonomous weapons is proceeding in tiny steps (and some important critics, their enthusiasm tempered by earlier promises of artificial intelligence that failed, question whether those tiny steps can ever add up to genuine autonomy) – not in giant leaps that immediately implicate these fundamental questions of full autonomy.
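
To see why the proportionality question resists easy translation, consider a deliberately naive sketch in Python.  Every name, number, and threshold below is invented for illustration; nothing here is drawn from the essay or any real system.  The crudest possible “proportionality as an algorithm” is a single threshold comparison whose inputs are themselves the contested human judgments:

```python
# A deliberately naive illustration, not a proposal: the law-of-war
# proportionality rule collapsed into a single threshold comparison.
# Every name and number here is invented for the example.

def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float,
                          excessiveness_threshold: float = 1.0) -> bool:
    """Return True if expected civilian harm is not 'excessive' relative to
    the anticipated military advantage, under this crude numeric model."""
    if anticipated_military_advantage <= 0:
        # No concrete and direct military advantage: no basis to accept harm.
        return False
    ratio = expected_civilian_harm / anticipated_military_advantage
    return ratio <= excessiveness_threshold


# The hard part is not this comparison but the inputs: each argument stands
# in for a contextual human judgment that has no agreed numeric scale.
print(proportionality_check(expected_civilian_harm=0.3,
                            anticipated_military_advantage=1.0))  # True
```

The point of the sketch is that the code itself is trivial; what resists automation is producing defensible values for its inputs.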

Indeed, the systems being automated first are frequently not the weapons themselves but other parts of the platform – which may, however, eventually carry the weapons in train.  Thus, for example, as fighter aircraft become increasingly automated in how they are flown – in order to compete with enemy aircraft that are also becoming more automated – important parts of the flight functions eventually operate faster than humans can.  At that point it becomes nearly irresistible to automate, if not make fully autonomous, the weapons systems as well, because they have to be integrated with the whole aircraft and all its systems.  We didn’t start out intending to automate the weapons – but we wind up there because the weapons are part of the whole aircraft system.

These facts about how the technology of automation is evolving are important for questions of regulating and assessing the legality of new weapons systems.  In effect, they shift the focus away from imagining the fully autonomous robot soldier and the legal and ethical tests it would have to meet to be lawful – back to the front end, the margin of evolving technology today.  The bit-by-bit evolution of the technology urges a gradualist approach to regulation: incremental advances in the automation of systems that have implications for weapons need to be considered from a regulatory standpoint that is itself gradualist and able to adapt to incremental innovation.  So Matthew Waxman and I are pleased to announce a new short paper on this topic, Law and Ethics for Robot Soldiers, which takes as its premise the need to think incrementally about the regulation of evolving automation.

The essay’s takeaway on regulation is ultimately a modest one – a quite traditional (at least from the US government’s long-term perspective) approach to weapons regulation.  Grand treaties seem to us unlikely to be suited to incremental technological change, particularly because they must imagine a technological end-state that might come about as anticipated but might instead develop in quite unexpected ways.  Sweeping and categorical pronouncements can restate fundamental principles of the laws of war, but they are unlikely to be very useful in addressing the highly specific and contingent facts of particular systems undergoing automation.

We urge, instead, a gradually evolving pattern of practice among the states developing such systems and, as part of the process of legal review of weapons systems, the development, through reasoned articulation, of how and why highly particular, technically detailed weapons systems meet fundamental legal standards.  In effect, this proposes that states develop bodies of evolving state practice – sometimes agreeing with other states and their practices, but likely at other times disagreeing.  This seems to us the most suitable means of developing legal standards for the long term to address evolving weapons technology.  Abstract below the fold.

This essay is brief and aimed at a cross-disciplinary audience; Policy Review does not use footnotes, but we have prepared this annotated version for SSRN and a scholarly audience.  Given that no one can be equally expert in all the areas covered by this essay, we are particularly grateful to the many experts who assisted us in writing the draft, and to them and others who continue to assist us with comments as we take up the same topic elsewhere.  Matt and I have also seen how much interest is developing among international law and national security academics around this topic, as well as around robots and the law more generally, and we’re delighted to be part of it.  We welcome substantive comments, here at OJ or by email.


Lethal autonomous machines will inevitably enter the future battlefield — but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability of these technologies comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment.


The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe — but as both the decision-making power of machines and the tempo of operations potentially increase, that human role is likely to diminish, if only gradually. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them — U.S. policy for resolving those dilemmas should be built on these assumptions.


The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — the contours of international law as well as international expectations about appropriate conduct — on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective or dangerous prohibitions — or those who would prefer few or no constraints at all.


(Annotated version of an essay to appear in a general interest journal that does not use footnotes in its articles; sources have been added here for scholarly convenience.)


JordanPaust


Use of spring-guns has long been outlawed in many states, and/or the killing of someone by spring-gun has been considered to be murder, or at least manslaughter – so there is some guidance in the old cases.
I suspect, however, that some of the cases addressed the use of spring-guns to protect property, which is different.


Mihai Martoiu Ticu

The law of command responsibility is enough. The first human in the chain of command of the machines should be responsible for the war crimes of the machines. If the whole army is robotized, then the head of state should stand trial. If the head of state is also robotized, the whole population of the state is responsible. It’s that simple.

Michael W. Lewis

I have yet to meet anyone at any level of command who supports (or believes in) the idea that lethal force will ever be employed without human oversight.  Although military drone development goals include “man-out-of-the-loop” in the next few decades, I don’t believe that military command culture will allow for the automated decision to employ an offensive lethal weapon (CIWS on naval vessels are already examples of automated defensive weapons systems designed to hit incoming missiles, although the decision to turn the CIWS to automatic mode is still made by a human being). Ironically, the one way that we might realistically see automated decision-making in the employment of offensive lethal force is if the trend toward expanding IHL restrictions continues.  Read together, certain commentaries and ECHR opinions suggest something approaching strict liability for commanders for any harm to civilians.  Should we reach a point where this is accepted as law (not wise in my view, but that is a discussion for another time), commanders will turn to whatever “judgment” they view as being most reliable.  If a commander believes that following the AI or algorithms in a drone is less likely to result in civilian casualties than relying on the judgment of a…

Mihai Martoiu Ticu

The problem is that the U.S. drowns in information and does not have enough people to decide (in time) when to kill someone. Therefore the U.S. wants something like “signature strikes,” letting computers decide whom to kill. Thus the U.S. seeks a way to fool the world into thinking that nobody is responsible. After all, it is the dumb robot that did it.

Ian Henderson

Automated target recognition systems leave the man in the loop to authorise weapons release, but does that truly ‘add value’ if the operator has no independent means to verify the target as hostile, friendly or unknown? A number of fratricide cases were attributed to Patriot battery systems because the operator trusted the target characterisation provided by the system and therefore authorised weapon release.
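
The worry can be seen even in a toy model of the authorisation loop. The sketch below is illustrative only; the class and field names (Track, system_classification, and so on) are invented, not drawn from Patriot or any fielded system. It shows a release gate in which the operator’s only evidence is the machine’s own characterisation of the track, so the human confirmation step adds accountability and latency but no independent information:

```python
# Toy model of a human-authorised release gate; names are invented for
# illustration and not drawn from any fielded system.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Track:
    track_id: str
    system_classification: str  # "hostile", "friendly", or "unknown"
    system_confidence: float    # the system's own self-reported confidence


def request_release(track: Track,
                    operator_confirms: Callable[[Track], bool]) -> bool:
    """Weapon release requires human authorisation, but the human sees only
    what the machine reports about the track."""
    if track.system_classification != "hostile":
        return False
    return operator_confirms(track)


# A plausible, and problematic, operator policy: defer to the machine's label
# whenever its self-reported confidence is high.
released = request_release(
    Track(track_id="T-042", system_classification="hostile",
          system_confidence=0.97),
    operator_confirms=lambda t: t.system_confidence > 0.9,
)
print(released)  # True, though the operator verified nothing independently
```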

Ian Henderson

A very topical area at the moment. A fair part of my own draft article is about reliability (or verification & validation). Assume for a moment that the technology exists to conduct target ID. How confident do you need to be that it will work, and to what level of reliability?
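
One conventional way to make “how confident, and to what level of reliability” concrete is a zero-failure reliability demonstration: if n test engagements all produce correct identifications, the one-sided lower confidence bound on the true success rate R at confidence C is R ≥ (1 − C)^(1/n). A minimal sketch follows, with purely illustrative numbers rather than any proposed acceptance standard:

```python
# Zero-failure binomial reliability demonstration (illustrative only).
# If n trials all succeed, the one-sided lower bound on the true success
# rate R at confidence C is (1 - C) ** (1 / n); inverting gives the number
# of all-successful trials needed to claim a target reliability.

import math


def trials_required(target_reliability: float, confidence: float) -> int:
    """Minimum number of consecutive successful trials needed to demonstrate
    `target_reliability` at `confidence`, assuming zero failures."""
    return math.ceil(math.log(1.0 - confidence) / math.log(target_reliability))


# Demonstrating 99.9% correct target identification at 95% confidence would
# take roughly 3,000 flawless test engagements.
print(trials_required(target_reliability=0.999, confidence=0.95))  # 2995
```

The arithmetic is simple; the institutional question of what reliability and confidence levels a legal review should demand is the part the comment is pointing at.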