Last November, two documents appeared within a few days of each other, each addressing the emerging legal and policy issues of autonomous weapon systems – and taking strongly incompatible approaches. One was from Human Rights Watch, whose report, Losing Humanity: The Case Against Killer Robots, made a sweeping, provocative call for an international treaty ban on the use, production, and development of what it defined as “fully autonomous weapons.” Human Rights Watch has followed that up with a public campaign for signatures on a petition supporting a ban, as well as a number of publicity initiatives that (I think I can say pretty neutrally) seem drawn as much from sci-fi and pop culture as from anything else. It plans to launch this global campaign at an event at the House of Commons in London later in April.
The other was the Department of Defense Directive, “Autonomy in Weapon Systems” (3000.09, November 21, 2012). The Directive establishes DOD policy and “assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems … [and] establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.”
In contrast to the sweeping, preemptive treaty ban approach embraced by HRW, the DOD Directive calls for a review and regulatory process – in part an administrative expansion of the existing legal weapons review process within DOD, but one reaching back to the very beginning of the research and development process. In part it aims to ensure that whatever level of autonomy a weapon system might have, and in whatever component, the autonomous function is intentional rather than inadvertent, and has been subjected to design, operational, and legal review to ensure both that it complies with the laws of war in the operational environment for which it is intended and that it will actually work in that environment as advertised. (The DOD Directive is not very long, and if you are looking for an introduction to DOD’s conceptual approach, it makes the most sense read against the background of a briefing paper issued earlier, in July 2012, by DOD’s Defense Science Board, The Role of Autonomy in DOD Systems.)
In essence, HRW seeks to ban autonomous weapon systems, rooting a ban on autonomous lethal targeting per se in its interpretation of existing IHL while calling for new affirmative treaty law specifically to codify it. By contrast, DOD adopts a regulatory approach grounded in existing processes and law of weapons and weapons reviews. Michael Schmitt and Jeffrey Thurnher offer the basic legal position underlying DOD’s approach in a new article forthcoming in Harvard National Security Journal, “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.” They say that autonomous weapon systems are not per se illegal under the law of weapons and that their legality or restrictions on lawful use in any particular operational environment depends upon the usual principles of targeting law.
I think Schmitt and Thurnher have it right as a legal matter, but there are important dissenting voices. A contrary view is offered by University of Miami’s Markus Wagner in, for example, “Autonomy in the Battlespace: Independently Operating Weapon Systems and the Law of Armed Conflict” (chapter in International Humanitarian Law and the Changing Technology of War, 2012). New School for Social Research professor Peter Asaro (who is not a lawyer, but a philosopher of technology, thus establishing himself as having the Coolest of Jobs, and also co-founder of an organization that has been calling for a ban for several years) has offered a reading of Protocol I and other IHL treaties aiming to show that human beings are built by positive, if tacit, assumption into these texts and their approach to weapons and targeting (forthcoming special section of the International Review of the Red Cross). Asaro is careful to hold out only that this interpretation is implicit, rather than explicit – a thoughtful and creative reading, though not finally one that persuades the hard-hearted lex lata lawyer in me. A debate is underway in academic law and policy – and in the Real World. It promises to heat up considerably.
Some months prior to these two documents making their appearance, however, Matthew Waxman and I published a short policy paper in the journal Policy Review, “Law and Ethics for Robot Soldiers.” It took note of arguments by those favoring a complete ban, but focused mostly on the United States (as well as other technologically advanced states; the US is far from the only country doing cutting-edge robotics, in weapons and many other things) and the possibility of developing weapon systems that might move from “automated” to “autonomous.” That paper endorsed a regulatory approach to these weapon systems, embracing transparency of standards, best practices in weapons reviews, close interaction between lawyers and engineers from the beginning of weapon system design, and so on. The Policy Review essay was devoted to setting out the problem for a lay audience without much prior knowledge, however, and was oriented toward the policy and process questions of how DOD would formulate policy, conduct legal reviews, and deal with other states and their weapon development policies. It was not primarily directed to arguments for or against a sweeping ban, since HRW had not yet launched its Killer Robots campaign.
Since then, however, Matt and Ken have been busy. And we’re pleased to announce that the Hoover Institution has just published our new policy essay, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. It revises and substantially extends our arguments on autonomous and automated robotic weapons, and shifts the focus to address the ban arguments more directly. Though longer than our first essay, it is still not long (some 12,000 words) and is intended to be readable by a general audience, not an academic one. It is available at SSRN, here (and the same pdf at the Hoover Institution website, here).