Autonomous Weapon Systems and Regulation – A Brief Bibliography

Well, “bibliography” is too grand for what I’ve done over at Lawfare, which is to put up a list of articles and reports currently in the debate over the regulation of autonomous weapon systems, with links and brief descriptions. I will update it periodically; I won’t add older documents, but as new things come out I’ll add them.

To recap the state of the discussion, however … coincidentally, I’m sure, Human Rights Watch and the Harvard Law School International Human Rights Clinic launched their report on autonomous weapon systems, “Losing Humanity: The Case Against Killer Robots,” the same weekend that the Defense Department issued a DOD Directive, “Autonomy in Weapon Systems.” We’ve talked about the HRW report here at OJ; it is both a report and a set of recommendations calling for a multilateral treaty that would prohibit the “development, production, and use” of autonomous weapon systems. To judge by its reception in the international NGO community, it is a call to reprise the landmines ban campaign of the 1990s. The DOD Directive, for its part, calls for integrated review of weapon systems as they acquire more automated features, along with other measures such as training of DOD personnel, to ensure that humans retain the “appropriate” level and kind of role suited to the system and its use. To say that the HRW report and the DOD Directive are headed in very different directions is something of an understatement.

After the HRW report appeared, the Naval War College’s Mike Schmitt produced and posted to SSRN a short response to it, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics.”  Mike frames his critique very much as an argument from LOAC: IHL does not prohibit autonomous weapons or weapon systems as a category of weapon, nor (other requirements of law being met) their use as such.  He also walks through the process by which DOD reviews weapons for legality, both as to the weapon itself and as to its use.  By contrast, Matt Waxman’s and my critique at Lawfare, as well as Ben Wittes’ separate critical post, were much more about policy and, especially, questioned the report’s many factual premises.  These range from HRW’s confidence that it can predict the empirical course of the technology over the long run, to its remarkably self-assured assertions that human emotions, empathy over fear, are superior in controlling the targeting and firing of weapons.

Ultimately, however, Mike, Ben, Matt, and I come to the same general conclusion: the HRW call for a preemptive ban is not likely to gain much traction and, each in our separate ways, we think it ought not to, because it is wrong in principle and, in any case, this brief and factually speculative report simply cannot support the sweeping recommendations it finally makes. Tom Malinowski has since responded on behalf of HRW to Matt and me, and separately to Ben, at Lawfare. Meanwhile, the final published version of Matt Waxman’s and my “Law and Ethics for Robot Soldiers,” a working version of which we had posted with footnotes at SSRN, appeared in the new issue of Policy Review.  Finally, because of arguments over the definitions of autonomy and automation, I included in the bibliography a very useful article by William Marra and Sonia McNeil, “Understanding ‘The Loop’: Regulating the Next Generation of War Machines,” 36 Harvard Journal of Law and Public Policy 3 (2013), which also appeared as a working paper in the Lawfare Research Paper Series (1-2012).

That’s quite a flurry of activity.  I’ll also add to the list a December 3, 2012 article in the Guardian by the prominent artificial intelligence scientist Noel Sharkey, who has been pressing for just such an international ban campaign for years and who has served as something of the intellectual inspiration and adviser behind HRW’s embrace of the ban treaty agenda.  I don’t share Professor Sharkey’s views (with some I disagree on principle; on others, such as the factual future of the technology, I’m agnostic, but unwilling to give up the possible benefits and certainly not sympathetic to HRW’s ban proposals). But he is the most persuasive voice for the ban campaign (as well as a model of grace and good humor in debating it, which is no small thing), and I’m much looking forward to meeting him at a January conference on artificial agents at the University of Virginia.  (Update: I’m only including current material going forward, rather than going backwards to generate a true bibliography.  But you’ll find that the notes to most of these pieces, where they have them, point to a lot of useful background materials on law, ethics, policy, strategy, and technology.)

Mark Gubrud

Arguments about whether machines empowered to independently decide whom to kill and when, or when and against what targets to apply violent force, can or cannot be made consistent with the ancient law of war, ignore most of the threat posed by autonomous weapons to human security and sovereignty. We are wandering into a new arms race which we will not be able to control, involving weapons which we will literally not control.

This is already unacceptable to most people: as Human Rights Watch puts it, “the thought of machines making life-and-death decisions previously in the hands of humans shocks the conscience.” It is notable that the pro-bot writers cited here have largely ignored this aspect of HRW’s case against killer robots, as well as the Martens Clause, which gives it legal force.

Drawing a red line at machines deciding the use of force is a natural way to avoid a dismal and dangerous future, and those opposed to doing so should answer a simple question: Is there any point at which they would draw the line, short of ceding all command authority to some future supercomputer? At what point, then?

Ian Henderson

Great stuff. Much appreciated. The US Naval War College has an upcoming workshop on the legal issues associated with autonomous weapon systems, so expect a few more updates to your list!


[…] two posts, one on Opinio Juris and the other on Lawfare, Kenneth Anderson has summarized the leading articles, reports and blog […]