Human Rights Watch Report on Autonomous Weapons, and Matthew Waxman and Ken Anderson’s Critique
Human Rights Watch has released a new report (co-authored by the Harvard Law School Human Rights Clinic) on autonomous weapons systems that might emerge over the next several decades, titled “Losing Humanity: The Case Against Killer Robots.” The report calls for a multilateral treaty that would preemptively ban the “development, production, and use” of fully autonomous weapons by all states. It would be hard to be more sweeping than the report’s language in calling for a comprehensive ban – here is the language from the recommendations, directed to states:
Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
It happens that Matthew Waxman and I have a policy essay appearing on this topic in the December-January issue of Policy Review, “Law and Ethics for Robot Soldiers” (the link goes to a special SSRN version with footnotes). While of course sharing the concern that any new weapons system meet the requirements of the laws of war, our conclusions run in the opposite direction from Human Rights Watch’s. Over at Lawfare, we discuss why this kind of sweeping, prohibitory approach seems to us both wrong on substance and unworkable in practice. It’s a complicated topic, and I imagine we’ll post more detailed and specific critiques of the report and discuss it at Lawfare, here at OJ, and at Volokh.