27 Nov DOD Directive on “Autonomy in Weapons Systems”
At almost the same moment that the Human Rights Watch/Harvard Law School Human Rights Clinic report, "Losing Humanity: The Case Against Killer Robots," called for states to establish a treaty prohibiting the "development, production, and use" of "fully autonomous weapons," the Pentagon (under Deputy Defense Secretary Ashton Carter's signature) issued a DOD Directive, "Autonomy in Weapons Systems." The DOD Directive sets out standards and mandates review of the autonomy and automation features of rapidly proliferating "automating" military systems, as they are developed and evolve, to ensure compliance with the laws of war and, more broadly, to ensure that both the design of these systems and their operational use in the field maintain "appropriate" levels of human control in any weapons use. Matthew Waxman and I discussed the HRW report at Lawfare; Spencer Ackerman, at Wired's Danger Room, discusses the HRW report, the DOD Directive, and the approach Matt and I take in our "Law and Ethics for Robot Soldiers." Benjamin Wittes at Lawfare excerpts some important chunks of the DOD Directive.
Ackerman says of the DOD Directive that the "Pentagon wants to make sure that there isn't a circumstance when one of the military's many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being." The Directive seeks to "'minimize the probability and consequences of failures' in autonomous or semi-autonomous armed robots 'that could lead to unintended engagements', starting at the design stage." Its solution – unlike HRW's call for what its report terms an "absolute ban" – is based upon constant reviews of the military system, from the inception of design forward (unintended effects on weapons systems might occur because of changes to non-weapons systems, after all). The DOD Directive is intended to be flexible in application and to apply to all military systems, so it relies on a general standard of "appropriate" levels of human control over the system at issue, without specifying in each case what that will mean.
Ackerman adds that Matt Waxman and I should be pleased with the Directive’s approach, and we are. In our “Law and Ethics for Robot Soldiers” article, he notes, we
observe that technological advancements in robotic weapons autonomy is far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”
Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)
In one sense, I suppose HRW could say that this is what its report calls for, since it tries to build a notion of incremental reviews into what a treaty should mandate. But the purpose of these reviews under HRW's proposal seems to be to indicate when the absolute ban on "development" of autonomous weapons systems is triggered. The HRW report is not, to my reading at least, completely clear on what "development" means in the context of incremental reviews, or in the context of what the report itself calls an absolute ban; it seems to be trying to mix absolute apples with incremental oranges.
The role of incremental reviews for the Directive, by contrast, is not to decide whether some point triggering an absolute ban has been reached, but instead to determine whether the technological system, at that point in its development, preserves the "appropriate" amount of human control and, in the case of a system in the process of design and development, will continue to do so as development continues toward a final system that must be legally evaluated for deployment. This is a quite distinct meaning of "reviews"; it is certainly not an absolute ban on "development" of systems that, in a world of murky, incremental technological progress, might be closing in on human-less autonomy but might not. It is flexible as applied to incrementally advancing technology, not absolute. It is also worth pointing out that while there is a fundamental legal standard at issue here – the requirement of legal review of weapons for compliance with the laws of war – most of this is really policy aimed at implementing law at the front end, particularly with regard to the incremental, and in some cases incidental-but-dangerous, progression of systems that are still at the design stage.
At the end of the day, I think the DOD Directive's approach will be the one taken by countries developing automated technologies in weapons and military systems generally, and not just the US. But Matt Waxman and I will have more to say about both these documents and their respective approaches over the next while.