Autonomous Weapons and a Campaign for a Treaty Ban
The debate over autonomous weapons is not especially visible in the United States, but the ban campaign launched by Human Rights Watch a year ago – an international NGO coalition called the “Campaign to Stop Killer Robots” – has been quite active in Europe and at the UN, where a number of countries raised the issue in their statements to the General Assembly’s First Committee (disarmament issues). Matthew Waxman and I have been writing about this issue for several years; we have a short policy paper on the topic, “Law and Ethics for Autonomous Weapon Systems,” available at SSRN, and we’re pleased to note our op-ed in Monday’s (November 4) Wall Street Journal, “Killer Robots and Laws of War.” We argue against a ban on a number of grounds; the op-ed is available open access at RealClearPolitics, here. Here are a couple of grafs from midway through the piece (later on I’ll add links to the ban campaign and some other resources; must go teach class!):
[A] ban is unlikely to work, especially in constraining states or actors most inclined to abuse these weapons. Those actors will not respect such an agreement, and the technological elements of highly automated weapons will proliferate. Moreover, because the automation of weapons will happen gradually, it would be nearly impossible to design or enforce such a ban. Because the same system might be operable with or without effective human control or oversight, the line between legal weapons and illegal autonomous ones will not be clear-cut.
If the goal is to reduce suffering and protect human lives, a ban could prove counterproductive. In addition to the self-protective advantages to military forces that use them, autonomous machines may reduce risks to civilians by improving the precision of targeting decisions and better controlling decisions to fire. We know that humans are limited in their capacity to make sound decisions on the battlefield: Anger, panic, fatigue all contribute to mistakes or violations of rules. Autonomous weapons systems have the potential to address these human shortcomings. No one can say with certainty how much automated capabilities might gradually reduce the harm of warfare, but it would be wrong not to pursue such gains, and it would be especially pernicious to ban research into such technologies.
That said, autonomous weapons warrant careful regulation. Each step toward automation needs to be reviewed carefully to ensure that the weapon complies with the laws of war in its design and permissible uses. Drawing on long-standing international legal rules requiring that weapons be capable of being used in a discriminating manner that limits collateral damage, the U.S. should set very high standards for assessing legally and ethically any research and development programs in this area. Standards should also be set for how these systems are to be used and in what combat environments.