Guest Post: Autonomous Weapons at Chatham House: It’s Bentham versus Kant

[Charles Blanchard served as General Counsel of the U.S. Air Force from 2009-2013 and as General Counsel of the Army from 1999-2001, and is currently a partner at Arnold & Porter LLP. He was a panelist at a Chatham House conference on autonomous weapons.]

In the past year, proposals for an autonomous weapons ban have gone from a fringe notion to an agenda item for this year's Convention on Conventional Weapons meeting.  Last week, I joined a diverse group at a Chatham House conference to discuss the issue.  The participants included advocates of a ban, technologists on both sides of the issue, government officials, and Law of War experts.  The conference was surprisingly useful in illuminating a good deal of common ground and, more importantly, the philosophical differences between proponents and skeptics of a ban.

The degree of agreement was actually surprising, but also very helpful:

  • There was broad agreement that except in very unusual battle spaces (where the presence of civilians is essentially nonexistent), deployment of autonomous weapon systems today would not be consistent with the requirements of International Humanitarian Law (IHL).  One panel concluded that current technological limits, combined with IHL, have effectively created a temporary moratorium on deployment of completely autonomous weapon systems.
  • Even the proponents of a ban would not oppose completely autonomous weapon systems that targeted other machines—such as might be the case with missile defense systems in the future—assuming that the technology develops sufficiently to meet IHL requirements. To be clear, however, the proponents of a ban would oppose even a purely defensive system that targeted tanks, airplanes or other platforms with human beings on board.
  • While the proponents of a ban advocate a ban on development, they seem to agree that a very narrow definition of “development” is appropriate.  They would not attempt to limit research and development of even dual-use civilian applications of autonomy, and they would not oppose development of semiautonomous weapons technology as long as a human remains in the loop.  The development ban would apply only to the creation of completely autonomous systems.

So what was the disagreement?  It centered on the following question:  if technology ever developed to the point that machines were more capable than humans of complying with IHL, should autonomous weapons be banned?  The skeptics of a ban, such as me, argued that it would be troubling to accept the additional civilian casualties that would result from a ban.  The proponents, on the other hand, argued that it would violate notions of human dignity to let a machine decide whom to kill.

While one advocate calls the aversion to allowing machines to kill the “yuck” factor, a panel of ethicists and philosophers illuminated what this difference is really about:  the skeptics (like myself) are advocating a utilitarian ethical scheme (the greatest good for the greatest number), while the proponents are applying Kant’s categorical imperative that no human being should be used as an instrument.  While a utilitarian would focus on whether civilian (and military) casualties would be fewer if autonomous weapons were used, a Kantian would object to the removal of humans from lethal decisionmaking altogether.  One panelist noted that Germany’s highest court had rejected a purely utilitarian view in overturning a law that allowed hijacked planes to be shot down.  Even though the passengers on the plane would likely die anyway, and shooting down the plane would save other lives, the German court concluded that the statute violated human dignity.

So what are we to make of this philosophical dispute?  To a great degree, warfare and the laws of war have arisen out of utilitarian philosophical frameworks.  Indeed, one could argue that the very use of military force against other human beings (even for a righteous cause) violates Kant’s categorical imperative.  And the IHL concept of proportionality—that an attack on a military target is permissible even if there will be civilian casualties, as long as those casualties are “proportional” to the military value of the target—is expressly utilitarian and inconsistent with Kant.

Nonetheless, the development of IHL is itself a history of a battle of humanitarian concepts (Kant) against utilitarianism. Arguably, the 1925 Geneva Gas Protocol was a victory of humanitarian concepts over pure utilitarianism. And the more recent bans on land mines and cluster munitions are also clearly motivated by nonutilitarian concepts.

Now obviously there is much more at stake here than this narrow philosophical dispute.  Purely utilitarian concerns about strategic stability—based on the fear that purely autonomous systems will make it more likely that countries will go to war—can also lead to concerns about these weapons.  But at its root, the debate here is whether the principle that only humans should decide to kill other humans is sufficiently important that we are willing to accept more death as a result.

Mark Avrum Gubrud

Mr. Blanchard accuses proponents of an AWS ban of being “willing to accept more death” than would take place if AWS were allowed to be used. However, this is certainly not our intent. We believe firmly that a world in which killer robot armies confront and fight each other will almost certainly be more dangerous and deadly than one in which nations have recognized and accepted the principles of human control and responsibility in the decision to use violent force, of human dignity and sovereignty, as well as the urgent need to avoid an arms race that points only to destabilization and the loss of human control, and have therefore agreed to stop that before it goes any further. This is not a philosophical debate; it is a matter of human and global security. The fact that the use of robots (both remote controlled and autonomous) lowers the political and economic costs of waging asymmetric war against others lacking access to this technology is one way in which they will likely result in more, not less, killing. People like to compare drone strikes to carpet bombing, but if drones were not available, would the United States carpet bomb Pakistan, …

Benjamin Davis

Don’t let Skynet become self-aware – http://youtu.be/dpwWYOE3Y9o
Best,
Ben