Guest Post: Autonomous Weapons at Chatham House: It’s Bentham versus Kant

by Charles Blanchard

[Charles Blanchard served as General Counsel of the U.S. Air Force from 2009-2013 and as General Counsel of the Army from 1999-2001, and is currently a partner at Arnold & Porter LLP.  He was a panelist at a Chatham House conference on autonomous weapons.]

In the past year, proposals for an autonomous weapons ban have gone from a fringe notion to an agenda item for this year's Convention on Conventional Weapons.  Last week, I joined a diverse group at a Chatham House conference to discuss the issue.  The participants included advocates of a ban, technologists on both sides of the issue, government officials, and Law of War experts.  The conference was surprisingly useful in revealing a good deal of common ground and, more importantly, in clarifying the philosophical differences between proponents and skeptics of a ban.

The degree of agreement was surprising, and very helpful:

  • There was broad agreement that, except in truly unique battle spaces (where the presence of civilians can be ruled out), deployment of autonomous weapon systems today would not be consistent with the requirements of International Humanitarian Law (IHL).  One panel concluded that current technological limits, combined with IHL, have effectively created a temporary moratorium on the deployment of completely autonomous weapon systems.
  • Even the proponents of a ban would not oppose completely autonomous weapon systems that targeted other machines—such as might be the case with missile defense systems in the future—assuming that the technology develops sufficiently to meet IHL requirements. To be clear, however, the proponents of a ban would oppose even a purely defensive system that targeted tanks, airplanes or other platforms with human beings on board.
  • While the proponents of a ban would also ban development, they seem to agree that a very narrow definition of “development” is appropriate.  They would not attempt to limit research and development of even dual-use civilian applications of autonomy, and would not even oppose development of semiautonomous weapons technology as long as a human remains in the loop.  The development ban would apply only to the creation of completely autonomous systems.

So what was the disagreement?  It centered on the following question:  if technology ever developed to the point that machines were more capable than humans at complying with IHL, should autonomous weapons be banned?  The skeptics of a ban, myself included, argued that it would be troubling to accept the additional civilian casualties that would result from a ban.  The proponents, on the other hand, argued that it would violate notions of human dignity to let a machine decide whom to kill.

While one advocate calls the aversion to allowing machines to kill the “yuck” factor, a panel of ethicists and philosophers illuminated what this difference is really about:  the skeptics (like me) are advocating a utilitarian ethical scheme (the greatest good for the greatest number), while the proponents are applying Kant’s categorical imperative that no human being should be used merely as an instrument.  While a utilitarian would focus on whether civilian (and military) casualties would be fewer if autonomous weapons were used, a Kantian would object to removing humans from lethal decision-making altogether.  One panelist noted that Germany’s highest court had rejected a purely utilitarian view in overturning a law that allowed hijacked planes to be shot down.  Even though the passengers on the plane would likely die anyway, and shooting down the plane would save other lives, the German court concluded that the statute violated human dignity.

So what are we to make of this philosophical dispute?  To a great degree, warfare and the laws of war have arisen out of utilitarian philosophical frameworks.  Indeed, one could argue that the very use of military force against other human beings (even for a righteous cause) violates Kant’s categorical imperative.  And the IHL concept of proportionality (under which an attack on a military target is permissible even if there will be civilian casualties, as long as those casualties are “proportional” to the military value of the target) is expressly utilitarian and inconsistent with Kant.

Nonetheless, the development of IHL is itself a history of a battle of humanitarian concepts (Kant) against utilitarianism. Arguably, the Geneva Gas Protocol in 1925 was a victory of humanitarian concepts over pure utilitarianism. And the more recent bans on land mines and cluster munitions are also clearly motivated by nonutilitarian concepts.

Now obviously there is much more at stake here than this narrow philosophical dispute.  Purely utilitarian concerns about strategic stability, based on the fear that fully autonomous systems will make it more likely that countries go to war, can also weigh against these weapons.  But at its root, the debate is whether the principle that only humans should decide to kill other humans is sufficiently important that we are willing to accept more death as a result.

http://opiniojuris.org/2014/03/04/guest-post-blanchard-autonomous-weapons-chatham-house-bentham-versus-kant/

3 Responses

  1. Mr. Blanchard accuses proponents of an AWS ban of being “willing to accept more death” than would take place if AWS were allowed to be used. However, this is certainly not our intent. We believe firmly that a world in which killer robot armies confront and fight each other will almost certainly be more dangerous and deadly than one in which nations have recognized and accepted the principles of human control and responsibility in the decision to use violent force, of human dignity and sovereignty, and the urgent need to avoid an arms race that points only to destabilization and the loss of human control, and have therefore agreed to stop that race before it goes any further. This is not a philosophical debate; it is a matter of human and global security.
    The fact that the use of robots (both remote-controlled and autonomous) lowers the political and economic costs of waging asymmetric war against others lacking access to this technology is one way in which they will likely result in more, not less, killing. People like to compare drone strikes to carpet bombing, but if drones were not available, would the United States carpet bomb Pakistan, Yemen, Somalia…?
    It is of course possible to construct scenarios in which the use of an autonomous weapon system might result in fewer deaths than the use of soldiers and human-controlled weapons. However, such scenarios are mostly dubious and artificial, and often rest on fatuous assertions, such as the claim that robots can make use of sensors not available to humans (as if humans cannot make use of radar or infrared imagery, etc.) or that robots can take more risks (as if remotely operated robots can’t do the same).
    As to the last point, it is often argued that communications links are vulnerable, but this applies mainly in the case of state-vs.-state warfare, and especially aggressive or preemptive war, where robots might be used for strategic attack.
    Mr. Blanchard does acknowledge the existence of “concerns about strategic stability,” but then quickly sidelines them with the assertion that this is not what “the debate here” is about. I beg to differ.
    Principles of humanity are the strongest foundation for a ban on killer robots, but the most compelling reason why we need one is to avoid another race to oblivion, this time with any number of nations getting in on the game.

  2. Don’t let Skynet become self-aware – http://youtu.be/dpwWYOE3Y9o
    Best,
    Ben

  3. I share Dr. Gubrud’s deep skepticism that even in a few decades we will have autonomous weapons that would be better than humans at making IHL judgments, or military judgments for that matter.  This deep skepticism is shared by most US military leaders, as the recently announced defense budgets (with no programs of record to build autonomous weapons) show.
    My purpose in writing this post was not to accuse proponents of a ban of anything, but instead to discuss the deep philosophical viewpoints that underlie some of the disputes over a ban.  At the Chatham House conference, several panelists were quite open that, because the value of human dignity requires a human in the loop on so momentous a decision as whom to kill, a ban should be in place even when fewer civilian casualties would result from the use of autonomous weapons.
    The strategic stability issues received much less attention at the conference, but they are important to consider.  The lack of attention is a little surprising, because concerns over strategic stability were a major reason why the Department of Defense decided to issue its policy on autonomous weapons.  The authors of that document wanted senior policy makers to consider stability issues before moving forward with autonomous technologies.  This is, indeed, an important issue.
    I think these issues are much more complex, however, than Dr. Gubrud suggests.  Any new technological development that helps one side in warfare can be very destabilizing.  This was certainly the case with the crossbow in the Middle Ages, the tank after WWI, and precision airpower and stealth in recent decades.  Indeed, virtually any new military technology will have the effect of decreasing the risk to one side’s armed forces.  That alone cannot be the basis for a ban, and outside of nuclear weapons, efforts to ban weapon technology for stability reasons have been ineffective.
    It is also not entirely obvious that in all geopolitical circumstances autonomous weapons will be destabilizing.  For example, purely defensive systems that can defeat missile and air attacks can actually reduce the possibility of conflict.
