Charli Carpenter on Battlefield Robots, Lawfare, and Norm Entrepreneurship

Professor Charli Carpenter (of the UMass Amherst Political Science Department) and I had a lovely conversation over the weekend about battlefield robots.  Well, actually it was an interview for a project of hers, so she let me do pretty much all the talking.  She has now posted some thoughts of her own in highly interesting, highly recommended (for that small chunk of the world interested in battlefield robotics and the law and ethics of war, anyway) posts at two different blog sites.  (I mix up the two posts somewhat in my discussion below.)

The first post, at Complex Terrain Lab, deals with some issues that I’ve raised here at Opinio Juris in the past – viz., that unmanned battlefield robots in considerable part represent an attempt to overcome the asymmetry of war created by an adversary that pays little or no attention to the rules of war and indeed exploits them to its advantage.  Because we have largely given up the idea of reciprocity as a condition for our behavior, we seek a way to overcome the asymmetry through technology.  Thus, we respond to unlawful behavior by seeking to create new technology.  My point in the earlier post was simply that an adversary can usually alter its behavior to maintain the asymmetry faster than we can come up with new technology to counter it.

Charli’s post goes into some other important areas, such as the law governing inherently indiscriminate weapons systems under Protocol I and elsewhere.  She remarks that I am agnostic on the question of whether the new technologies will eventually turn out to be more or less discriminating than human soldiers, and that is correct.  What I mean by agnostic here is that I do not have any way to know now whether the technologies available decades and decades from now will, for example, be more or less discriminating than the judgments of human soldiers.  It might go in either direction.  I do not see that as a reason not to develop the technologies.

Moreover, as I think I observed in my first post here on this subject, the fundamental logic of the decisionmaking is, in any case, not very different from what we will unquestionably seek to develop in such things as care for the elderly or medical decisionmaking.  I see no reason why one would simply announce in advance that a technology must necessarily be incapable of discrimination because a human being is not at the trigger to make the firing decision at that moment.  It depends upon the nature of the technology.  Nor do I think that one can simply apply some precautionary principle in any strong sense – here I essentially accept Sunstein’s critique of it in Worst Case Scenarios.  It is not as if there were no opportunity costs to failing to develop robotic systems that might have elements of discrimination at least as good as those of human beings; moreover, shifting to a battlefield with fewer humans on it might have benefits of its own.

Finally, however, I do think that an under-examined factor is that militaries other than the US would likely deploy autonomous firing systems well in advance of having proper discrimination logics in place, if such logics could even be created; the effect is that the US might find itself having to develop counters to such robotic systems even before it had an autonomous system it was willing to deploy.

The second post is at the Duck of Minerva blog, and it addresses the question of norm development in all this.  As a general historical matter, the separation of arms control from humanitarian law made, and makes, a lot of sense to me.  Seeking to control behavior in war by controlling weapons systems has seemed to me a pretty difficult historical proposition, and too much attention to gas and then landmines seems to me distorting rather than revealing.  The reason is that as a pure moral matter – and even more as a matter of fundamental rights, in the way the human rights understanding of the laws of war would have it (largely mistakenly, in my view, but anyway) – going after this weapons system rather than that one will always have a certain amount of arbitrariness to it.  There are many ways of getting killed in war; as a moral proposition, the issue is surely behavior rather than tools.  Focusing on tools is a good idea – for negotiators who can approach this not from a standpoint of human rights but as a matter of negotiated, marginal reductions in such things as nuclear inventories.

This shows up in the problem of what makes a weapons system inherently incapable of being aimed.  Whether it is or is not is largely a question of intent, on the one hand, and the limits of technology, on the other.  We use terms like “inherently,” but the fact is that we really mean the limits of technology at that moment.  This then raises another very difficult problem: if these things are defined by technology, and by the availability of technology to one side but not the other – the US versus al Qaeda in Iraq, say – then does the principle of military necessity say that the standards are different for the two sides?  This seems to be where groups like HRW are going, but it is a very dangerous path.  The ICRC seems less headed that way, insisting that the on-the-ground, actual, this-weapon-or-that standard be the same for the two sides – but I wonder how long it will maintain that stance as against the arguments:

  • first, that the US, because it has the possibility of using more discriminating weapons, must therefore use them or be in violation (and that it must also research such weapons, develop them, produce them, have them available in its arsenal, and not use anything else for fear of liability: isn’t that where the international criminal lawyers will drive things, in courts fundamentally uninterested in the logistical problems of superpowers?); and,
  • second, that the insurgent side, because it lacks such weapons, has no obligation to use them.

This is not a good structuring of incentives, to say the least.  I realize that for many people in this field, putting burdens on the US in how it fights is immaterial.  But the ICRC, at least, would surely take note that it is not such a good idea over the long run to erode the concept of reciprocity down to zero.  At some point, one must have great faith that the authority of post hoc tribunals to police the laws of war can replace the parties’ own demands for reciprocity as a mechanism for enforcing the rules; when the referees reach the point of saying, in effect, one set of rules for you and another set for you, the sense of legal obligation to follow any of them runs a not insignificant risk of collapsing.  One trusts that the ICRC, with a time horizon longer than that of the more recent norm entrepreneurs, will take this into account.

But one of the problems with entrepreneurs, in norms or anything else, is that they have a tendency to change the product frequently.  That is not a problem with consumer products, but in the case of law, entrepreneurial change – especially when it comes wrapped in that special human rights wrapping paper, as it were: absolute, universal, apparently timeless demands issued by Kant-addressing-Eternity, which then change on a dime, something I once called Human Rights Watch’s tendency toward “serial absolutism” – tends to undermine the claim of law.  “Norm entrepreneurship” is always used as a complimentary term; should it always be so, I wonder?

But one thing that Charli’s posts point out better than I have done here is that people like us, interested in battlefield robotics, are not simply geeks interested in things that run around by themselves and possibly fire weapons via autonomous systems.  The robotics issues, in many ways much more than, say, that other hip and cool new technology issue in the law of war, cyberwar, raise fundamental issues about the combatant-noncombatant distinction, and much, much else besides.  Well, I mean we’re also geeks.  But in a good way.  Like Glenn Reynolds.

Topics: International Human Rights Law, National Security Law, North America
Charles Gittings

Wow…

“viz., that unmanned battlefield robots in considerable part represent an attempt to overcome the asymmetry of war created by an adversary that pays little or no attention to the rules of war and indeed exploits them to its advantage.”

That’s not an issue, it’s just nonsense and confusion on your part.

Seriously:

What exactly is the problem here?

And what adversary are you talking about?

Dick Cheney?

Your evil twin maybe?

And how exactly did they make the sun stand still while water flowed uphill and the substance of the moon fermented into green cheese??

YOU are smarter than this, Ken. Dig it:

The most dangerous weapon in existence is the human mind — everything else is just an accessory.

Now try using yours for something more constructive than self-delusion and sour grapes.