Professor Charli Carpenter (of UMass Amherst Political Science Department) and I had a lovely conversation over the weekend about battlefield robots. Well, actually it was an interview for a project of hers, so she let me do pretty much all the talking, which was lovely for me. She has now posted some thoughts of her own, in two highly interesting, highly recommended (for that small chunk of the world interested in battlefield robotics and the law and ethics of war, anyway) posts at two different blog sites. I talk a little about them in what follows here. You don’t have to be interested in battlefield robots to be interested in these posts – they address very important fundamental questions in the laws of war and the norm entrepreneurs who try to influence them.
When it comes to battlefield bots and the law, you can take satisfaction that you will have Heard It Here First, unless, of course, you also read Instapundit. As I’ve said in earlier posts on this subject, the vast, vast majority of the research into battlefield robots has nothing to do with autonomous weapons firing platforms – which is, of course, where the biggest ethical and legal issues arise – but with surveillance and independent target scanning and identification. But there are some other possible roles for robotics on the battlefield, including things like extraction of the wounded or delivery of supplies. A lot of the interest is less about autonomous battlefield robots as such than the multiple uses of unmanned vehicles on the battlefield.
Battlefield Robots as a Technological Response to ‘Lawfare’, and the Limits to Technological Counters to Bad Behavior
The Ethically Ideal Autonomous Battlefield Robot as Ethically Ideal Human Soldier? And What Is the Moral Worth of a Human Soldier’s Life?
Many of us who work in the areas of the laws of war and armed conflict have been watching the development of technology because, if history is any guide, changes in technology are a big driver, quite possibly the biggest long-term historical driver, of changes in the laws of war. Consider the development of the crossbow, the musket, the machine gun, the airplane, and so on. We are in an era of accelerated technological change, and naturally many of us are scanning the horizon to identify the technologies that will potentially disrupt and change international humanitarian law.
One of these areas is cyberwarfare, and I have lots of friends who are taking advantage of their backgrounds in cybertechnologies to consider the question of whether new rules, or adaptations of old rules, will be needed to address, for example, the questions of dual use (civilian and military) of the internet in war. My own interest is somewhat different; I have been drawn, over the last year or two, into questions of robotics on the battlefield (and what follows is very loosely taken from my draft paper and several posts at my blog).
Battlefield robotics is in fact well underway. It is driven by several different pressures. One is the US military’s constant search for force multipliers – this is a capital-intensive military, indeed the most capital-intensive military in the history of the planet, looking for ways to use technology to make each individual soldier more effective. A second is the search for force protection – individual soldiers in the Western armies are about as far from cannon fodder as can be, and ways to protect them individually are an important priority. A third, to which I will return in a later post, is the enemy’s use in asymmetric warfare of violations of the existing laws of war – hiding in and among civilians, using civilian shields, etc. – to which the US military seeks to respond with technological counters.
Considered from an ethical and legal perspective, battlefield robotics has several layers, introducing new ethical and legal questions at each step. The first is the use of robots for observation and surveillance. This does not really raise significant new questions, and indeed performs a vital role in allowing battlefield force to be used more discriminatingly. But these machines have remarkable technological capabilities – miniature spy insects that can fly, act in groups, and are autonomous in the sense that they do not require a human operator to figure out in real time where to go and what to do. Here is a computer rendering of a spy spider:
A second layer carries robotics beyond surveillance and into the use of weapons. Air platforms such as the Predator drone, initially used for surveillance but now equipped with missiles and other weapons, are also well underway. (Bryan M. Carney, “Air Combat by Remote Control,” Wall Street Journal, opinion, Monday, May 12, 2008, is a good, short newspaper introduction, but there are lots of press articles out there. In general, a good way to keep a newspaper-level handle on the technology’s development is to read Popular Mechanics’ robotics columns online.)
The legal and ethical feature of these machines, however, is that they are remote-operated. In that sense they are robotic, but they are operated by a human being in real time – even if that human being is somewhere far away. The military efficiencies of drone aircraft are hard to overstate – they are smaller and cheaper, and they do not require keeping a large part of your air crews and equipment down for rest and maintenance for as long as human-flown aircraft do. You can keep a surveillance drone in the air for long periods of time … there are simply enormous advantages to remote-operated aircraft. But again, given that the machine’s weapons are operated in real time by a human being, the ethical and legal questions are not so many (there are some, but I will skip over them). Here is a US Air Force photo of a Predator:
A third layer carries robotics from air drones to the ground. In 2007 the US military deployed to Iraq, for field testing, its first remote-operated ground vehicle with a weapon mounted on top. (It has since been withdrawn for further work.) Here is a photo of the SWORDS system:
The evolution of this machine is striking – it grew out of technology developed for landmine and IED removal. The difference is that it now has a weapon mounted on top. But again, it is remote-operated by a human being in real time.
Where the rubber meets the road, ethically and legally, of course, is the fourth layer in battlefield robotics development – the development of autonomous battlefield robots, robots that are not simply remote operated by a human in real time, but robots that are programmed with independent decision-making in the use of weapons. We are a long way from that point, if we ever actually reach it, and there can be lots of arguments that it is a line that should not be crossed. But there is no question that this is the direction of technological research.
Part of the reason is that autonomous robot decisionmaking is not merely a feature of military research; it is a central proposition of robotics research generally, and these military applications are spinoffs from a central R&D drive. In a certain sense, indeed, the development of autonomous battlefield robots with independent control over weapons is the obverse of, for example, Japanese work on caregiver robots for the elderly: the decision by a robot caregiver whether or not to call 911 is not unrelated to the decision by a battlefield robot whether or not to fire a weapon.
Let me hold off for future posts this week on specific questions of law and ethics when it comes to robot soldiers (I know, I know, robotics experts don’t like that kind of lurid term, but I find it irresistible, not least because editors everywhere do). One set of issues involves how you translate the criteria that human soldiers must apply – e.g., is it a legitimate target, and what is the proportionality calculation? – into something usable by a machine. But a second set of issues asks whether the use of robots involves anything more than the successful translation of existing laws-of-war principles into a machine-applicable language. Is there anything different about its being a machine? Or is the problem of autonomous battlefield robots, as a matter of law and ethics, simply one of translation – of achieving in a machine how the ideal soldier would behave? These are some of the questions I want to take up later this week about genuinely autonomous battlefield robots.
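To make the first set of issues concrete, here is a deliberately crude, purely hypothetical sketch of what “translating” the soldier’s criteria into machine-applicable rules might look like. Every name and number in it is invented for illustration – real distinction and proportionality judgments resist exactly this kind of reduction, which is part of the point:

```python
# Hypothetical toy sketch: the classic laws-of-war criteria reduced to code.
# All fields, names, and thresholds are invented for illustration only; they
# are not drawn from any real targeting system or doctrine.

from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool             # distinction: a lawful military objective?
    military_value: float          # anticipated military advantage (arbitrary units)
    expected_civilian_harm: float  # expected incidental civilian harm (same units)


def may_engage(target: Target, proportionality_ratio: float = 1.0) -> bool:
    """Engage only if both (crudely stated) criteria are satisfied:
    (1) distinction: the target must be a legitimate military objective;
    (2) proportionality: expected civilian harm must not be excessive
        relative to the anticipated military advantage."""
    if not target.is_combatant:
        return False  # distinction fails: never engage non-combatants
    # Crude proportionality test: harm must not exceed value times the ratio
    return target.expected_civilian_harm <= target.military_value * proportionality_ratio
```

Even this toy version exposes where the real difficulty lies: the hard work is hidden inside the inputs – who or what decides `is_combatant`, how the harm and value estimates are produced, and who sets the ratio – which is precisely the second set of issues above.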
Meanwhile, if you would like some further reading, some of the most fascinating work in the area of ethics and law applied to autonomous battlefield robots is being done by Professor Ronald Arkin at Georgia Tech. (He is completing a book on the subject due out, I believe, next year, and to judge from his several papers, it should be very, very interesting.)
Here are links to two reports from Professor Arkin dealing with the ethical and legal issues and their translation into machine programming:
And finally, Jason Borenstein, also at Georgia Tech, has a very interesting new paper out in a Bepress Journal on ethics and autonomous battlefield robots – requires a Bepress subscription, but here is the abstract page.
With this as an introduction, I will put up some additional posts going to particular ethical and legal issues that I see arising in the development of autonomous battlefield robots. (Thanks Chris and Peggy for help with the images! And welcome Instapundit readers and thanks Glenn Reynolds for the Instalanche! I will be adding posts over the week dealing with issues of ethics and law re autonomous battlefield robots, and will link them in a post chain – check back over the week if you’re interested.)