Battlefield Robotics, a Very Brief Introduction

Many of us who work in the areas of the laws of war and armed conflict have been watching the development of technology because, if history is any guide, changes in technology are a big, quite possibly the biggest, long-term historical driver of changes in the laws of war. Think of the development of the crossbow, the musket, the machine gun, the airplane, and so on. We are in an era of accelerated technological change, and naturally many of us are scanning the horizon to identify the technologies that will cause potential disruption and change in international humanitarian law.

One of these areas is cyberwarfare, and I have lots of friends who are taking advantage of their backgrounds in cybertechnologies to consider the question of whether new rules, or adaptations of old rules, will be needed to address, for example, the questions of dual use (civilian and military) of the internet in war. My own interest is somewhat different; I have been drawn, over the last year or two, into questions of robotics on the battlefield (and what follows is very loosely taken from my draft paper and several posts at my blog).

Battlefield robotics is in fact well underway. It is driven by several different pressures. One is the US military’s constant search for force multipliers – a capital-intensive military, indeed the most capital-intensive military in the history of the planet, is always looking for ways to use technology to make each individual soldier more effective. A second is the search for force protection – individual soldiers in the Western armies are about as far from cannon fodder as can be, and ways to protect them individually are an important priority. A third, to which I will return in a later post, is the enemy’s use in asymmetric warfare of violations of the existing laws of war – hiding in and among civilians, the use of civilian shields, and so on – to which the US military seeks to respond with technological counters.

Considered from an ethical and legal perspective, battlefield robotics has several layers, introducing new ethical and legal questions at each step. The first is the use of robots for observation and surveillance. This does not really raise significant new questions, and indeed surveillance performs a vital role in allowing the use of battlefield force to be more discriminating. But these machines have remarkable technological capabilities – miniature spy insects that can fly, act in groups, and are autonomous in the sense that they do not require a human operator to figure out in real time where to go and what to do. Here is a computer rendering of a spy spider:

A second layer carries robotics beyond surveillance and into the use of weapons. Air platforms such as the Predator drone, initially used for surveillance but now equipped with missiles and other weapons, are also well underway. (Bryan M. Carney, “Air Combat by Remote Control,” Wall Street Journal, opinion, Monday, May 12, 2008, is a good short newspaper introduction, but there are lots of press articles out there. In general, a good way to keep a newspaper-level handle on the technology’s development is to read Popular Mechanics’ robotics columns online.)

The key legal and ethical feature of these machines, however, is that they are remote-operated. In that sense they are robotic, but they are operated by a human being in real time – even if that human being is somewhere far away. The military efficiencies of drone aircraft are hard to overstate – they are smaller and cheaper, and they do not require keeping a large part of your air crews and equipment down for rest and maintenance for as long as human-piloted aircraft do. You can keep a surveillance drone in the air for long periods of time … there are simply enormous advantages to remote-operated aircraft. But again, given that the machine’s weapons are operated in real time by a human being, the ethical and legal questions are not so many (there are some, but I will skip over them). Here is a US Air Force photo of a Predator:

A third layer carries robotics from air drones to the ground. In 2007 the US military deployed to Iraq, for the first time, a remote-operated ground vehicle with a weapon mounted on top, for field testing. (It has since been withdrawn for further work.) Here is a photo of the SWORDS system:

The lineage of this machine is striking – it evolved from technology developed for landmine and IED removal. The difference is that it now has a weapon mounted on top. But again, it is remote-operated by a human being in real time.

Where the rubber meets the road, ethically and legally, of course, is the fourth layer in battlefield robotics development – the development of autonomous battlefield robots, robots that are not simply remote-operated by a human in real time, but robots that are programmed with independent decision-making in the use of weapons. We are a long way from that point, if we ever actually reach it, and there are plenty of arguments that it is a line that should not be crossed. But there is no question that this is the direction of technological research.

Part of the reason is that autonomous robot decision-making is not merely a feature of military research; it is a central proposition of robotics research generally, and these military applications are spinoffs from a central R&D drive. In a certain sense, indeed, the development of autonomous battlefield robots, with independent control over weapons, draws upon the obverse of, for example, Japanese work on caregiver robots for the elderly: the decision by a robot caregiver whether or not to call 911 is not unrelated to the decision by a battlefield robot whether or not to fire a weapon.
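(A purely illustrative aside for readers who like to see things concretely: the sketch below is my own invention – the class names, thresholds, and numbers correspond to no real system, Japanese or American – but it shows the shared skeleton the analogy rests on: sense the world, estimate what is happening, and cross a threshold into an action that cannot easily be undone.)

```python
# Illustrative only: a common decision skeleton shared by a caregiver robot and a
# hypothetical armed robot. All classes, thresholds, and numbers are invented for
# this sketch; no real system is described here.
from dataclasses import dataclass

@dataclass
class Observation:
    confidence: float   # how sure the robot is about what it is seeing (0..1)
    severity: float     # how serious the situation appears to be (0..1)

def should_act(obs: Observation, confidence_floor: float, severity_floor: float) -> bool:
    """Generic rule: act only when both certainty and seriousness cross a threshold."""
    return obs.confidence >= confidence_floor and obs.severity >= severity_floor

# Caregiver instantiation: "call 911" when a health event seems likely and serious.
call_for_help = should_act(Observation(confidence=0.9, severity=0.8),
                           confidence_floor=0.7, severity_floor=0.6)

# Battlefield instantiation: "fire" only under far stricter thresholds.
open_fire = should_act(Observation(confidence=0.95, severity=0.9),
                       confidence_floor=0.99, severity_floor=0.9)

print(call_for_help, open_fire)  # True False
```

The skeleton is identical; what differs is how high one sets the thresholds and how irreversible the action is once taken.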

Let me hold off until future posts this week the specific questions about law and ethics when it comes to robot soldiers (I know, I know, robotics experts don’t like that kind of lurid term, but I find it irresistible, not least because editors everywhere do). One set of issues involves how you translate the criteria that human soldiers must apply – e.g., is it a legitimate target, and what is the proportionality calculation? – into something usable by a machine. But a second set of issues asks whether the use of robots involves anything more than the successful translation of existing laws-of-war principles into a machine-applicable language. Is there anything different about its being a machine? Or is the problem of autonomous battlefield robots, as a matter of law and ethics, simply one of translation – of trying to capture how the ideal soldier would behave? These are some of the questions I want to take up later this week about genuinely autonomous battlefield robots.
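(Since I will be returning to the translation problem, here is one very rough way to picture it – again purely my own sketch, and emphatically not Professor Arkin’s architecture: distinction and proportionality rendered as machine-checkable rules. Every field, threshold, and the very idea of a single numeric “military advantage” are hypothetical; indeed, they are exactly the contested parts.)

```python
# Hypothetical sketch of a distinction/proportionality check for an autonomous
# weapon. Nothing here reflects a real targeting system; the thresholds and the
# notion of commensurable "harm" and "advantage" numbers are assumptions.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    HOLD = "hold fire"
    REFER = "refer to human operator"
    ENGAGE = "engage"

@dataclass
class TargetAssessment:
    combatant_confidence: float     # estimated probability the target is a lawful military objective
    expected_civilian_harm: float   # estimated incidental harm (arbitrary units)
    military_advantage: float       # anticipated advantage (same units -- the fiction)

def loac_check(t: TargetAssessment,
               distinction_floor: float = 0.95,
               proportionality_ratio: float = 1.0) -> Decision:
    # Distinction: if identification is not near-certain, do not engage.
    if t.combatant_confidence < distinction_floor:
        return Decision.HOLD
    # Proportionality: expected harm excessive relative to advantage -> escalate to a human.
    if t.expected_civilian_harm > proportionality_ratio * t.military_advantage:
        return Decision.REFER
    return Decision.ENGAGE

print(loac_check(TargetAssessment(0.97, expected_civilian_harm=3.0, military_advantage=1.0)))
# Decision.REFER -- the machine defers the judgment it cannot make
```

The point of the sketch is where the hard questions hide: something still has to supply the confidence estimate and the two numbers being compared, and that is precisely the judgment the laws of war currently assign to a human commander.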

Meanwhile, if you would like some further reading, some of the most fascinating work in the area of ethics and law applied to autonomous battlefield robots is being done by Professor Ronald Arkin at Georgia Tech. (He is completing a book on the subject due out, I believe, next year, and to judge from his several papers, it should be very, very interesting.)

Here are links to two reports by Professor Arkin dealing with the ethical and legal issues and their translation into machine programming:

http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf

http://www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf

And finally, Jason Borenstein, also at Georgia Tech, has a very interesting new paper out in a Bepress Journal on ethics and autonomous battlefield robots – requires a Bepress subscription, but here is the abstract page.

With this as an introduction, I will put up some additional posts going to particular ethical and legal issues that I see arising in the development of autonomous battlefield robots. (Thanks Chris and Peggy for help with the images! And welcome Instapundit readers and thanks Glenn Reynolds for the Instalanche! I will be adding posts over the week dealing with issues of ethics and law re autonomous battlefield robots, and will link them in a post chain – check back over the week if you’re interested.)

DensityDuck

“the decision by a robot caregiver whether or not to call 911 is not unrelated to the decision by a battlefield robot whether or not to fire a weapon.”

Although presumably the robot can communicate with a central database to determine the severity of a patient’s condition. The robot won’t have to “call 911”; it will just monitor the patient’s temperature, pulse, respiration, skin color, level of activity, level of cognition, etc. and summon an ambulance if needed. (Subject to Central Authority approval, of course. Sorry Citizen, but your Utility Class is only Category D, and the Five-Year Budget Plan calls for a reduction in that Category. Please let me know if I can administer a mild pain-reliever drug to ease your expiratory activity. Would you like to fill out a Customer Service survey before you depart?)

Benjamin Davis

Just trust Skynet. “I, Robot” presented the risk of inversion of the three laws. The problem is whether we can map in binary logic enough information (X) to deal with the novel battlefield setting (X+1).

Best,

Ben

Arthur

> Where the rubber meets the road, ethically and legally, of course, is the fourth layer in battlefield robotics development – the development of autonomous battlefield robots, robots that are not simply remote operated by a human in real time, but robots that are programmed with independent decision-making in the use of weapons.

I think that line was crossed in WW2 – either with the introduction of acoustic homing torpedoes, or with naval mines that were pressure-sensitive and would detonate under large ships while ignoring smaller ones. That second weapon is actually making a decision – just as a guard would make a decision to fire on a human and ignore a stray dog.

Matthew Gross

How would one prosecute a robot for war crimes? Even an intelligent, autonomous robot wouldn’t really understand the situation, or anything beyond reacting to certain stimuli.

You could prosecute the people who deployed it, of course, but would robot misbehavior given unanticipated scenarios (or damage to its sensors) really be actionable?

Wouldn’t any action against a robot on such matters be prevented by the fact the robot cannot possibly possess intent?