“Guilty Robots” in NYT Magazine Ideas 2009 Issue

by Kenneth Anderson

One of my favorite issues of the New York Times Magazine is its “year in ideas” issue, which comes annually in December.  Because OJ is a repository of things related to battlefield robotics and law and ethics, I wanted to flag for your attention the item by Dara Kerr, “Guilty Robots.”

[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.

“I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says.

The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement.

The “guilty” part comes from a feature of Professor Arkin’s ethical architecture, in which certain parameters cause the robot to become more “worried” about the rising calculations of collateral damage and other such factors.

After considering several moral emotions like remorse, compassion and shame, Arkin decided to focus on modeling guilt because it can be used to condemn specific behavior and generate constructive change. While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt level increases. As the level grows, the robots may choose weapons with less risk of collateral damage or may refuse to fight altogether.
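To make the mechanism a bit more concrete, here is a minimal sketch of what a guilt-scaling constraint of this kind might look like in code. It is purely illustrative: the damage fields, weights, thresholds, and weapon list are my own assumptions, not anything taken from Professor Arkin’s actual software.

```python
# Toy illustration only: a "guilt" accumulator in the spirit of the architecture
# described above. All names, weights, and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DamageReport:
    noncombatant_casualties: int
    civilian_property_hits: int

class GuiltModel:
    # Weapons ordered from highest to lowest assumed collateral-damage risk.
    WEAPONS = ["missile", "grenade", "rifle"]

    def __init__(self, refusal_threshold: float = 1.0):
        self.guilt = 0.0
        self.refusal_threshold = refusal_threshold

    def update(self, report: DamageReport) -> None:
        # Guilt grows with observed harm to noncombatants and civilian property.
        self.guilt += 0.3 * report.noncombatant_casualties
        self.guilt += 0.1 * report.civilian_property_hits

    def select_weapon(self) -> Optional[str]:
        # Rising guilt restricts the robot to lower-risk weapons; past the
        # threshold it refuses to engage at all.
        if self.guilt >= self.refusal_threshold:
            return None
        index = min(int(self.guilt * len(self.WEAPONS)), len(self.WEAPONS) - 1)
        return self.WEAPONS[index]

model = GuiltModel()
model.update(DamageReport(noncombatant_casualties=1, civilian_property_hits=2))
print(model.select_weapon())  # a lower-risk option, or None once guilt is too high
```

The only point of the toy is the shape of the rule: accumulated “guilt” monotonically narrows the set of permissible weapons until it forbids firing altogether.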

As I have said several times on this blog, and in various talks and presentations, I am agnostic as to whether, at some point in the future, robots might prove to be ethically superior to humans in making decisions about firing weapons on the battlefield.  When I say agnostic, I mean genuinely agnostic – it seems to me an open question where the technology goes, and in, say, a hundred years, who can say?  For one thing, I can fully imagine that roboticized medicine, surgery, and operations will very possibly have reached the point where it might well be presumptive malpractice for the human doctor to override the machine.  It is not impossible for me to imagine – far from it – a time in which it would be a presumptive war crime for the human soldier to override the ethical decisions of the machine.

But maybe not.  Although I am strongly in favor of the kinds of research programs that Professor Arkin is undertaking, I think the ethical and legal issues of warfare, whether the categorical rules or the proportionality rules, involve questions that humans have not managed to answer at the conceptual level.  Proportionality, and what it means to weigh up radically incommensurable goods – military necessity and harm to civilians, for example – is one place to start.  One reason I am excited by Professor Arkin’s attempt to perform these functions in machine terms, however, is that the detailed, step-by-step project forces us to think through difficult conceptual issues of human ethics at a granular level that we might otherwise skip over with some quick assumptions.  Programming does not allow one to do that quite so easily.

And it is open to Professor Arkin to reply to the concern that humans don’t have a fully articulated framework, even at the basic conceptual level, for the ethics of warfare: “Well, in order to develop a machine, I don’t actually have to address those questions or solve those problems.  The robot doesn’t have to have more ethical answers than you humans – it just has to be able to do as well, even with the gaps and holes.”

Many OJ readers will by now be familiar with Peter W. Singer’s widely noticed Wired for War.  But I would suggest following it up with Professor Arkin’s own new book, Governing Lethal Behavior in Autonomous Robots, particularly now that Amazon has dropped the price from $60 to $40.

I guess I should also add that this discussion is about battlefield robotics in the sense of “autonomous” firing systems – not the current robotics question of human-controlled but remote-platform unmanned combat vehicles, the Predators and drones.  I will try to put up a post soon noting several new papers on targeted killing and the UCV-drone issues in international law, including new papers on SSRN by Mary Ellen O’Connell, Jordan Paust, and others – I’ll try to do a roundup of recent papers on the subject (once past grading my corporate finance and IBT finals, that is).

Robotics and the Law Panel at Stanford Law School

by Kenneth Anderson

If you are going to be around Palo Alto next Thursday evening, you might consider attending a panel discussion on robotics and law at Stanford Law School.  I’ll be on a panel alongside some very interesting and knowledgeable folks taking up varied aspects of robotics (my particular interest is robotics and war, but the panel will be considering many areas of robotics).  The particulars are below the fold.  (I’ll also be giving a lunch talk/discussion that same day sponsored by various student organizations at SLS specifically on robotics and armed conflict.)

Cyborg Insects

by Kenneth Anderson

Technology marches on, and here we have a demonstration video, on YouTube and Wired’s Danger Room, showing how a flying beetle can be implanted with miniaturized neural electrodes that allow a human operator to stimulate the muscles that cause it to fly to the right or left.  The applications to the battlefield, counterterrorism, etc., are obvious.

These little cyborgs will eventually, I presume, be deployed, first for intelligence gathering at the tactical level – urban battlefields, for example.  Once a way is figured out to load them with a camera or, perhaps, to utilize their own visual inputs, they can be used to figure out who the bad guys are in an apartment or building.  The possibilities for discriminating targeting go up a lot.  Later, someone might figure out a way to attach a little bomb: fly up to the target, have a human operator make a positive ID, and then boom.  There is also a strategic use of these cyborgs – gathering intelligence using thousands and thousands of them, all processed through a central computer, to help identify where terrorists are training or where bin Laden is located, or to carry out many other surveillance tasks that cannot be accomplished now that everyone knows not to put things where they are visible from satellites.

‘Makers of Military Drones Take Off’:

by Kenneth Anderson

So says the headline of a WSJ news article today (Monday, August 24, 2009, B1, by August Cole), noting that unmanned aircraft – drones such as the Predator to us civilians, although the Pentagon seems to prefer UMV – are transforming not just the military, in strategic as well as tactical terms, but defense contracting.  (PopSci ran a story a little while ago on the training of UMV pilots as well.)  The WSJ article notes that the administration’s fiscal 2010 defense budget request “includes approximately $3.5 billion for unmanned aerial vehicles.”  The demand is robust enough that the Pentagon is reaching beyond contracting behemoths such as Lockheed and Boeing to smaller manufacturers, such as General Atomics Aeronautical Systems, Inc., which makes Predators.  (General Atomics, GA, is privately held and so there isn’t stock price information, but it’s an interesting company overall.  I would guess, without knowing, that its private equity investors are happy indeed.)  The WSJ article describes some of the basic economics of manufacture, operation, personnel, and training costs, etc., of the drones ….

Lyin’, Cheatin’ Robots

by Kenneth Anderson

What are the implications for battlefield robotics if robots were capable of developing self-evolving capacities to deceive, lie, and cheat?  PopSci reports on a Swiss robotics experiment with evolving generations of robots engaged in a search-for-yummy-food task, but with a twist – an in-built desire to hide and hoard the food source for themselves.  (I think I was too obscure before; let me update by adding … on the battlefield, you might be considered, as a target, the equivalent of really yummy food!)
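For readers curious about the mechanics, the dynamic the experiment describes – deception paying off once hoarding a resource beats sharing it – can be reproduced with a very simple evolutionary simulation. The sketch below is my own toy illustration, not the Swiss team’s actual setup; the population size, payoffs, and mutation rate are all assumed.

```python
# Toy evolutionary simulation, loosely in the spirit of the experiment above.
# Payoffs, population size, and mutation rate are assumptions for illustration.

import random

POP_SIZE = 100
GENERATIONS = 30
MUTATION_RATE = 0.05

def fitness(signals_honestly, population):
    # Finding food is worth 1.0; honest signalling draws rivals to the source,
    # so the more honest signallers there are, the more the food gets shared.
    honest_count = sum(population)
    if signals_honestly:
        return 1.0 / (1 + 0.1 * honest_count)  # crowded food source
    return 1.0                                 # hoards the food alone

population = [random.random() < 0.5 for _ in range(POP_SIZE)]  # True = honest
for _ in range(GENERATIONS):
    scores = [fitness(robot, population) for robot in population]
    # Fitness-proportional selection with occasional mutation.
    parents = random.choices(population, weights=scores, k=POP_SIZE)
    population = [not p if random.random() < MUTATION_RATE else p for p in parents]

print(f"Honest signallers left after {GENERATIONS} generations: {sum(population)}/{POP_SIZE}")
```

Run it a few times and the honest signallers typically dwindle toward the mutation floor: once the food need not be shared, deception is simply the fitter strategy.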

PW Singer’s Wired for War Discussion at CTLab

by Kenneth Anderson

Complex Terrain Laboratory, where several OJ people sometimes participate, is hosting an online discussion next week on PW Singer’s new book on robotics and war, Wired for War.  We have mentioned this book in the past, and OJ has a number of posts on battlefield robotics from the last year or so.  Singer is participating in the CTLab symposium and, having read his opening post, I expect it to be fascinating.  It is a terrific lineup of participants.  That said, let me comment on why robotics is important to discussions here at Opinio Juris …

Deglobalization and the Road to . . . War

by Peter Spiro

Paul Krugman’s Friday column has to weigh heavily on anyone with a 7-year-old boy. The parallels are clear, at least on the back end. Krugman is hardly the first to play the Norman Angell card. Angell’s ill-timed proclamation of the end of war in the run-up to the Guns of August figures prominently in the opening chapter of Walter Russell Mead’s God and Gold; Mark Movsesian was way ahead of the curve, at 18 Cardozo L. Rev. 1092 (1996). Deep globalization didn’t prove much of a trip wire on the way to WWI. See also this snippet from Robert Keohane in the latest Foreign Policy (“In the 1930s, economic crisis led to Nazism in Germany and militarism in Japan. We must not overlook the threat that global economic crisis could again have malign effects on world politics”).

Let’s hope Moisés Naím and the rest of us who think that globalization is different this time around have the better of the argument. But even assuming conflict along state lines were a probable result of this crash, as it was for the last, there are still major questions about what form that conflict would take. Could we possibly see a return to massive armies hurling themselves at each other on defined battlefields? I’d be willing to engage suggestions to that effect, but it seems intuitively unlikely against the backdrop not just of nuclear weapons but also of the battlefield robots that all Ken Anderson fans will be familiar with. (Nor would it look like the asymmetric warfare we saw during Cold War sideshows and now in Iraq and Afghanistan.) So what’s the alternative? The image that comes to my mind, perhaps only metaphorically, is the London of 1984, in which missiles precipitate on a random basis.

The economy — not terrorism — is now the biggest security threat, and thank goodness that this Administration recognizes it. I wonder how their scenario planning (aka war games) is playing out. Maybe I don’t have to worry about my 7-year-old in any particular way, though I’m not sure how much consolation that is.

FT Review Essay on Battlefield Robots Books

by Kenneth Anderson

Stephen Cave has a very nice short essay at the Financial Times, reviewing three books on battlefield robotics (“The New War Machine,” March 7, 2009), including a discussion of PW Singer’s new book, Wired for War.

More on PW Singer and Battlefield Robots at Wilson Quarterly

by Kenneth Anderson

Chris mentioned earlier the NPR interview with Brookings Institution scholar PW Singer on his new book, Wired for War.  I am naturally reading the book as we speak, but for those wanting a useful, clear, short take from Singer himself, check out the Winter 2009 issue of the Wilson Quarterly, and Singer’s cover article, “Robots at War: The New Battlefield.”  (The whole article appears to be available at the link.  Hooray!  I’ll be commenting on the article in an invited set of letters that the WQ will publish in the next issue.)

Rise, Robots, Rise!

by Chris Borgen

NPR’s Fresh Air with Terry Gross has a great interview with P.W. Singer of Brookings (and coordinator of the Obama campaign’s Defense Policy Task Force) about his new book concerning battlefield robots, Wired for War.  Ken and others have written extensively about the use of battlefield robots on this blog and elsewhere, so I won’t re-hash the various legal, moral, and strategic issues here.  (But do take a moment to look at this creepy video of a four-legged robot.) Instead, I want to highlight a few interesting points from the Singer interview.

What struck me the most was Singer’s answer to a question as to whether there is any unifying theme to his three books, one of which examines the use of child soldiers, a second the rise of private military contractors, and the most recent battlefield robots.  His response was essentially that he is concerned that our assumptions about “what war is” are no longer accurate. In previous eras, if the U.S. was going to enter into prolonged military conflict, we had a declaration of war. We don’t do that anymore.  Our image of the soldier was a uniformed man (typically) fighting on behalf of our country against other uniformed men (typically) in the armed forces of other states. But, as his PMC book has shown, the Bush Administration has actually cut back on what the government handles in combat zones and increasingly outsourced this to private companies. So rather than the citizens of a nation thinking about the possibility of a draft if we engage in war, we maintain an all-volunteer force and contract out for the rest.

Now we are not even using people, but rather machines (some remotely controlled, others actually autonomous) in a variety of missions. We currently have about 5,000 aerial drones in Iraq (some armed, others not) and something like 12,000 ground robots (we started the war with none) that do things ranging from shooting at incoming missiles and mortar fire (!) to sweeping for IEDs (using a robot based on the household cleaning Roomba, believe it or not).  If we are not even sending people to war but machines, how will this affect our decision to go to war?

And, finally, he notes that the people that we are fighting are less and less uniformed soldiers and more and more irregular fighters, including children.

Altogether, he explains that his concern is that the traditional barriers to using force (formal declarations of war, political checks arising from concerns about sending large numbers of people off to fight, etc.) are being lowered to the point where they are essentially just lying on the ground.  He is worried about what this implies for our proclivity to use military force in the future. My summary doesn’t do his argument justice; you really should listen to the interview.

He also notes that, unlike nuclear weapons, the use of robotics does not require a large industrial base.  It is essentially “open source” technology with commodified components. While there are U.S. soldiers sitting in Nevada remotely controlling aerial drones on fire missions, there are people around the world logging into jihadi websites that use webcams on Iraqi streets and allow remote detonation, via the Internet, of IEDs near those webcams.  Just wait for a target of opportunity. He also points out that the Israel-Hezbollah conflict in Lebanon was perhaps the first conflict in which both sides (a state and, importantly, a non-state actor) fielded battlefield robots.

However, the most chilling section for me came in the second half of the interview, when he described what some people call “oops moments” and the reaction of some of the scientists to these moments. “Oops moments” are when, due to software glitches, heavily armed robots do things like set their sights on their own soldiers (in one case killing nine South African soldiers) or (in another case, where the robot actually did not have live ammo) on a group of dignitaries who were watching a demonstration. Singer described the troubling disconnect that some of the scientists whom he interviewed had regarding whether, in light of this, there are any legal or ethical issues in using battlefield robots. One scientist said that he could not think of any such legal or ethical issues. When pressed about what happens if a robot consistently kills soldiers on its own side or innocent civilians, he answered: “That’s not an ethical issue; that’s a product recall issue.”

(Sounds like a Ford Motor Company memo concerning the Pinto. I half expected Singer to say that the scientist then said: “Praise your new robotic overlords, you glorified monkeys! Praise them!” But, no.)

A troubling look at what may be the future of warfare. I plan on reading the book.

And my apologies to these guys for the title of this post.

John Pike on “Stone-Cold Robot Killers on the Battlefield” in the Washington Post

by Kenneth Anderson

John Pike, of the GlobalSecurity.org website, has a provocative op-ed in today’s Washington Post (January 4, 2009, B3), arguing that the evolution of battlefield robots might mean robots as the soldiers that do the killing on future battlefields … For a lot of reasons, I don’t think this is where the evolution of battlefield robots will go, at least in the foreseeable future.

I’m not, by the way, opposed in absolute principle to robots on the battlefield that might eventually make autonomous firing decisions.  It is a question of what the technology of the future is able to do and not do.  I just don’t think that is really what current robotics efforts are about in the US military – far from finding ways to replace the human shooter with a robot shooter, the effort today is to roboticize everything but the human shooter.  

But who knows what future technology might be able to do, superior to human decision-making ability in the stress of battle … Is it so very hard to imagine a future, and a future technology, in which it would be a war crime for the human, rather than the robot, to decide to fire the weapon?

So now, a challenge to our readers … can you come up with a script scenario for any Star Trek show in which someone is being tried for war crimes for having decided to shoot, rather than letting the machine do it?  If anyone wants to offer something, put it in a paragraph or so in the comments and if we get any, I’ll let our resident OJ TV writer, Kevin, judge them.

On Pre-Crimes and Panopticons

by Chris Borgen

Going forward I need to remember that if I’m ever looking for a quick topic about which to blog, I just need to take a look at the latest developments from the UK on surveillance. First there was using ubiquitous surveillance to make art.  Now there’s surveillance imitating art… specifically The Minority Report, a short story by Philip K. Dick (and subsequently a film). As the Daily Mail explains:

CCTV [closed-circuit TV] cameras which can ‘predict’ if a crime is about to take place are being introduced on Britain’s streets.

The cameras can alert operators to suspicious behaviour, such as loitering and unusually slow walking. Anyone spotted could then have to explain their behaviour to a police officer.

The move has been compared to the Tom Cruise science-fiction film Minority Report, in which people are arrested before they commit planned offences.

(A hat tip to Futurismic for spotting this article.)

Further on, the article states:

Computers are programmed to analyse the movements of people or vehicles in the camera frame. If someone is seen lurking in a particular area, the computer will send out an alarm to a CCTV operator.

The operator will then check the image and – if concerned – ring the police. The aim is to stop crimes before they are committed. If a vehicle is moving too fast or slow – indicating joyriding or kerb-crawling, for example – a similar alert could be given.

Councillor Jason Fazackarley of Portsmouth Council said: ‘It’s the 21st century equivalent of a nightwatchman, but unlike a night-watchman it never blinks, it never takes a break and it never gets bored.’
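The article does not say how the system’s analytics actually work, but the behaviours it flags – loitering and unusually slow or fast movement – amount to simple rule-based checks over tracked positions. The sketch below is a generic, hypothetical illustration of that idea; the thresholds and track format are my assumptions, not anything from the deployed system.

```python
# Hypothetical illustration of rule-based CCTV alerts like those described in
# the article (loitering, unusual speed). Thresholds and data format are assumed.

import math

def track_speed(track):
    """Average speed over a track of (t_seconds, x_metres, y_metres) points."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    return math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-9)

def flags_for(track, loiter_radius=5.0, loiter_seconds=120,
              min_speed=0.2, max_speed=20.0):
    """Return the alert flags a human operator would be asked to review."""
    flags = []
    duration = track[-1][0] - track[0][0]
    # Loitering: stays within a small radius of the starting point for a long time.
    if duration >= loiter_seconds and all(
        math.hypot(x - track[0][1], y - track[0][2]) <= loiter_radius
        for _, x, y in track
    ):
        flags.append("loitering")
    # Unusual speed: slower or faster than the expected range for the scene.
    speed = track_speed(track)
    if speed < min_speed:
        flags.append("unusually slow")
    elif speed > max_speed:
        flags.append("unusually fast")
    return flags

# Example: someone who barely moves for three minutes trips both rules.
print(flags_for([(0, 0.0, 0.0), (90, 1.0, 0.5), (180, 0.5, 1.0)]))
```

Note how much judgment is packed into those threshold parameters – which is exactly where the question of what counts as suspicious comes in.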

Of course the councillor’s framing is supposed to be reassuring and, as one commenter put it, she would not mind such a system if she were surrounded by a hostile street gang. She assumed that if she were out with a bunch of her female friends, she would not be similarly targeted. The underlying issue, of course, is which activities or groups look suspicious.

The main question, though, is whether or not this is inching toward a panopticon society…