“Guilty Robots” in NYT Magazine Ideas 2009 Issue

One of my favorite issues of the New York Times Magazine is its “year in ideas” issue, which appears each December.  Because OJ is a repository of things related to battlefield robotics, law, and ethics, I wanted to flag for your attention Dara Kerr’s item, “Guilty Robots.”

[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.

“I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says.

The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement.

The “guilty” part comes from a feature of Professor Arkin’s ethical architecture in which certain parameters cause the robot to grow more “worried” as its calculated estimates of collateral damage and other such factors rise.

After considering several moral emotions like remorse, compassion and shame, Arkin decided to focus on modeling guilt because it can be used to condemn specific behavior and generate constructive change. While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt level increases. As the level grows, the robots may choose weapons with less risk of collateral damage or may refuse to fight altogether.
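
Just to make the mechanism concrete for readers who think in code, here is a minimal sketch of the kind of guilt accumulator the article describes – guilt rises on evidence of collateral harm, and as it crosses thresholds the system first narrows weapon choice and then refuses to fire at all.  To be clear, this is my own toy illustration and not Professor Arkin’s actual architecture; every name, weight, and threshold below is invented for the example.

```python
# Hypothetical sketch of the guilt mechanism described above -- NOT
# Arkin's actual "ethical governor." All names, weights, and thresholds
# are invented for illustration.

from dataclasses import dataclass


@dataclass
class DamageReport:
    noncombatant_casualties: int = 0
    civilian_property_hits: int = 0


@dataclass
class GuiltModel:
    guilt: float = 0.0                 # accumulates over the engagement
    restrict_threshold: float = 5.0    # above this, prefer low-collateral weapons
    refuse_threshold: float = 10.0     # above this, decline to fire at all

    def update(self, report: DamageReport) -> None:
        # Guilt only rises on evidence of collateral harm; it is not
        # offset by "good" outcomes, mirroring the one-way ratchet the
        # article describes.
        self.guilt += 2.0 * report.noncombatant_casualties
        self.guilt += 0.5 * report.civilian_property_hits

    def permitted_weapons(self, weapons: list[dict]) -> list[dict]:
        """Filter the available weapons by the current guilt level."""
        if self.guilt >= self.refuse_threshold:
            return []                  # refuse to fight altogether
        if self.guilt >= self.restrict_threshold:
            return [w for w in weapons if w["collateral_risk"] == "low"]
        return weapons


# Example: as guilt accumulates, the permissible weapon set narrows,
# then empties.
model = GuiltModel()
arsenal = [{"name": "missile", "collateral_risk": "high"},
           {"name": "precision round", "collateral_risk": "low"}]

model.update(DamageReport(noncombatant_casualties=3))
print([w["name"] for w in model.permitted_weapons(arsenal)])  # ['precision round']

model.update(DamageReport(noncombatant_casualties=2, civilian_property_hits=1))
print([w["name"] for w in model.permitted_weapons(arsenal)])  # []
```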

As I have said several times on this blog, and in various talks and presentations, I am agnostic as to whether at some point in the future robots might prove ethically superior to humans in making decisions about firing weapons on the battlefield.  When I say agnostic, I mean genuinely agnostic – it seems to me an open question where the technology goes, and in, say, a hundred years, who can say?  For one thing, I can fully imagine that roboticized medicine, surgery, and operations will very possibly have reached the point where it might well be presumptive malpractice for the human doctor to override the machine.  It is not impossible for me to imagine – far from it – a time in which it would be a presumptive war crime for the human soldier to override the ethical decisions of the machine.

But maybe not.  Although I am strongly in favor of the kinds of research programs that Professor Arkin is undertaking, I think the ethical and legal issues of warfare, whether the categorical rules or the proportionality rules, involve questions that humans have not managed to answer even at the conceptual level.  Proportionality, and what it means to weigh up radically incommensurable goods – military necessity and harm to civilians, for example – is one place to start.  One reason I am excited by Professor Arkin’s attempts to perform these functions in machine terms, however, is that the detailed, step-by-step project forces us to think through difficult conceptual issues of human ethics at a granular level that we might otherwise skip over with some quick assumptions.  Programming does not allow one to do that quite so easily.

And it is open to Professor Arkin to reply to the concern that humans don’t have a fully articulated framework, even at the basic conceptual level, for the ethics of warfare: “Well, in order to develop a machine, I don’t actually have to address those questions or solve those problems.  The robot doesn’t have to have more ethical answers than you humans – it just has to be able to do as well, even with the gaps and holes.”

Many OJ readers will by now be familiar with Peter W. Singer’s widely noted Wired for War.  But I would suggest following it up with Professor Arkin’s own new book, Governing Lethal Behavior in Autonomous Robots, particularly now that Amazon has dropped the price from $60 to $40.

I guess I should also add that this discussion is about battlefield robotics in the sense of “autonomous” firing systems – not the current robotics question of human-controlled but remotely operated unmanned combat vehicles, the Predators and other drones.  I will try to put up a post soon noting several new papers on the targeted-killing and UCV-drone issues in international law, including new papers on SSRN by Mary Ellen O’Connell, Jordan Paust, and others – I’ll try to do a roundup of recent papers on the subject (once past grading my corporate finance and IBT finals, that is).
