International Law and Literature: Peter Watts’ “Malak”

Following on Ken’s most recent post on autonomous battlefield robots, I came across the short story “Malak” by Peter Watts (you can read it here). What jumped out at me was a short story that begins with epigrams such as these:

“An ethically-infallible machine ought not to be the goal. Our goal should be to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes.”

–Lin et al., 2008: Autonomous Military Robotics: Risk, Ethics, and Design

“[Collateral] damage is not unlawful so long as it is not excessive in light of the overall military advantage anticipated from the attack.”

–US Department of Defense, 2009

So, yes, a short story that touches on the legal and ethical questions of using autonomous—not just unmanned—aerial combat drones. The epigrams, by the way, refer to real reports. The Lin study was prepared for the U.S. Navy’s Office of Naval Research by the Ethics + Emerging Sciences Group at California Polytechnic. (It is available in .pdf here.) The definition of collateral damage can be found in various places, including the Department of Defense Dictionary of Military and Associated Terms (available here in .pdf).

Watts is a scientist whose fiction has gained some notice for its intelligence and for grappling with unpleasant aspects of the interactions of scientific revelation, technology, and society (although he will tell you that scientists don’t necessarily make good science fiction authors). He also gained some notoriety for having been beaten and detained by U.S. border officers when trying to cross the U.S.-Canadian border, an incident that highlighted the legal regime for border searches in this era of counter-terrorism. (See various reports and blog posts on this incident and its aftermath: 1, 2, 3, 4.)

But here I am interested not in Watts the defendant but in Watts the writer, a writer who has used a short story to ask whether ethics can be written into code. And, by the way, the short story is largely told from the point of view of the drone, Azrael (named for the angel of death), in some ambiguous future conflict, with a narrative that goes like this:

New variables demand constancy. Suddenly there is more to the world than wind speed and altitude and target acquisition, more to consider than range and firing solutions. Neutral Blue is everywhere in the equation, now. Suddenly, Blue has value.

This is unexpected. Neutrals turn Hostile sometimes, always have. Blue turns Red if it fires upon anything tagged as FRIENDLY, for example. It turns Red if it attacks its own kind (although agonistic interactions involving fewer than six Blues are classed as DOMESTIC and generally ignored)…

Azrael the drone

…has no appreciation for the legal distinction between war crime and weapons malfunction, the relative culpability of carbon and silicon, the grudging acceptance of ethical architecture and the nonnegotiable insistence of Humans in Ultimate Control.

The story unfolds with the drone trying to apply the “trivially straightforward” algebra of its rules to situations that are not so straightforward. And, of course, human programmers can override code.
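The reclassification rules quoted above read almost like pseudocode, which is part of the story’s point. Purely as an illustration, and with every name here (Tag, Contact, reclassify) hypothetical rather than drawn from Watts’ text or any real targeting system, that kind of rule might be sketched like this:

from dataclasses import dataclass
from enum import Enum

class Tag(Enum):
    FRIENDLY = "friendly"
    NEUTRAL_BLUE = "neutral_blue"
    HOSTILE_RED = "hostile_red"

@dataclass
class Contact:
    tag: Tag
    fired_on_friendly: bool = False  # has it fired on anything tagged FRIENDLY?
    attacked_blue: bool = False      # has it attacked its own kind?
    blues_involved: int = 0          # size of the Blue-on-Blue clash, if any

def reclassify(c: Contact) -> Tag:
    """Apply the story's 'trivially straightforward' rules to one contact."""
    if c.tag is Tag.NEUTRAL_BLUE:
        # Blue turns Red if it fires upon anything tagged as FRIENDLY.
        if c.fired_on_friendly:
            return Tag.HOSTILE_RED
        # Blue-on-Blue violence involving fewer than six Blues is DOMESTIC and ignored.
        if c.attacked_blue and c.blues_involved >= 6:
            return Tag.HOSTILE_RED
    return c.tag

# e.g. reclassify(Contact(Tag.NEUTRAL_BLUE, fired_on_friendly=True)) -> Tag.HOSTILE_RED

The interest of the story lies precisely in the cases this tidy algebra cannot capture.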

I won’t spoil the ending. But I will say this: besides getting a glimpse of how the promise and the peril of autonomous drones can be one and the same, I was also reminded of what one reviewer had written about Watts: “Whenever I find my will to live becoming too strong, I read Peter Watts.”

Topics
Featured, Foreign Relations Law, General, International Human Rights Law, National Security Law
Patrick S. O'Donnell

Thanks for this. I’m currently doing research and writing on the notion of “machine ethics,” a popular locution of late which I find philosophically unintelligible, in part because ethics cannot be reduced to algorithms or be part of the software of any computer program or intrinsic to the design of “artificial intelligence.” What tends to happen here is a conflation of the meaning and understanding of the nature of certain forms of technology and (particular properties or attributes of) human beings, such that the former, in more than a metaphorical sense, are said to be capable of thinking (or possess brains, or even ‘minds’) or even some sort of “consciousness.” Of course in bringing up such issues I risk being accused of bias against silicon-based things while favoring carbon-based entities, a bias I’ll gladly confess to. Moral responsibility remains the unique prerogative of human beings.

Patrick S. O'Donnell

My interest in this was quickened by an online symposium at Concurring Opinions not long ago on Chopra and White’s book, A Legal Theory for Autonomous Artificial Agents (2011). Perhaps needless to say, I’m even bothered by the use of the notions of both “autonomy” and “agency” in this context (in both its implied moral sense and its conventional legal sense, as in the ‘law of agency’).