Revised Version of My Killer Robots Article

I have uploaded a revised version of my article “The Concept of ‘The Human’ in the Critique of Autonomous Weapons” to SSRN. It is not substantially different from the previous version, but I’ve made a number of changes in response to excellent feedback from Robert Sparrow, Nathan Gabriel Wood, my colleague Neil Renic, and my sometimes co-author Lena Trabucco. My thanks to them for reading the article so closely!

There is also one completely new 1,500-word subsection in the revised article. A number of readers, including some of the above, pointed out an unresolved tension in the earlier version. On the one hand, I argue that although most consequentialist critiques accept that the key issue with autonomous weapons is whether they can comply with IHL as well as human soldiers, those critiques focus almost exclusively on the limits of AWS technology, ignoring the far more serious limits on the ability of humans to make rational decisions. On the other hand, I argue that many deontological critiques of autonomous weapons are misguided because they assume that killer robots “decide” in a manner akin to humans, when in fact they are (pending artificial general intelligence) simply very sophisticated tools for carrying out the intent of their human programmers and operators. Read together, those arguments raise an important question: namely, are the human programmers and operators of autonomous weapons not subject to the same cognitive, physiological, situational, and emotional limits as human soldiers and as humans asked to exercise meaningful human control?

I should have answered that question in the original version of the article. I knew I needed to, but the article was already very long and I was weary from writing it. I have now corrected my mistake. Please read the article if you haven’t already. And if you have, feel free to read only the new section, pp. 53-58, where I explain why, due to their temporal location in the kill chain, human programmers and operators of autonomous weapons are far more likely to make rational decisions than human soldiers or humans asked to exercise meaningful human control.

My killer robots article will be published by the Harvard National Security Journal toward the end of the year. So if you’re keen to read it now, please download it from SSRN. Thoughts still welcome, as I don’t have to give HNSJ a copy for editing until May.
