John Pike on “Stone-Cold Robot Killers on the Battlefield” in the Washington Post
John Pike, of the website GlobalSecurity.org, has a provocative op-ed in today’s Washington Post (January 4, 2009, B3) arguing that the evolution of battlefield robots might lead to robots as the soldiers that do the killing on future battlefields:
Within a decade, the Army will field armed robots with intellects that possess, as H.G. Wells put it, “minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic.”
Let us dwell on “unsympathetic.” These killers will be utterly without remorse or pity when confronting the enemy. That’s something new. In 1947, military historian S.L.A. Marshall published “Men Against Fire,” which documented the fundamental difference between real soldiers and movie soldiers: Most real soldiers will not shoot at the enemy. Most won’t even discharge their weapons, and most of the rest do no more than spray bullets in the enemy’s general direction. These findings remain controversial, but the hundreds of thousands of bullets expended in Iraq for every enemy combatant killed suggests that it’s not too far off the mark.
Only a few troops, perhaps 1 percent, will actually direct aimed fire at the enemy with the intent to kill. These troops are treasured, and set apart, and called snipers.
Armed robots will all be snipers. Stone-cold killers, every one of them. They will aim with inhuman precision and fire without human hesitation. They will not need bonuses to enlist or housing for their families or expensive training ranges or retirement payments. Commanders will order them onto battlefields that would mean certain death for humans, knowing that the worst to come is a trip to the shop for repairs. The writing of condolence letters would become a lost art.
For a number of reasons, I don’t think this is where the evolution of battlefield robots will go; you can check out some of my earlier posts on this blog. To start with, the “within a decade” prediction, at least as regards the robot snipers that Pike imagines, is simply not plausible. It will be longer than that before we reach the point, if we ever do, of genuinely autonomous sniper robots replacing humans as the trigger finger. In any case, so far as I can discern, if we ever get to autonomously weapons-firing robots, it will come after a period – one that might never end – of battlefield robots that replace human soldiers in the myriad non-weapons-firing roles, such as the delivery of supplies and ammunition, for the express purpose of reducing human exposure precisely to the people who do the shooting.
I’m not opposed in absolute principle to robots on the battlefield that might eventually make autonomous firing decisions. It is a question of what the technology of the future is able to do and not do. Just as we are trending toward medical diagnostic technology that might eventually make better and faster decisions about treatment options than human doctors do, it is not impossible that we might eventually develop technology that, precisely because it is not prey to human emotions, biases, and stresses, might do a better job of deciding whether to hit a target than a human would, at least on some suitable average run of cases. It is easy to imagine a world of tomorrow in which it would be considered mad, malpractice, and simply culturally unacceptable for a doctor not to be directed in diagnostics by a far superior machine; it is equally easy to imagine a world of tomorrow in which people would look back and think it mad and bad that we ever trusted humans, rather than emotionless, stressless robots, to make firing decisions on the battlefield.
Or maybe not. I don’t know what technology will be able to do; I’m not ruling the possibility out in advance, or ruling it out of bounds on some a priori principle that only a human can pull the trigger. It seems to me a long way, however, before one reaches the actual threshold of such judgments. In the meantime, I think Pike is wrong in suggesting that the value of battlefield robots will be – or is understood by US military planners today to be – about stone-cold robot killers. Today it is, rather, all about finding ways to replace everyone but the human shooters.
Is it so very hard to imagine a future, and a future technology, in which it would be a war crime for the human, rather than the robot, to decide to fire the weapon?