27 Mar PW Singer’s Wired for War Discussion at CTLab
Complex Terrain Laboratory, where several OJ people sometimes participate, is hosting an online discussion next week on PW Singer’s new book on robotics and war, Wired for War. We have mentioned this book in the past, and OJ has a number of posts on battlefield robotics in the last year or so. Singer is participating in the CTLab symposium and, having read his opening post, it looks to be fascinating. It is a terrific lineup of participants.
That said, let me comment on why robotics is important to discussions here at Opinio Juris. Many of my posts about robotics have been about the issue of autonomy – the far-down-the-road, still sci-fi-ish question of whether and when robotic systems will be able to make their own decisions about firing weapons on the battlefield. As Singer’s work points out, that day is not as far off as one might think, and obviously it raises many issues about the laws of war and their application.
As I’ve also pointed out on OJ – but I don’t think has been sufficiently absorbed by the Pentagon, though perhaps I am wrong – the most likely scenario is one in which other militaries, likely China, deploy systems with the autonomy to identify a target (e.g., fire coming from somewhere) and shoot back, but without meeting the standards for assessing likely collateral damage and proportionality (i.e., identify fire, and fire back, period) that the US would consider important to have in place before deploying an autonomous robotic firing system. In other words, the US is likely, in my view, to face inadequately controlled autonomous weapons systems before it has a system of its own that it believes meets IHL standards. And that means, in turn, that the US might easily be in the position of having to develop counter-technologies against autonomous but inadequately controlled robotic systems before it even deploys its own. The counter before the US’s actual system.
I have also suggested on this blog that prejudging the technology of autonomy decades in advance is a mistake. Deciding today that an autonomous robotic system is per se illegal because a human must be in the real-time loop is a mistake. We do not know today how the technological possibilities will turn out. In medical technology, it might be that fifty years from now it will be malpractice for a surgeon to perform certain operations by hand, or for a diagnostician to depart from the computer diagnosis unless some very high standard is met. The same might well be true of robotics on the battlefield; it is not inconceivable that a hundred years from now, having a human make certain firing decisions, rather than an emotionless computer, might itself be regarded as a possible war crime. It is a profound mistake to prejudge the humanitarian possibilities that technology may create in the future.
That is about autonomy, but it is still well down the road. The more immediate reason why robotics is important to OJ discussions is that it is not just about sci-fi autonomous robots. It is about other issues that are taking place on the battlefield now, and transforming it in all the ways that Singer discusses so well. I have been working up a book proposal on issues of ethics and laws of war in robotics. Strikingly (it took me by surprise), most of the book will not be about autonomy issues in ethics and law of war, but about far more immediate issues. (If you are a publisher out there reading this and would like to see the proposal, please feel free to contact me. Adv.)
The main issues:
- Stand-off, remote targeting in real time – i.e., Predator strikes – robotics in the sense of drones controlled off-battlefield by humans. But Predators are just the beginning, because missiles are too much firepower; what is needed instead are genuinely personal weapons – a little flying thingie that is remotely piloted up to the head of the Taleban commander and then blown up. Not a Predator missile, but a single-person kill. The issue of robotics is the issue of targeted killing. What makes it special is that it is simultaneously discrete and remote: in the past, discrimination usually meant a person getting as close as possible for a kill, but the promise of robotics in targeted killing is targeting discrimination by a non-human, stand-off platform.
- Surveillance, when you have thousands of small, insect-like flying gadgets gathering information everywhere – identifying targets, gathering intelligence on the battlefield and off, and feeding information for precisely targeted, discrete killing using drones once again outfitted with weapons aimed at single individuals.
- The calculus of making war when you put few humans on the battlefield. Does it make war easier or harder to undertake? Does it matter?
- Autonomous weapons firing systems in the future.
- Counters to autonomous weapons firing systems – counters which might have to be created and deployed before your own autonomous firing system is in the field.
Robotics, in the stand-off, remote-targeting sense, the Predator sense, is fully at the heart of yesterday’s Wall Street Journal story on the Obama administration’s review of the Pakistan Predator campaign. The Obama administration likes that campaign – muscle without boots on the ground. It’s efficient and effective, in its view, and I entirely agree. And note a couple of things in that story:
U.S. and Pakistani intelligence officials are drawing up a fresh list of terrorist targets for Predator drone strikes along the Pakistan-Afghanistan border, part of a U.S. review of the drone program, according to officials involved.
Pakistani officials are seeking to broaden the scope of the program to target extremists who have carried out attacks against Pakistanis, a move they say could win domestic support. The Obama administration is weighing the effectiveness of the program against the risk that its unpopularity weakens an important ally.
This is the first time I have seen in print (though I have been informally told) that Pakistani officials want the US to broaden the campaign beyond AQ and the Taleban. Will there ever be a legal issue of whether such a widening of targets goes beyond the AUMF? Probably not, at least not with the current Democratic president and Democratic Congress, but it is a question. President Obama campaigned on increased use of targeted killing via drones, and his administration thinks this has been an effective policy:
President Barack Obama concluded that the drones have been an effective weapon against al Qaeda since President George W. Bush accelerated the missile strikes last year. U.S. officials have seen evidence of disruption as militants devote more time to operational security, choose to sleep in orchards instead of buildings, and take more care about the people with whom they interact, said a person familiar with the evidence.
Already, the campaign has apparently stepped up attacks on the network of Pakistani Taliban leader Baitullah Mehsud, who is believed to be behind the 2007 assassination of former Prime Minister Benazir Bhutto, who was Mr. Zardari’s wife. In the fourth of a series of recent attacks targeting Mr. Mehsud’s network, a drone attack Wednesday killed at least eight militants along the Pakistan-Afghan border, according to two Pakistani officials.
The intensified campaign could help win domestic support for the strikes because it shows that the drone attacks are targeting direct threats to Pakistan, said a Pakistani official.
There is a discussion about whether to expand the strikes to outside Pakistan’s tribal areas, such as the province of Baluchistan. U.S. intelligence officials say they believe many of the Taliban’s senior leaders, such as Mullah Omar, operate openly in the provincial capital of Quetta. The idea of going that far has prompted concern in Islamabad that such strikes will greatly increase the numbers of civilian casualties and further fuel unrest.
Indeed, my own view is that the US will conclude generally that targeted killing using stand-off, remote-controlled platforms is one of the best new tools in its arsenal, especially if, over time, the technology can move to ever more controlled, discrete killing – to get from the level of a missile down to something that can kill a single individual without requiring any human agent, commando or CIA officer, on the ground. I myself think these developments are among the best in decades for improving discrimination in targeting, and thus for improving humanitarian performance in war and counterterrorism. There are lots of people in the world who disagree, however.
But either way, robotics is firmly at the center of targeted killing by the US, and targeted killing is at the center of evolving US counterterrorism strategy.
(All this has been on my mind in drafting a chapter for a book that Ben Wittes is editing on legislation for counterterrorism. Maybe I’ll post up some stuff from that draft over at my Ken-blog. In any case, check out the CTLab symposium next week.)