Lethal autonomous weapons can be approached from two directions. One is to look from the front end - starting from where the technology stands today and moving forward across its evolution, focusing on the incremental changes as and when they occur, and especially how they are occurring now. The other is to imagine the
end-state - the necessarily speculative and sometimes pure sci-fi "robot soldiers" of this post's title - and look backwards to the present. Starting with the hypothetical technological end-point - a genuinely "autonomous," decision-making robot weapon, rather than merely a highly "automated" one - the basic regulatory issue is this: what tests of law and ethics would an autonomous weapon have to pass in order to be a lawful system, beginning with the fundamental law-of-war principles of distinction and proportionality? What would such a weapon be, and how would it have to operate, to satisfy those tests?
This is an important conceptual exercise as technological innovators imagine and work toward autonomy in many different robotic applications, of which weapons technology is only one line of inquiry. Imagining the technological end-point in terms of law and ethics means, more or less, hypothesizing what we might call the "ethical Turing Test" for a robot soldier: What must it be able to do, and how must it be able to behave, in order to be indistinguishable from its morally ideal human counterpart? The idealized conception of the ethically defensible autonomous weapon forces us to ask questions today about fundamental issues - who or what is accountable, for example, or how does one turn proportionality judgments into an algorithm? Might a system in which lethal decisions are made entirely by machine, with no human in the firing loop, violate some fundamental moral principle? All these and more are important questions. The problem with starting from them, however, is that the technology driving toward autonomous weapons is proceeding in tiny steps (and some important critics, their enthusiasm tempered by earlier promises of artificial intelligence that went unfulfilled, question whether those tiny steps can ever reach genuine autonomy) - not giant ones that immediately implicate these fundamental questions of full autonomy.
Indeed, the systems being automated first are frequently not the weapons themselves, but other parts of the system that may eventually carry the weapons in train. Thus, for example, as fighter aircraft become increasingly automated in how they are flown - in order to compete with enemy aircraft that are also becoming more automated - important parts of the flight functions eventually operate faster than humans can. At that point, it becomes nearly irresistible to automate, if not make fully autonomous, the weapons systems as well, because they have to be integrated with the aircraft and all its other systems. We didn't start out intending to automate the weapons - but we wound up there because the weapons are part of a whole aircraft system.
These facts about how the technology of automation is evolving matter for questions of regulating and assessing the legality of new weapons systems. In effect, they shift the focus away from imagining the fully autonomous robot soldier and the legal and ethical tests it would have to meet to be lawful - back to the front end, the margin of evolving technology today. The bit-by-bit evolution of the technology urges a gradualist approach to regulation; incremental advances in the automation of systems that have implications for weapons need to be considered from a regulatory standpoint that is itself gradualist and able to adapt to incremental innovation. So, Matthew Waxman and I are pleased to announce a new short paper on this topic,
Law and Ethics for Robot Soldiers, which takes as its premise the need to think incrementally about the regulation of evolving automation.
The essay's takeaway on regulation is ultimately a modest one - a quite traditional (at least from the US government's long-term perspective) approach to weapons regulation. Grand treaties seem to us unlikely to be suited to incremental technological change, particularly when they attempt to anticipate a technological end-state that may instead develop in quite unexpected ways. Sweeping and categorical pronouncements can restate fundamental principles of the laws of war, but they are unlikely to be very useful in addressing the highly specific and contingent facts of particular systems undergoing automation.
We urge, instead, a gradually evolving pattern of practices of the states developing such systems and, as part of the process of legal review of weapons systems, development through reasoned articulation of how and why highly particular, technically detailed weapons systems meet fundamental legal standards. In effect, this proposes that states develop bodies of evolving state practice - sometimes agreeing with other states and their practices, but likely other times disagreeing. This seems to us the most suitable means for developing legal standards for the long term to address evolving weapons technology. Abstract below the fold.