20 Aug Battlefield Robot Target Identification Competition
I have this gnawing suspicion that the only two law professors deeply interested in battlefield robotics are Glenn Reynolds and me. Nonetheless, when it comes to battlefield bots and the law, you can take satisfaction that you will have Heard It Here First, unless, of course, you read Instapundit.
As I’ve said in earlier posts on this subject (and here and here and here), the vast, vast majority of the research into battlefield robots has nothing to do with autonomous weapons firing platforms – which is, of course, where the biggest ethical and legal issues arise – but with surveillance and independent target scanning and identification. There are also other possible roles for robotics on the battlefield, such as extraction of the wounded or delivery of supplies. A lot of the interest is less about autonomous battlefield robots as such than about the multiple uses of unmanned vehicles on the battlefield. So, check out this article from (where else?) Popular Mechanics, which describes the recent UK MoD competition for mobile battlefield robot platforms (HT Instapundit):
The United Kingdom’s Ministry of Defence (MoD) has held its own robotics competition, the Grand Challenge, that cut to the chase with unmanned vehicles stalking human targets through the Copehill Down training village in southwestern England. The finals took place this weekend, and the MoD announced the winners yesterday.
A key difference between the Grand Challenge and DARPA’s Challenges is hardware diversity. The robots that slogged through the training village, picking out an array of potential targets—including uniformed troops, armed snipers perched in windows and roadside bombs—ranged from familiar, sensor-studded unmanned ground vehicles (UGVs) to swarms of unmanned aerial vehicles (UAVs). Some teams even used a combination of ground and air bots, since UAVs might be useful for spotting a technical (a pickup with a mounted weapon) while UGVs are better at detecting improvised bombs. Less “operator intervention” required to navigate the village, find warm bodies and differentiate between civilians and legitimate military targets earned more points.
This UK competition is contrasted, in the grafs above, with the US DARPA Challenges, which focused more narrowly on autonomous ground vehicle navigation.
The immediate battlefield robotics interest in the West lies in surveillance and in unmanned vehicles that take over roles now played by humans in logistics and the like, not in autonomous firing platforms. Indeed, one great policy risk in the development of these platforms is that a military less concerned with ethical and legal requirements might deploy autonomous firing robots several technological generations before NATO would think them acceptable for use – if NATO countries ever concluded that they were. The battlefield would then suddenly hold many more “shooters,” possibly firing at NATO troops, and unconstrained by the very collateral damage concerns that would have prevented NATO armies from fielding robots of that same technological generation. I suspect, in other words, that robots with autonomous firing capability will come to the battlefield sooner rather than later – but not necessarily in a form acceptable to the US and NATO on legal and ethical grounds, and not deployed by the US and NATO, either.
Here, by the way, is a useful link sheet from a class on Robots and Society at Georgia Tech.