Lyin’, Cheatin’ Robots

The battlefield robots we have mostly discussed here at Opinio Juris are remote-controlled systems – realtime control by a human operator who is not actually in the cockpit of the Predator, for example.  We’ve talked somewhat about autonomy issues – battlefield robots with programming enabling them to make independent targeting or weapons-firing decisions – but those applications are mostly still at a sci-fi stage of development.

We have even talked about something that, in my view, does not get enough attention – what happens if both ‘we’ and ‘some potential enemy’ develop robotic systems for autonomous firing that we do not believe can yet meet legal, moral, or policy requirements, such as taking adequate account of collateral damage?  We are not ready to deploy such a system, but the other party deploys it anyway – leaving us to come up with ‘counters’ to a technology that we are not yet ready to field ourselves.  One easy answer is that the counter is simple and kinetic: drop a big bomb on the robot system, end of problem.  No special technology needed.  But that answer assumes the robotic system is a single, reasonably big bot.  Suppose instead it were a swarm of tiny airborne bots that you can’t simply bomb – then you have to develop entirely new technological counters.  I have occasionally suggested that, so far as I can tell as an outsider, insufficient attention is paid to R&D on counters to systems that we ourselves would consider still too primitive to put in the field.

Now, a new question – and one that I have not really thought through before.  In retrospect it’s obvious, but I had not been thinking about it.  What are the implications for battlefield robotics of robots capable of developing self-evolving capacities to deceive, lie, and cheat? (Dear OJ readers, you are not looking anywhere near as concerned about these results as you should be.  Soon the battlefield will start to resemble a Philip K. Dick novel.)

PopSci reports on a Swiss robotics experiment in which evolving generations of robots were set a search-for-yummy-food task, but with a twist – an in-built incentive to hide and hoard the food source for oneself.  What would I suggest is the possible future connection to the battlefield?  Well, consider that on the battlefield you or I might be the “food,” in the sense of the target, and an evolved sense of how to hide and avoid predators could easily emerge from these kinds of mutating programming experiments.

The experiment involved 1,000 robots divided into 10 different groups. Each robot had a sensor, a blue light, and its own 264-bit binary code “genome” that governed how it reacted to different stimuli. The arena contained a good food resource and a poisoned one; the first-generation robots were programmed to turn the light on when they found the good resource, helping the other robots in the group find it.

The robots got higher marks for finding and sitting on the good resource, and negative points for hanging around the poisoned resource. The 200 highest-scoring genomes were then randomly “mated” and mutated to produce a new generation of programming. Within nine generations, the robots became excellent at finding the good resource and at communicating to direct other robots to it.
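To make that selection-and-mutation loop concrete, here is a minimal Python sketch of this kind of genetic algorithm. The 264-bit genome, population of 1,000, top-200 selection, and 500-generation run come from the report; the fitness function, single-point crossover scheme, and mutation rate are my own placeholders, since the real experiment scores each robot by simulating it in the arena with its group.

```python
import random

GENOME_BITS = 264      # per the report: each robot's controller is a 264-bit string
POP_SIZE = 1000        # 1,000 robots (the experiment splits them into 10 groups)
SURVIVORS = 200        # the 200 highest-scoring genomes seed the next generation
MUTATION_RATE = 0.01   # assumed; the report does not give the actual rate

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    """Placeholder fitness: a stand-in for the real arena simulation, which
    rewards time spent at the good resource and penalizes time spent at the
    poisoned one. Here we simply score the genome itself for illustration."""
    return sum(genome)

def crossover(a, b):
    """Single-point crossover between two parent genomes (assumed scheme)."""
    point = random.randrange(1, GENOME_BITS)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def next_generation(population):
    # Rank by fitness and keep the top 200, mirroring the experiment's selection step.
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:SURVIVORS]
    # Randomly "mate" pairs of survivors (with mutation) to refill the population.
    return [mutate(crossover(*random.sample(elite, 2)))
            for _ in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(500):   # the report describes results after 500 generations
    population = next_generation(population)
```

The interesting behaviors – lying with the light, or distrusting it – are not written anywhere in this loop; they can only emerge from the fitness pressure once signaling honestly starts to cost the signaler.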

However, there was a catch. A limited amount of access to the good resource meant that not every robot could benefit when it was found, and overcrowding could drive away the robot that originally found it.

After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource, hogging it all for themselves. Even more telling, a third of the robots evolved to actually look for the liars by developing an aversion to the light – the exact opposite of their original programming!
