[Jeroen van den Boogaard is assistant professor of military law at the Netherlands Defence Academy and a lecturer and associate researcher at the Amsterdam Center for International Law.]
Despite Chris Borgen's
plea that “the immediate legal issues may have to do more with international business transactions than international humanitarian law”, the International Committee of the Red Cross (ICRC) hosted its second expert meeting on autonomous weapons systems last week. The meeting brought together a number of legal and technical experts on the subject, as well as governmental representatives (the Report of the first expert meeting, held in 2014, is here). Autonomous weapons systems, or ‘killer robots’ as others call them, are sophisticated weapons systems that, once activated, can select and attack targets without further human intervention.
The ICRC’s definition of autonomous weapons systems (AWS) focuses on systems with a high degree of autonomy in their ‘critical functions’, namely autonomously selecting and attacking targets. The ICRC has in the past called on States to ensure that AWS are not employed if compliance with international humanitarian law (IHL) cannot be guaranteed. The Campaign to Stop Killer Robots has called for a pre-emptive and comprehensive ban on AWS and for a prohibition on taking the human ‘out of the loop’ with respect to targeting and attack decisions on the battlefield.
It is important to realise that professional militaries around the globe already possess and use scores of weapon systems with varying levels of autonomy. The use of artificial intelligence in future AWS may, however, enable them to learn from earlier operations, enhancing their effectiveness. The fear is that this will lead to scenarios in which AWS go astray and decide unpredictably which targets to attack.
Concerns about the use of AWS rest on a number of grounds, for example the moral question of whether decisions over life and death may be left to machines. Another concern is that the protection of civilians during armed conflict would be adversely affected by the use of AWS. In legal terms, it is unclear whether the use of AWS can comply with IHL, particularly the principles of distinction, proportionality and precautionary measures.
The main focus of the ICRC expert meeting was to establish what may be understood by retaining ‘adequate, meaningful, or appropriate human control over the use of force’ by AWS. This is important because, although by definition there is always a human actor who deploys the AWS, the question is what the consequences are when the AWS independently makes the decisions that IHL requires. For example, it is unclear whether an AWS would be able to comply with the obligation to verify that its target is a legitimate military objective.
In technical terms, it may be expected that complex algorithms will enable AWS to reliably identify the military advantage of attacking a particular target. Recent history has shown the exponential pace of development in computing, data storage and communications systems. There is no reason to assume that this will be any different for self-adapting AWS whose algorithms rely on artificial intelligence to independently assess what the destruction of a given military objective would contribute to the military advantage of an operation. Such an assessment is necessary to attack an object in compliance with IHL. Especially in environments without any civilian presence, such as underwater on the high seas, IHL seems to pose no obstacle to deploying AWS.
The picture changes as soon as