Proportionality and Autonomous Weapons Systems

[Jeroen van den Boogaard is assistant professor of military law at the Netherlands Defence Academy and a lecturer and associate researcher at the Amsterdam Center for International Law.]

Despite Chris Borgen’s plea that “the immediate legal issues may have to do more with international business transactions than international humanitarian law”, the International Committee of the Red Cross (ICRC) hosted its second expert meeting on autonomous weapons systems last week. The meeting brought together a number of legal and technical experts on the subject as well as governmental representatives (the Report of the first expert meeting in 2014 is here). Autonomous weapons systems, or ‘killer robots’ as they are referred to by others, are sophisticated weapons systems that, once they have been activated, can select and attack targets without further human intervention.

The ICRC’s definition of autonomous weapons systems (AWS) focuses on systems with a high degree of autonomy in their ‘critical functions’, namely autonomously selecting and attacking targets. The ICRC has in the past called on States to ensure that AWS are not employed if compliance with international humanitarian law (IHL) cannot be guaranteed. The Campaign to Stop Killer Robots has called for a pre-emptive and comprehensive ban on AWS and for a prohibition on taking the human ‘out of the loop’ with respect to targeting and attack decisions on the battlefield.

It is important to realise that professional militaries around the globe already possess and use scores of weapon systems with varying levels of autonomy. The use of artificial intelligence in future AWS may, however, enable them to learn from earlier operations, which enhances their effectiveness. It is feared that this will lead to scenarios in which AWS go astray and decide in unpredictable ways which targets to attack.

Concerns about the use of AWS rest on a number of grounds, for example the moral question of whether decisions over life and death can be left to machines. Another concern is that the protection of civilians during armed conflict would be adversely affected by the use of AWS. In legal terms, this means it is unclear whether AWS can comply with IHL, particularly the principles of distinction, proportionality and precautionary measures.

The main focus of the ICRC expert meeting was to establish what may be understood by retaining ‘adequate, meaningful, or appropriate human control over the use of force’ by AWS. This is important because, although by definition there is always a human actor who deploys the AWS, the question is what the consequences are if the AWS independently makes the decisions that IHL requires. For example, it is unclear whether an AWS would be able to comply with the obligation to verify that its target is a legitimate military objective.

In technical terms, the use of complex algorithms may be expected to enable AWS to reliably identify the military advantage of attacking a certain target. Recent history has revealed the exponential speed of developments in computing, data storage, and communications systems. There is no reason to assume that this would be any different for the development of self-adapting AWS whose algorithms rely on artificial intelligence to independently assess what the destruction of a certain military objective would contribute to the military advantage of an operation. Such an assessment is necessary to attack an object in compliance with IHL. Especially in environments without any civilian presence, such as underwater on the high seas, IHL seems to pose no obstacle to deploying AWS.

The picture changes as soon as civilian casualties or damage to civilian objects may be expected. In this context, the proportionality principle in IHL is usually mentioned as a major obstacle to AWS complying with IHL. This rule prohibits “launching an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”. It is the last check that a military commander must conduct before a planned operation is executed. In addition, once an attack is under way, the commander is under an obligation to continue to monitor whether the attack remains within the restraints provided by the proportionality rule.

The principle of proportionality is a particularly difficult component of the law of targeting. Many factors must be weighed in the proportionality equation. These include how the concepts of military advantage, military objectives, civilians, and civilian objects should be defined, as well as the weight to be attributed to each of these factors. Unfortunately, the exact legal meaning of these factors remains unsettled and subject to heated scholarly debate. My forthcoming article in the Journal of International Humanitarian Legal Studies addresses a number of difficulties with regard to proportionality and AWS. Particularly if AWS operate in a complex and quickly changing environment, they lack the holistic ability of the human mind to take the complete context into account in order to comply with the principle of proportionality.
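To make the difficulty concrete, consider a minimal sketch of what encoding this rule in software would require. Everything below is hypothetical: the class and function names are invented for illustration, and the numeric weights and threshold stand in for legal judgments that IHL does not in fact quantify.

```python
from dataclasses import dataclass

# A deliberately simplified, hypothetical model of the proportionality rule.
# The numeric scales are invented: IHL provides no agreed way to quantify
# "military advantage" or to say when expected harm becomes "excessive".

@dataclass
class AttackAssessment:
    anticipated_military_advantage: float   # contested, context-dependent estimate
    expected_civilian_deaths: int
    expected_civilian_injuries: int
    expected_damage_to_civilian_objects: float

def is_attack_permissible(a: AttackAssessment, excessiveness_threshold: float) -> bool:
    """Crude stand-in for the balancing test quoted above.

    Collapsing qualitative, unsettled legal factors into one score is
    precisely the step this post argues machines cannot reliably perform
    in complex, quickly changing environments.
    """
    expected_collateral_harm = (
        a.expected_civilian_deaths * 1.0            # weight: hypothetical
        + a.expected_civilian_injuries * 0.5        # weight: hypothetical
        + a.expected_damage_to_civilian_objects     # scale: hypothetical
    )
    if a.anticipated_military_advantage <= 0:
        return False  # no concrete and direct military advantage anticipated
    ratio = expected_collateral_harm / a.anticipated_military_advantage
    return ratio <= excessiveness_threshold  # "excessive" has no agreed numeric value
```

The point of the sketch is not that such a function could be written well, but that every constant in it conceals an open legal question.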

One factor that is often overlooked in the proportionality assessment is the level of command. AWS operate at the tactical level. Many States, however, consider it important that the expected military advantage be considered from the attack as a whole, and not just from one isolated attack. It is the author’s opinion that the military advantage as well as the expected collateral damage must be estimated at all levels: strategic, operational, and tactical.

AWS may be seen as part of a larger system that works as a network and that needs to take the proportionality calculations of its higher headquarters into account in its own assessment. This means that there has to be a constant flow of data, updating the AWS on the latest developments at each level. This would presumably enable the AWS to calculate the expected military advantage and the expected collateral damage at the tactical level, even in quickly changing circumstances. The challenge, of course, is that these networks need to be protected from jamming or hacking by the adversary. In fact, concerns about the vulnerability of these networks to enemy interference are one of the drivers for developing AWS in the first place.

A continuous link between the tactical AWS and the operational and strategic headquarters enables updating the complete situation at each level. This should lead to well-informed proportionality calculations at all the different levels. Equally, the operational plans in the higher headquarters can be adapted to the circumstances the AWS observes. Thus, if an attack by one AWS achieves the military advantage desired by the larger operation, the military advantage that other parts of that operation were intended to achieve decreases. Assuming AWS make proportionality assessments at the tactical level, this makes it less likely that a disproportionate attack will be planned at the operational level. This would possibly lead to less collateral damage and thus benefit the protection of the civilian population.
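The following sketch illustrates, with invented names and figures, the kind of data flow this implies: a tactical AWS caps its own estimate of military advantage by what the operation, according to its higher headquarters, still needs to achieve. It is a toy model of the argument, not a proposal for how such a system would actually be built.

```python
# Hypothetical sketch of the "constant flow of data" described above.
# All class names and numbers are invented for illustration.

class OperationalHeadquarters:
    def __init__(self, total_advantage_sought: float):
        self.remaining_advantage = total_advantage_sought

    def report_attack_result(self, advantage_achieved: float) -> None:
        # Once one attack achieves part of the desired advantage, the
        # advantage attributable to the remaining attacks decreases.
        self.remaining_advantage = max(0.0, self.remaining_advantage - advantage_achieved)

class TacticalAWS:
    def __init__(self, hq: OperationalHeadquarters):
        self.hq = hq  # presupposes a protected, continuous link (jamming/hacking risk)

    def advantage_for_own_attack(self, local_estimate: float) -> float:
        # The tactical estimate is capped by what the operation still needs;
        # if the link is lost, the AWS would be reasoning on stale data.
        return min(local_estimate, self.hq.remaining_advantage)

hq = OperationalHeadquarters(total_advantage_sought=10.0)
aws = TacticalAWS(hq)
print(aws.advantage_for_own_attack(6.0))  # 6.0: full local estimate still counts
hq.report_attack_result(8.0)              # another attack achieved most of the advantage
print(aws.advantage_for_own_attack(6.0))  # 2.0: the same attack now contributes less
```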

Consider, however, a scenario in which a ‘swarm’ of autonomous armed drones simultaneously engages moving military targets in a city. Assume that the damage to civilian structures and the harm to the civilian population caused by each individual attack is proportionate. The total civilian damage may nevertheless be expected to be excessive if the combined attacks cause fires and cut off all escape routes for civilians. It seems that when multiple AWS are deployed together, as in the example of ‘swarms’ of autonomous drones, the collateral damage that the drones cause as individual systems must also be calculated collectively. Thus, an operational-level proportionality calculation must also be conducted. This operational proportionality assessment would be based partly on the combined proportionality calculations at the tactical level, as well as on other considerations dictating the military advantage at the operational level. It is my opinion that autonomous weapons systems must be prevented from making decisions at an operational or strategic level without the input of the relevant human military commander, because at those levels the equation must also include other factors to determine the military advantage sought, including strategic and political factors.
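A toy calculation, with invented figures, shows why tactical-level checks alone are insufficient: each attack can pass its own proportionality test while the aggregate fails once the harm that only arises from the combination of attacks, such as fires blocking escape routes, is added in.

```python
# Hypothetical illustration of the swarm scenario. Units and figures are
# invented; the interaction term stands for harm no single drone's
# tactical calculation captures on its own.

def individually_proportionate(harms, advantages, threshold):
    # Each drone's own check: harm vs. its anticipated advantage.
    return all(h <= threshold * adv for h, adv in zip(harms, advantages))

def collectively_proportionate(harms, advantages, interaction_harm, threshold):
    # Operational-level check over the aggregate, including combination effects.
    total_harm = sum(harms) + interaction_harm
    total_advantage = sum(advantages)
    return total_harm <= threshold * total_advantage

harms = [3.0, 2.5, 4.0]        # per-drone expected collateral harm (hypothetical units)
advantages = [5.0, 4.0, 6.0]   # per-drone anticipated advantage (hypothetical units)
print(individually_proportionate(harms, advantages, threshold=1.0))         # True
print(collectively_proportionate(harms, advantages, 12.0, threshold=1.0))   # False
```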

The conclusion is that, at least for the time being, human beings must remain ‘in the loop’ when AWS are deployed. The main reason is that the proportionality rule has many qualitative and subjective elements and is extremely time- and context-dependent. At present, AWS cannot be expected to perform a holistic analysis similar to a human being’s in order to ascertain that an attack by AWS will remain within the boundaries of IHL. In addition, AWS must be prevented from making decisions at an operational or strategic level without the input of the relevant military commander. The reason is that at those levels, the proportionality equation must also include other factors to determine the military advantage sought, such as strategic and political factors.

El roam

Thanks for the post. Just two reservations, with your permission. First, you insist on the gap between robots and human beings in exercising complex discretion (tactical and strategic), and rightly so. Yet, if so, the very basic or fundamental definition of robots given in the post — “Autonomous weapons systems, or ‘killer robots’ as they are referred to by others, are sophisticated weapons systems that, once they have been activated, can select and attack targets without further human intervention” — is flawed as such, since selecting targets and attacking them must comprise all the forms of discretion mentioned in the post: distinction, proportionality and precaution. The right definition is a machine that steps into the shoes of a human being in exercising discretion and fighting on the battlefield, since selecting and targeting can be done without the assessments described. Second, the post lacks the issue of liability. What happens, legally, if an autonomous robot makes mistakes on the battlefield? Who stands trial for international crimes? The operator? The designer? Surely not the robot! Just for that reason, even if the…