AI and Machine Learning Symposium: Command in the Age of Autonomy–Unanswered Questions for Military Operations

[Eve Massingham, Simon McKenzie and Rain Liivoja are members of the Law and the Future of War Research Group at the University of Queensland Law School. The Research Group receives funding from the Australian Government through the Defence Cooperative Research Centre for Trusted Autonomous Systems. The views and opinions expressed in the article are those of the authors, and do not necessarily reflect the views of the Australian Government or any other institution. This post is part of our symposium on legal, operational, and ethical questions on the use of AI and machine learning in armed conflict.]

The ICRC’s 2019 report on International Humanitarian Law and the Challenges of Contemporary Armed Conflict acknowledges (at 31) that AI and machine learning are playing an increasing role in military hardware across all domains. The ICRC is primarily concerned about autonomous weapon systems and any potential for ‘automatic target recognition’. But the ICRC’s proposal (at 32) of a ‘human‐centred, and humanity‐centred, approach to the use of these technologies in armed conflict’ to ‘preserve human control’ has the potential to shape how these technologies interact with a range of international legal regimes. It forces us to confront an important conceptual issue: what is meant by ‘control’, and how does it relate to the military and legal concept of ‘command’? The answer to this question is key to understanding the law as it applies to the deployment and use of AI-enabled and autonomous platforms.

Command is central to military operations. International law acknowledges and reflects this key concept in a number of important ways. For example, members of irregular armed groups can only be combatants, with the entitlement to prisoner of war status, when they are, inter alia, ‘commanded by a person responsible for his subordinates’ (Article 4(A)(2)(a) of the 1949 Geneva Convention III). The significance of command is also reflected in the international legal definitions of warships and military aircraft. Warships must be, among other things, ‘under the command of an officer duly commissioned by the government of the State’. This requirement can be found in Article 29 of the 1982 United Nations Convention on the Law of the Sea. For military aircraft, which are not specifically regulated by treaty, a similar condition derives from customary law, as codified, for example, in the 1923 Hague Rules of Aerial Warfare. Article XIV of these rules stipulates that a military aircraft must be ‘under the command of a person duly commissioned or enlisted in the military service of the State’. In short, a member of the armed forces must be in command of a warship or a military aircraft, and in the case of warships that person must be a commissioned officer. The relationship between this longstanding legal requirement and the approach taken by States to increasing autonomy in military platforms is opaque.

No clear limit to the level of autonomy?

As the technologies used in military platforms have become increasingly sophisticated, some militaries – such as the Australian, New Zealand and Danish defence forces – have specifically included in their definitions of military aircraft not only remote-controlled platforms but also systems that they variously describe as being ‘preprogrammed’ or ‘automated’. A preprogrammed or automated device has some capacity for autonomous operation – the ability to perform certain functions without real-time intervention from a human operator. Yet these technologically inclusive domestic conceptualisations of military aircraft do not place any apparent limit on the level of autonomy such craft may have and still be regarded as being under command. This suggests that even a very low level of human oversight may be sufficient for such craft to exercise the belligerent rights of military aircraft. Whether States intended such an outcome is unclear. In any event, the question remains whether any degree of autonomous functionality would be consistent with the legal requirement of being under the command of a member of the armed forces.

Some States have attempted to provide reassurance that there will be appropriate human oversight. The Australian Defence Force (at 18) notes that ‘command is a fundamentally human function that cannot be conducted by machines’. The United Kingdom (at 42) says the use of armed force ‘will always be under human control as an absolute guarantee of human oversight and authority and of accountability’. However, it is unclear how these commitments interact with the requirement of command and, in the case of the United Kingdom, whether this commitment applies to military systems that have autonomous functions other than for the use of force, such as autonomous navigation.

Many international legal experts also seem to conclude that automated or preprogrammed platforms can be classified as military aircraft. For example, Rule 1(x) of the 2013 HPCR Manual on International Law Applicable to Air and Missile Warfare (HPCR Manual) requires military aircraft to be, inter alia, ‘controlled, manned or preprogrammed by a crew subject to regular armed forces discipline’. The Commentary to the HPCR Manual (at 38-39) reiterates this point, emphasising that for autonomously operating aerial devices to qualify as military aircraft the preprogramming must have been executed by individuals subject to regular armed forces control. While the HPCR Manual does not directly bind States, it was developed through an extensive expert process and amounts to a recent articulation of an area of law not otherwise well covered.

One might expect an expansion of an accepted international law definition to be accompanied by an explanation. However, neither military doctrine documents nor the Commentary to the HPCR Manual discuss in any detail the justification for including automated or preprogrammed craft in the category of military aircraft. Nor do they discuss what such an inclusion means for the maintenance of command and the practical exercise of the various rights and obligations of military craft. It is high time to have that conversation. The rationale for – and limits to – considering craft with potentially highly autonomous functions to be legally indistinguishable from other military aircraft needs to be unpacked in order to evaluate the adequacy of the existing legal framework.

Moreover, the HPCR Manual’s approach can also influence the application of the law in other domains of warfare. For example, Andrew Norris (at 30) has observed that the HPCR Manual’s approach might also lead to an uncrewed maritime vehicle being classified as a warship. To date, States do not seem to have taken this step. The definition of a ‘warship’ in publicly available naval doctrine has not been adjusted to specifically include automated or preprogrammed platforms.

Command as a crucial part of ensuring accountability

The inclusion of command in definitions of military aircraft and warships can help ensure accountability for the use of these devices. Command may well be a broad concept encompassing a range of responsibilities, but ultimately, as Lieutenant General Ray Crabbe (at 11) observes, it ‘entails the authority for direction and issuing orders … it also includes responsibility and accountability for the orders issued – and … for orders not issued’.

The link between command and accountability has not gone unnoticed in the discussions and debate on autonomous weapons systems. For example, it is being considered as part of the current multilateral discussions on autonomous weapons taking place within the Group of Governmental Experts (GGE) convened under the Convention on Certain Conventional Weapons (CCW). However, the GGE has been preoccupied with the more technical notion of ‘control’ over the selection and engagement of targets. It has not delved into the relationship between command and control.

Command does not necessarily require the physical presence of the commander on the craft. Indeed, higher levels of a chain of command will almost always be remote from many of the units executing their orders whilst retaining command of them. However, some connection between military command and the actions of the person or object exercising belligerent rights clearly must be maintained. Belligerent rights allow actions in wartime that would not be permitted in peacetime – most importantly the right to engage in hostilities. These are rights which cannot – and must not – be outsourced. The exercise of these rights must remain linked to military command.

The command link in relation to remote-controlled craft poses no great difficulty: while the operator of the craft might be geographically distant, its movements and actions are nevertheless controlled by that operator. The same arguably applies to uncrewed craft which have been programmed by personnel under military discipline and which only operate according to that program. The behaviour of such craft is linked to, and limited by, the comprehensive instructions they have been given. However, AI-based systems may permit learning from the environment and thus facilitate higher levels of autonomous operation. Would it be sufficient, for the purposes of maintaining command, that the learning process was supervised by a member of the armed forces? Whatever the answer, the legal requirement of command may set limits to the use of autonomy in warships and military aircraft. When autonomous functionality is so extensive that the craft is no longer commanded by a member of the armed forces, the craft ceases to be a warship or military aircraft.

Developing appropriate legal standards

In the absence of a proper debate on the requirements of command, the legal framework for autonomous military craft may be established by the practice of those States that are already deploying autonomous platforms. The broader international community should consider how the traditional requirement of command can be implemented in light of autonomy. This is significant not just for maintaining accountability, but also for ensuring compliance with the legal framework protecting those not, or no longer, participating in the hostilities.

There may be a point at which a device has such a high degree of autonomy that it (in some way) transcends command. But what would this look like? Many States appear to have already accepted that a remote pilot, or even a preprogrammed system, is not problematic; the same may not be true of a self-learning one. As the HPCR Manual Commentary to its Rule 32(a) notes, ‘[i]n order to exercise constant care, a Belligerent Party ought to retain a command and control system capable of collecting, processing relevant information, making the necessary evaluation and directing its combat units accordingly’ (para. 5). The question is, what does this look like for an autonomous system, and what is necessary for it to remain under military command?

A more comprehensive examination of the role (and limits) of the concept of command would help to clarify the contours of the current debate on the regulation of autonomous systems used by militaries, and potentially assist in articulating an appropriate standard for the regulation of autonomous warships and military aircraft. States should provide an answer to the question of when a device with a high degree of autonomy can be treated as remaining under military command. Indeed, when it comes to functions of military platforms that do not entail targeting, an articulation of the requirements of command may be more productive than an exploration of the modalities of control, which currently characterises the ongoing debate about autonomous weapons.
