Symposium on Military AI and the Law of Armed Conflict: Bridging the Legal Gap Between Principles and Standards in Military AI – Assessing Australia’s ‘System of Control’ Approach

[Dr. Lauren Sanders is a Senior Research Fellow at the University of Queensland and a legal practitioner. She served in the Australian Army for several decades as a Signals Officer and Legal Officer. 

This article represents the author’s personal views and does not reflect those of the Australian Defence Force or Australian Government.]

Many States and regional and international organisations have espoused lists of principles they consider necessary for autonomous weapon systems (AWS), or military artificial intelligence (MAI) systems more broadly, to be considered ‘responsible’ (see, for example, the US Guidelines, the UK principles, the NATO principles and the Chinese, among many others). Technical standards, such as the IEEE P7000 series, have also emerged to identify how to measure the performance of autonomous capabilities while incorporating ethics and general principles into design.

What is generally missing, however, is the policy in the middle connecting these two parts: what is the structure in which these principles apply, and what is the requisite (legal) standard attached to that structure that can then be measured by performance metrics? The articulation of such a process is central to the meaningful human control debate, which has focused upon questions of if, when, and to what standard a machine may undertake functions that a human might previously have performed, but which implicate legal standards.

This post focuses upon Australia’s proposed approach to bridging this principle-to-practice gap (noting, however, that Australia is yet to publicly articulate what its responsible MAI principles might look like). Australia – in a non-paper submitted to the Group of Governmental Experts on Lethal Autonomous Weapon Systems (GGE on LAWS) in 2019 – explained the concept of a ‘system of control’. The non-paper outlines a concept for applying different control measures and assurance processes to any military use of force, and details how this could operate across the life cycle of AWS in particular. Like other similar approaches, it suggests that the multiple control points across the life cycle of AWS – from design inception through deployment to retirement of the capability – provide sufficient human oversight to ensure that legal standards are met when the AWS is eventually deployed.

After describing how the ‘system of control’ is proposed to operate with respect to the adoption and use of MAI, this post will also consider the legal challenges and some key risks and benefits of adopting such an approach. It ultimately concludes that this system has utility in furthering the conceptual approach to the meaningful human control of AWS, albeit one in need of further refinement.

What is the ‘System of Control’? 

The system of control offers a methodology to operationalise legal concepts and ensure the levels of human control mandated by law. Separate considerations regarding social and ethical points of delegation to machines could also be addressed through this system of control approach. While imperfect, it offers a more nuanced way to consider whether an MAI system is fit for purpose in a specific use case than current conceptual approaches that envisage compliance in terms of a single ‘touch’ point.

Even relatively simple decisions to use force, like a soldier firing a rifle, are subject to controls across the life cycle of that capability: the direction from government to acquire that particular rifle; assessments that it performs as envisaged; confirmation that its performance meets legal standards when used as anticipated; assurance that it is used by someone who knows how to operate it and understands its limitations; and, when its use is authorised, sufficient controls on when and how it can be fired.

In effect, the non-paper articulated the concept that human control of AWS occurs at various points in the capability life cycle, and in a structured and nested way. This concept also reflects the idea that direct human control at the point of delivery of a capability’s effect in the battlespace is not necessarily required for its use to be lawful. That is, appropriate design choices and deployment controls could allow a fully autonomous military capability to undertake lethal strikes, in full compliance with the Laws of Armed Conflict (LOAC), without human intervention past the decision to deploy the system.

For example, assessing whether a strike is proportionate, in compliance with Article 57 of Additional Protocol I, requires weighing the anticipated military advantage against the expected harm to civilians and civilian objects. This calculation is not a mathematical or linear one, and the military value (and necessity) of a particular strike will change depending on the countervailing military situation, the specific time and location in which the strike may occur, and the physical environment in which the capability is deployed. The result, applying the system of control approach, is that a control measure may be put in place at the time of programming or deployment that prevents an MAI or AWS from later being used for strikes when these factors are not capable of being adequately assessed.
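To make this concrete, a minimal sketch of such a deployment-time control gate is set out below. It is purely illustrative: the class, fields and factors are assumptions drawn from the paragraph above, not an actual Australian control measure or any real system.

```python
# Illustrative sketch of a hypothetical deployment-time control gate.
# All names and factors are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class EngagementContext:
    """Hypothetical proportionality-relevant factors assessed before authorising autonomous strikes."""
    military_situation_assessed: bool   # the countervailing military situation is understood
    time_and_location_bounded: bool     # the strike window and area are defined
    environment_characterised: bool     # the physical environment is adequately surveyed


def autonomous_strike_authorised(ctx: EngagementContext) -> bool:
    # If any factor cannot be adequately assessed at the point of programming
    # or deployment, the control measure withholds authorisation and the
    # decision remains with a human.
    return all([
        ctx.military_situation_assessed,
        ctx.time_and_location_bounded,
        ctx.environment_characterised,
    ])


# Example: an inadequately characterised environment blocks autonomous use.
print(autonomous_strike_authorised(
    EngagementContext(True, True, environment_characterised=False)))  # False
```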

This approach was proposed as particularly useful in shaping the debate about AWS because it allows the discussion about control to address numerous overlapping but complementary processes, and the assessment of the different questions of lawfulness that will be enlivened at different points within the system. Conceptually, it can be aligned with other systems approaches adopted by Australia in decisions to use force, such as the Australian Defence Force’s six-step LOAC targeting process, which articulates how conduct of hostilities rules are incorporated into the targeting cycle (explained by Henderson, here).

The system itself contains nine stages, starting at the strategic level with a set of strategic priorities set by the responsible government. Documents like White Papers and Strategic Reviews, along with AI strategies, provide guidance from the State to the military about what to acquire and the broad operational settings in which it should be considered for use. Particular legal standards are then applied during key phases, such as the testing, evaluation, verification and validation stage, and the legal review stage prior to introduction into service. Ongoing regulation and control are put in place using typical military processes such as Rules of Engagement and Targeting Directives for the system’s deployment, with a feedback loop connecting all stages in the form of After-Action Evaluation.
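One way to visualise this nested structure is sketched below. It is an assumption-laden illustration only: the non-paper’s nine stages are not reproduced here, and any stage or control measure not named in the paragraph above is a placeholder.

```python
# Illustrative sketch of a capability life cycle with control measures attached
# to each stage and an after-action feedback loop. Stages not named in the text
# above are placeholders, not the non-paper's actual nine stages.
from dataclasses import dataclass, field


@dataclass
class Stage:
    name: str
    control_measures: list = field(default_factory=list)


LIFE_CYCLE = [
    Stage("Strategic direction", ["White Papers", "Strategic Reviews", "AI strategies"]),
    Stage("Design and acquisition", ["capability requirements"]),  # placeholder stage
    Stage("Testing, evaluation, verification and validation",
          ["performance measured against legal standards"]),
    Stage("Legal review prior to introduction into service", ["Article 36 AP I review"]),
    Stage("Deployment", ["Rules of Engagement", "Targeting Directives"]),
    Stage("After-Action Evaluation", ["feedback to all earlier stages"]),
]


def after_action_feedback(lesson: str) -> None:
    # The final stage closes the loop: lessons learned can add or tighten
    # control measures at every earlier stage of the cycle.
    for stage in LIFE_CYCLE[:-1]:
        stage.control_measures.append(f"lesson learned: {lesson}")
```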

Some of these key points across the system implicate specific legal obligations, while others build, through a process of regulation, towards the State’s overall obligation to ensure the lawful use of force. The system is described as one that may span many decades from conception to capability deployment.

The Challenge of Articulating Legal Standards in a System Design Approach

Compounding this principles-into-practice problem is the question of how to actually assess compliance with these principles. The specific problem to be solved is working out how to assess whether an MAI system is lawfully able to undertake certain functions without human intervention, in a way that complies with a State’s legal obligations. As aptly put by Jeroen van den Boogaard in his recent OpinioJuris post, determining which ‘decisions in the use of certain weapons systems in certain contexts, … are not to be delegated to algorithms’ is perhaps the more relevant issue in this debate.

During a recent UQ-hosted conference (report forthcoming), participants from the Australian government, academia, and the defence industry echoed van den Boogaard’s observation: the system, while a useful strawman for conceptualising the measures in place to allow for meaningful human control of AWS, now requires detailed consideration. It must specify the standards to be met by the machine, the interaction between the assurance processes across the capability life cycle, and exactly which legal processes can or cannot be delegated to a machine to implement in response to pre-programmed criteria. That is, a decision must be taken as to which rules of the laws of armed conflict are so contextually based that autonomy cannot account for them in its algorithmic predictions, and which parts of the decision-making must be implemented in real time, whether by humans or by humans assisted by autonomy.

For example, in undertaking the legal review of an AWS, as required by Article 36 of Additional Protocol I, a decision must be taken as to whether the AWS meets the required standard, based upon testing data and information provided by its developers. This is not unlike the process required for ascertaining the predictability of conventional arms, but a policy decision must determine what amounts to legal compliance for the specific performance of the AWS in a particular setting. A bullet striking its target within a certain distance, a certain number of times, in testing, is a somewhat easier conceptual risk assessment. In the case of AWS, there will be multiple decision points about whether aspects of the system perform adequately to reduce the specified risk: for example, whether the algorithm’s translation of sensor data meets the standard of identifying the pre-programmed criteria for objects that may be struck, and whether the AWS can calculate and determine when to abort the strike process if other objects appear within the collateral harm radius of the type of weapon attached to the system.
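A minimal sketch of the second kind of decision point, the pre-programmed abort check, is set out below. The names, the simple planar distance model and the threshold are assumptions for illustration, not a description of any real weapon system.

```python
# Illustrative sketch of a pre-programmed abort check: abort the strike if any
# object that does not match the programmed target criteria appears within the
# collateral harm radius of the attached weapon. Names and the planar distance
# model are assumptions for illustration only.
import math


def within_harm_radius(position, aimpoint, harm_radius_m: float) -> bool:
    return math.dist(position, aimpoint) <= harm_radius_m


def should_abort(detected_objects, aimpoint, harm_radius_m: float) -> bool:
    return any(
        within_harm_radius(obj["position"], aimpoint, harm_radius_m)
        for obj in detected_objects
        if not obj["matches_programmed_criteria"]
    )


# Example: an object that does not match the programmed criteria, detected
# roughly 40 m from the aimpoint of a weapon with a 50 m harm radius,
# triggers an abort.
objects = [{"position": (10.0, 38.0), "matches_programmed_criteria": False}]
print(should_abort(objects, aimpoint=(0.0, 0.0), harm_radius_m=50.0))  # True
```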

Further, the interaction between controls will be complex. It is necessary to specify, across the different stages of the system of control, how previous limitations on potential use cases are enforced, and how this contributes to the overall assessment of the lawfulness of the capability. Again, while this process is undertaken for conventional weapon systems, it becomes more demanding as the systems themselves become more complex. For example, describing in Rules of Engagement and Targeting Directives which use cases AWS may be authorised for, or which MAI can be relied upon during which stages of the targeting process, will necessarily require more granular treatment than is currently the case for weapon systems not relying on AI.
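As an illustration of what such granularity might look like, the sketch below maps stages of a generic targeting cycle to the kind of MAI authorisations a Targeting Directive could record. The stage names and authorisations are invented examples, not ADF policy or the content of any actual directive.

```python
# Illustrative sketch of a per-stage authorisation table of the kind a more
# granular Targeting Directive might contain. Stage names follow a generic
# targeting cycle; the authorisations are invented examples only.
TARGETING_DIRECTIVE = {
    "target development": {"mai_decision_support": True, "autonomous_execution": False},
    "weaponeering":       {"mai_decision_support": True, "autonomous_execution": False},
    "execution":          {"mai_decision_support": True, "autonomous_execution": True},
    "assessment":         {"mai_decision_support": True, "autonomous_execution": False},
}


def use_authorised(stage: str, autonomous: bool) -> bool:
    # Unlisted stages default to no reliance on MAI at all.
    rule = TARGETING_DIRECTIVE.get(
        stage, {"mai_decision_support": False, "autonomous_execution": False})
    return rule["autonomous_execution"] if autonomous else rule["mai_decision_support"]


# Example: autonomous execution is authorised only at the execution stage,
# and only within whatever pre-set criteria the directive attaches to it.
print(use_authorised("execution", autonomous=True))    # True
print(use_authorised("assessment", autonomous=True))   # False
```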

What are the Risks and Benefits of a System-Based Approach?

Two key strengths of the system of control approach can also be considered its weaknesses. First, the system itself describes the existing method of systematizing the acquisition process of a modern military, and the further systematization of the use of force in situations of armed conflict. It is therefore easily complied with, requires little additional resourcing, and is readily understood by those required to implement it. However, it also risks being adopted in a facile way: if no specific resources are focused on these considerations, and no universal assessment is made of how the system works in light of the specific challenges raised by autonomy, then it is prone to failure.

Second, utilising a systems approach ensures that the assessment of legal compliance occurs across the capability life cycle, and thus addresses all aspects of the use of the capability within the operating environment for which it has been designed. This can ensure that the breadth of inputs to the system, which will affect how it operates in practice, forms part of the assessment of its lawful use. The risk with this approach, however, aligns with criticisms of system-based approaches to other design problems: the diffusion of accountability when the system fails. In the case of MAI and AWS, mission failure will likely result in the death of people or the destruction of places. Accordingly, responsibilities must be clearly articulated and delineated across the system, to ensure that legal compliance is not assumed to be someone else’s problem. While this criticism is particularly acute in terms of reliance on machines, and the well-documented biases of trust that accompany such reliance, it is arguably one that can be levelled more broadly at all modern military processes that adopt such a systems approach.

Conclusion 

A systems design approach to autonomy and AI is a meaningful way to understand and articulate what amounts to acceptable risk in the deployment of AWS and MAI. However, the significant challenge of articulating what amounts to an acceptable risk in a particular circumstance still needs work. The success of this system of control approach will lie in the rigour with which individual standards are set and then nested across the system as a whole, and in understanding how each individual risk accepted contributes to the overall risk profile of the capability. Many of these determinations will reflect a State’s policy decision on what acceptable legal risk is – an under-explored area in weapons review and legal compliance generally, but one that is more acute in the deployment of AWS and MAI.

Further, it is in the actual application, beyond engineering or design speak, that the real tests emerge: creating control measures to ensure that the operator truly understands the system’s limitations when deploying it in a novel situation, or that adequate legal advice informs the design of the system itself. While this approach is not novel, being merely a reframing of extant military acquisition and deployment processes, it does offer a methodology to operationalise legal concepts and ensure the levels of human control mandated by law for AWS. While imperfect, it is a useful scaffold to which specific legal standards and compliance requirements can now be added. Australia must now fill in those blanks to apply it meaningfully in practice.
