Symposium on Military AI and the Law of Armed Conflict: De-anthropomorphizing Artificial Intelligence – Grounding Notions of Accountability in Reality

[Gary Corn is a Professor and Director of the Technology, Security and Law Program at American University Washington College of Law. He previously served as a military attorney in the U.S. Army, including as Staff Judge Advocate (General Counsel) of U.S. Cyber Command.]

When it comes to the use of artificial intelligence (AI) and autonomous weapons in conflict, States have already crossed the proverbial Rubicon and there is no indication they will look back.  By many accounts, AI has played a key role as a decision support tool in the Russia-Ukraine conflict, and as Ukraine’s Minister of Digital Transformation has described, the pull toward deploying AI-enabled, lethal autonomous weapons systems (LAWS) is inexorable and may already be a fait accompli.  Israel is also reportedly leveraging AI-driven analytics in its Gaza operations in ways that have been alternately described as, on one side, truly force multiplying, or on the other, corrosive to legally, morally and ethically compliant operations.  Whatever one’s views, these developments are occurring against the backdrop of an increasingly heated race between the world’s two leading AI powers, the United States and China, to harness the technology for military advantage.  In short, the development, adoption and employment of AI as a military capability is here to stay and moving forward rapidly.  

These developments have drawn greater attention and urgency to discussions around the legal and policy parameters necessary to responsibly guide the development and use of military AI.  The appropriate role of human judgment and involvement in military decision-making and action has long dominated LAWS discussions.  Concerns often center on a perceived accountability gap created by blending or outright transferring “human agency” to AI systems, a concern specifically reflected in declarations of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) that “[h]uman responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines.”

While it is a sound policy dictate that humans remain responsible, and by extension accountable, for the use of AI in armed conflict, what this means in practice is far from clear.  Setting unrealistic, ambiguous, or misaligned standards of accountability can deter the development and use of AI where its potential benefits outweigh its risks.  No technology, including weapons and weapons systems, is infallible.  The approach to accountability for unwanted outcomes should be no different from that applied to any other means or method of warfare.

Hyperbolic calls to ban “Killer Robots” specifically, or militarized AI generally, have gained little to no traction among States.  This should come as no surprise given the potential AI offers to exponentially increase the speed, efficiency, and accuracy of operations and reduce the inherent and infamous fog of war.  The paradox of the so-called DRIP phenomenon (data rich and information poor) created by the increasingly ubiquitous availability of large data sets, or what one leading expert has described as a problem of “success catastrophe,” was the driving force behind the U.S. Department of Defense’s Project Maven.  The use of AI to solve the DRIP problem is not only militarily sound, but arguably necessary, when feasible, to ensure compliance with States’ law of armed conflict (LOAC) obligations.

This is not to say that the use of AI for military ends is a risk-free proposition.  The potential implications, especially for the direct weaponization of AI, are immense and demand a measured and principled approach.  As Professor Eric Jensen has noted, “[a]s with earlier weapon systems based on emerging technologies, there is clearly a need for an open and frank discussion among States and caution as research, particularly weaponization, progresses.”  Unfortunately, the evolution and adoption of the technology are rapidly outpacing States’ efforts to meaningfully engage on these issues, with formal processes like the UN Group of Governmental Experts on LAWS focusing narrowly on AWS and failing to achieve consensus on even the most basic issues, such as the definitions of automation, autonomy, or AI.

Setting aside valid questions as to whether military use of AI truly requires new or amended international legal regulation, these stalled efforts have led to several parallel, and at times competing, multilateral and unilateral efforts to develop softer norms of responsible AI development and military use; the most recent example being the 52-State-endorsed Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy.

Either explicitly or implicitly, a central tenet of these normative declarations is human responsibility and accountability for every stage of AI development, procurement, and especially deployment and use.  As the Political Declaration notes, “[m]ilitary use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control.”  Similarly, the U.S. Department of Defense’s Ethical Principles for Artificial Intelligence provide that humans “will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities,” and calls for accountability mechanisms are laced throughout its Responsible Artificial Intelligence Strategy and Implementation Pathway.  The U.S. Intelligence Community has followed suit, stating that it will “develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.”  The United Kingdom’s approach is even more emphatic:  “human responsibility and accountability cannot be removed – irrespective of the level of AI or autonomy in a system.”

As Professor Chris Jenks has noted, these general statements “suggest that the Responsibility Principle may be masking a number of questions.”  Chief among them is whether the “principle” suggests a form of strict liability applicable to the decision to use military AI where the LOAC otherwise creates no such standard.  States have never adopted anything approaching a strict liability regime within the LOAC because of one incontrovertible fact:  mistakes and unintended outcomes are a deeply unfortunate but, at least to this point in history, inescapable aspect of warfare.  Unintended consequences can result from myriad factors, ranging from incorrect but reasonable human judgments, to superseding, intervening events, to basic weapon system errors or failures.

As reflected in the Political Declaration, AI is neither well-defined nor well-understood.  More importantly, it is not a monolithic technology.  At its core it is a computational tool that is incorporated into other technologies, machines, or processes to enhance their outputs to varying degrees and with different risk and impact profiles.  AI incorporated into a military logistics system, for example, is clearly not the same as AI incorporated into a target identification capability like Israel’s “The Gospel,” let alone into an AWS.

Relatedly, it is generally wrong to look at AI as something sui generis.  The tendency in mythologized discussions of AI to anthropomorphize it corrupts rational conversations about what the technology really is and what it is capable, or incapable, of doing.  More than anything, AI is a technological, computational accelerant to whatever function or process it is applied to.  So whether AI is used for medical diagnostics, to predict enemy courses of action, or to improve target identification and execution, it is the implications of the increased speed, efficiency, and accuracy of those preexisting human activities, along with the risks of error, that should be the focus of regulatory and risk mitigation efforts.

The approach to governance and regulation of AI we choose must reflect our underlying values as a society.  We can see this values-based approach reflected in the various codes of ethical or responsible development and use of AI and their demands that AI be developed and employed consistent with international law.  While the accelerant nature of AI described above may rightfully cause us to reassess how competing interests, like humanity and military necessity, are impacted, the underlying values we use to balance those interests should remain constant and anchor policy choices.  Aligning and defining accountability should be no different.

As a body of law, the LOAC is primarily concerned with regulating the conduct of hostilities in accordance with State interests, including protecting civilians from the harms of war.  Nevertheless, it does not, nor could it, erect an absolute barrier against those harms.  Balanced against military necessity and the realities of war, it aspires only to mitigate the attendant risks to the maximum extent feasible.  It does so by setting standards of conduct and measuring compliance through assessment not of outcomes, but of judgments in the execution of attacks and other military operations.

In this context, reasonable mistakes in judgment do not incur criminal liability.  That is, humans are not expected to account for unforeseeable harms.  Imputing foreseeability to someone who employs an AI system within approved parameters, or who relies on an AI system certified to accurately process data and present reliable information, would mark a significant and unwarranted shift in approach.  Holding a soldier or commander accountable for the consequences of a system malfunction, or for an error whose possibility was already underwritten through the procurement and weapons review process, would be equally wrong.  Even the most precise of precision-guided munitions cannot achieve 100% accuracy or reliability.  Military AI systems should be approached through the same lens.

The identification, management, and mitigation of risks attendant to the development and deployment of any new technology is always challenging.  New technologies are often complex and difficult for most of us to understand.  Faced with this reality, there is a natural tendency for policymakers to listen to the worst or scariest predictions about new technologies, grossly overassess their risks, and therefore adopt overbroad measures aimed at eliminating those risks.  Overstatement of risk is the first step toward overregulation, with all the attendant opportunity costs that entails.

Admittedly, accountability can come in many forms, and assigning responsibility without concordant accountability is a fool’s errand.  In this respect, military AI presents unique, but not insurmountable, challenges.  For example, owing to its self-learning and adaptive nature, what some have dubbed emergent behavior, the Political Declaration calls for continuous vigilance “to ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles.”  Failure to monitor and account for aberrant performance can and should factor into the reasonable foreseeability of LOAC violations.

The Political Declaration lays out a sound but flexible set of principles to guide States’ development and use of AI.  But completely eliminating risk in the employment of any military capability is simply not feasible, so any policy or strategy that makes eliminating risk its goal is doomed to fail.  Not only will it fail to achieve that goal, but it will likely have unintended negative consequences owing to its overbreadth.  The focus should instead be on mitigating risk to an acceptable level.  General statements that the buck will always stop with some human may assuage anxieties surrounding the mystique of AI, but without better elaboration and appropriate calibration, opportunity costs may be the larger risk on the table.
