Symposium on Military AI and the Law of Armed Conflict: Introduction

[Lena Trabucco is a visiting research fellow at the Stockton Center for International Law at the US Naval War College and a fellow in the Technology, Law & Security program at American University Washington College of Law.]

[Magda Pacholska is a Marie Sklodowska-Curie postdoctoral fellow at the Asser Institute (The Hague), and a research fellow with the Tech, Law & Security Program at the American University Washington College of Law. She is also the Managing Editor of the Military Law and the Laws of War Review (Edward Elgar).]

Even the most casual observers of the military technology space are sure to have encountered the term ‘responsible artificial intelligence.’ Responsible AI (RAI) has quickly emerged as a defining feature of the development and future use of AI-enabled military technologies. While it is difficult to argue against the utility and imperative of RAI, what it means in practice, and what steps are necessary to achieve it, remain elusive. With the public debate on military AI highly polarized and the academic research often siloed, we hope this symposium will allow for a fruitful cross-fertilization of approaches and ideas.

The goal of this symposium is, therefore, threefold. First, we wanted to gather a diverse and interdisciplinary group of experts to present thoughts and insights into the opportunities, challenges, and contours of RAI from their own backgrounds. The submissions did not disappoint. Just as we hoped, the pieces represent a broad scope of concepts, considerations, and actions that encompass responsible military AI, and they offer new frameworks and perspectives for policymakers and legal scholars to consider. The submissions represent different, sometimes contradictory, perspectives – but that was precisely our goal: to gather a variety of approaches and present them to readers who might not otherwise come across them. We encourage OJ readers to stay with us throughout the week and read all the posts, even if, somewhere along the way, they disagree with the opinions presented in some of the pieces.

The second, admittedly intertwined, goal of this symposium is to expand the discourse on military RAI to include legal, policy, operational, and technical considerations. There is no one-size-fits-all approach to RAI, and it is imperative to include leading voices from each of these fields to ensure that responsible AI remains comprehensive and dynamic.

The third goal is to set the stage for next steps. Numerous significant events on military AI, and emerging tech more broadly, are on the agenda, and we hope to inform thinking going into them. The 2023 REAIM conference, hosted by the Netherlands and the Republic of Korea, issued a call to action for developing responsible AI. The next installment of REAIM will take place in September 2024 in Seoul, and we hope insights from this symposium can shape our understandings in advance of this important meeting and others like it.

The symposium kicks off with a pragmatic reflection on ‘A Risk Framework for AI-Enabled Military Systems’ by Lieutenant General (Ret.) Jack Shanahan, who builds on his extensive military experience to suggest a five-tier risk hierarchy for developing and employing AI in military contexts. The second post, from Rebecca Crootof, shares her insights and experience with the U.S. Defense Advanced Research Projects Agency (DARPA), explaining its inner workings, current priorities, and the utility of exploring the very early stages of development at DARPA. The third post, by Tsvetelina van Benthem, zooms in on the problem many scholars, the undersigned included, consider to be the crux of RAI – namely, the risk of unintended engagements and their legal reverberations. In the fourth post, Guangyu Qiao-Franco and Mahmoud Javadi examine geopolitical rivalries and the inherent challenges that regulation of military AI faces in the global landscape. The fifth post showcases a domestic approach to RAI: Lauren Sanders elucidates the Australian concept of a ‘system of controls’ and the risks, benefits, and lessons it entails for the larger global community. The sixth and seventh posts take a closer look at AI-enabled decision support systems: Marta Bo and Jessica Dorsey focus on the consequences of DSS ‘unregulation’ and the risks of AI speeding up the decision space, while Georgia Hinds considers the legal implications of the design and use of these tools in armed conflict. The following three posts examine various deontological concerns military AI might arguably raise: Ingvild Bode and Anna Nadibaidze argue that human-machine interaction may fundamentally change the quality of human agency, Anna Greipl advances the claim that a better understanding of the role of emotions could mitigate some of the regulatory challenges surrounding AI today, and Jimena Viveros asserts the importance of treating AI-powered drone swarms as weapons of mass destruction.
Finally, the symposium closes with Gary Corn’s take on the accountability-related challenges military AI purportedly raises.

Clearly, the contributors cover impressive ground and raise the most pressing issues facing the implementation of RAI in the military domain. We hope readers enjoy this symposium and find that the contributions expand their understanding of RAI.
