Netanyahu and Gallant ICC Arrest Warrants: Tackling Modern Warfare and Criminal Responsibility for AI-enabled War Crimes
[Marta Bo is a Senior Researcher at the T.M.C. Asser Institute and an Associate Senior Researcher at the Stockholm International Peace Research Institute (SIPRI)]
In recent years, accountability for uses of artificial intelligence (AI) in warfare, especially under international criminal law, has progressively emerged as a critical issue in governance initiatives (SIPRI, the GGE's Guiding Principles, the REAIM Blueprint for Action) and in scholarly and civil society debates (Matthias, Sparrow, HRW, and more recently Heller, Bo here and here). Initially, the debate centered predominantly on preventive measures, IHL compliance, and (more recently) responsible AI development and use, framing the issue as a future concern. However, with AI technologies—particularly AI decision-support systems (AI-DSS)—reportedly being widely deployed in current conflicts (e.g., see Palantir's Artificial Intelligence Platform (AIP) for Defense, Lavender, and Gospel here), it was only a matter of time before their use in targeting, and the associated criminal responsibility, came under the scrutiny of an international criminal court.
Subject to many 'ifs', such as their enforceability, potential further jurisdictional and admissibility challenges, and future charges, the arrest warrants issued by the International Criminal Court (ICC) against Netanyahu and Gallant could offer an important opportunity for the Court to examine modern warfare and war crimes involving AI systems. The recent warrants allege, among other things, the criminal responsibility of Mr Netanyahu and Mr Gallant, as civilian superiors, for the war crime of intentionally directing attacks against the civilian population in an international armed conflict under Article 8(2)(b)(i) of the Rome Statute.
Determining superior responsibility for this crime will require an assessment of whether certain attacks, as detailed in the arrest warrants, violated the principle of distinction. Such an assessment might include scrutiny, among other factors, of the uses of AI-DSS in targeting. I will first address this issue, then examine the alleged violations of international humanitarian law (IHL) that form the basis for the war crimes, and finally explore some questions that might arise in proving superior responsibility in these cases.
An Opportunity for the ICC to Look into the Use of AI-DSS in the Conduct of Hostilities
Although it remains speculative until charges are made public and (potentially) additional charges are brought, the ICC's involvement in the Netanyahu and Gallant case could mark the first time the Court has to confront war crimes involving the use of AI in the conduct of hostilities. AI decision-support tools have reportedly been integrated by Israel into its military operations in Gaza to assist in military decision-making. AI targeting systems analyse data from different sources and process them at speed. In this case, the 'assistance' provided by these systems included target generation and nomination at unprecedented scale and speed.
The information released about the Gospel and Lavender systems, as well as related research on AI-DSS (see Stewart and Hinds, Klonowska here and here, Woodcock, Nadibaidze et al and Greipl et al), sheds some light on how these systems work, how they are used, and the risks they carry.
In a nutshell, Gospel reportedly processes data (drone footage, satellite images, human intelligence, reports from previous operations) to provide Israeli forces with suggestions for buildings that might be potential military objectives ("private residences where people suspected of being Hamas or Islamic Jihad operatives live"). Lavender, based on machine learning processes, provides recommendations about human targets, i.e. "the likelihood that each particular person is active in the military wing of Hamas or PIJ", using a scale from 1 to 100.
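To make the reported design more concrete, here is a minimal sketch, in Python, of a generic scoring-and-threshold recommender of the kind described above: each person receives a 1-100 likelihood score, and everyone at or above a configurable cut-off joins a recommendation list. All names, scores, and the threshold are hypothetical illustrations; nothing here reflects the actual implementation of Lavender or Gospel.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    person_id: str
    score: int  # hypothetical 1-100 likelihood score, as in the reporting

def recommend_targets(candidates: list[Candidate], threshold: int = 80) -> list[Candidate]:
    """Return every candidate whose score meets an assumed threshold.

    Lowering the threshold widens the target pool but raises the share of
    misidentifications; that trade-off is inherent to any such system.
    """
    return [c for c in candidates if c.score >= threshold]

# Illustrative run with made-up data.
pool = [Candidate("A", 95), Candidate("B", 72), Candidate("C", 83)]
for c in recommend_targets(pool, threshold=80):
    print(c.person_id, c.score)
```

The point of the sketch is that the legally salient choices, namely the threshold and what counts as sufficient verification of a recommendation, are parameters set by humans rather than outputs of the model.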
Investigations by +972 reveal that, during the initial two weeks of the 2023 war, Israel Defense Forces (IDF) intelligence personnel conducted manual sample checks of targets identified by the Lavender system. These checks reportedly confirmed a 90% accuracy rate in the system's target identification. Following this, Lavender-generated target lists were treated as directives, no longer requiring cross-verification by military personnel. Prior to this shift, soldiers spent approximately 20 seconds on each target to confirm basic details, such as verifying that the target was male.
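As a back-of-the-envelope illustration of what such figures imply at scale: the 90% accuracy rate and 20-second review time come from the reporting above, but the target count below is an assumed number chosen purely for the arithmetic, not a reported figure.

```python
# Hypothetical arithmetic: what a 90% accuracy rate and 20-second checks
# imply at scale. num_targets is an assumption, not a reported figure.
accuracy = 0.90          # reported sample-check accuracy
num_targets = 10_000     # assumed number of generated targets (illustrative)
review_seconds = 20      # reported time spent verifying each target

expected_misidentified = (1 - accuracy) * num_targets
total_review_hours = num_targets * review_seconds / 3600

print(f"Expected misidentified targets: {expected_misidentified:.0f}")  # 1000
print(f"Total human review time: {total_review_hours:.1f} hours")       # ~55.6
```

Even on these assumptions, one misidentification in ten translates into hundreds or thousands of misidentified people once target generation is scaled up, which is why the error rate recurs in the mens rea questions below.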
Although information about how these systems work and have been used, whether for target generation or target validation, is limited, and the available details must be treated with caution, the Court might need to examine some key features of AI-DSS and their use. These include technical aspects, such as accuracy rates and the extent to which users can understand the outputs and target recommendations generated, as well as other features of human-machine interaction that have shaped how these systems inform targeting decisions, including automation bias and action bias.
Together with other elements, such as the choice of weaponry, the ICC will need to consider whether the attacks resulting from targeting decisions taken on the basis of AI-DSS amount to violations of IHL rules on the conduct of hostilities, which form the basis for the alleged criminal responsibility for war crimes.
Violations of the Principle of Distinction as a Basis for the Alleged War Crimes
The Court may need to consider whether certain attacks, potentially involving the use of the Gospel and Lavender systems, constitute violations of IHL. The Court has chosen to focus on the violation of the principle of distinction and on the charge of intentionally directing attacks against civilians. This is a welcome step. While much analysis of Israeli attacks has centered on violations of the principle of proportionality (Manea, Abraham), as noted by Daniele, "[m]any analyses seem to have so far unduly overexpanded proportionality into the realm of distinction, contributing to the current failure of most IHL and international criminal restraints". Given the mass killing of civilians, and probably also due to the known intricacies of establishing criminal responsibility for the war crime of intentionally launching disproportionate attacks (Art. 8(2)(b)(iv)), the Court has aptly focused here on violations of the principle of distinction.
This will require an inquiry into violations of both the principle of distinction and the principle of discrimination. The former mandates that the parties to an armed conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives, and shall accordingly direct their operations only against military objectives (Art. 48 AP I). The latter prohibits attacks which are not directed at a specific military objective, which employ a method or means of combat that cannot be directed at a specific military objective, or which employ a method or means of combat whose effects cannot be limited as required by AP I, and which, in each case, are of a nature to strike military objectives and civilians or civilian objects without distinction (Art. 51(4) AP I).
While attacks against civilians, as violations of the principle of distinction, are explicitly criminalised in Art. 8(2)(b)(i), indiscriminate attacks (as violations of the principle of discrimination) are not directly criminalised by the Rome Statute. However, the ICC has addressed this gap by subsuming indiscriminate attacks under the war crime of directing attacks against civilians (see Katanga at 802, Ntaganda at 921). These attacks refer to (or entail) a lack of distinction between lawful and unlawful targets and/or the employment of weapons or other means that are indiscriminate in nature. Indiscriminate attacks include, for example: attacks that are not directed against a specific military target; attacks employing indiscriminate weapons (i.e. weapons incapable of distinguishing between civilian and military targets); and attacks carried out without taking the necessary precautions to spare civilians, especially failing to seek precise information on the targets to be attacked.
Although closely related, these violations differ. While ‘the prohibition on attacking civilians or civilian objects is concerned with the identity of the target of attack, i.e., whether the target is a civilian or civilian object’, the principle of discrimination is concerned with the specificity of the target and the effects of attack, and is violated in cases of ‘absence or impossibility of a specific target or the impossibility of limiting the effects of the attack to its target’.
In other words, as pointedly put by Daniele, this provision "reminds lawyers and armed actors that distinction does not only prohibit direct and deliberate attacks on civilians and civilian objects, it also prohibits attacks renouncing to discern between them and lawful targets".
Finally, it should be noted that not only the actus reus but also the mens rea may differ. As I have discussed elsewhere, direct attacks against civilians typically require mens rea in the form, first and foremost, of dolus directus in the first or second degree, whereas indiscriminate attacks are more often associated with lower knowledge-based or risk-taking mental states.
Proving Superior Responsibility in Relation to Targeting Decisions Informed by AI-DSS
Mr Netanyahu and Mr Gallant allegedly hold criminal responsibility as civilian superiors under Article 28(2) of the Rome Statute.
Superior responsibility is a well-established principle in international criminal law, holding military and civilian leaders accountable for crimes committed by their subordinates where the leaders had effective control over them and failed to prevent or punish the crimes. Civilian leaders are held responsible under this doctrine when they "knew, or consciously disregarded information which clearly indicated" that their subordinates were committing or about to commit crimes "within the effective responsibility and control of the superior".
I would like to focus here on the underlying crimes.
The applicability of superior responsibility has been widely discussed in the expert literature in relation to the use of autonomous weapon systems (AWS) (Spadaro, SIPRI, Buchan and Tsagourias, Heller, Kraska) and is considered a useful accountability avenue for war crimes. Here the issue is different: the ICC might have to decide on superior responsibility for crimes committed with the use of AI-DSS, not AWS. Unlike the debate over whether superior responsibility applies to crimes committed by AWS, potentially considered as subordinates, a key requirement for the ICC to establish superior responsibility is that human subordinates must have committed an underlying offence. In fact, "superior responsibility is of derivative nature, meaning that it arises only if the subordinates committed a crime" (Spadaro). For superiors to be held responsible, the subordinate must commit an underlying offence in all its constitutive elements (Mettraux, 131). The commission of a criminal offence by the subordinate is a condition of the applicability of the doctrine, and a condition triggering the jurisdiction of the ICC or any other criminal court before which such charges are brought (Mettraux, 132). It is thus necessary to establish the actus reus and mens rea of the underlying crimes committed with AI-DSS.
To this end, some core questions will need to be addressed, especially in relation to mens rea:
- will the ICC consider only dolus directus in the first and second degree, or also the knowledge and acceptance of the (highly probable) risks of attacking civilians (dolus eventualis) as sufficient mens rea for the underlying crimes (see some reflections here)? Does a 10% error rate inherently imply knowledge that some civilians could be at risk of attack, and acceptance of that risk (see the numerical sketch after this list)?
- to what extent will failures to take feasible precautions in attack (for example, only a few seconds for target verification, the speed of target generation, the choice of weapons) support proof of intent and/or knowledge?
- should AI's unpredictability and accuracy rates be factored into determinations of criminal responsibility, particularly given the ex ante knowledge that some of these predictions would be incorrect, and is this proof of knowledge and intent?
- will the ex ante knowledge on the part of users of AI-DSS about the 10% error rate be considered relevant to the proof of intent?
- was this error rate known to civilian superiors? Will it be considered proof of knowledge that war crimes were being committed?
- will the ex ante knowledge about the AI-DSS's 10% error rate preclude any defence of mistake of fact which negates intent? Mistake of fact is a "false or erroneous representation of a fact, as a result of which the actor wrongly assumes to be behaving lawfully". Given a 10% error rate, it seems unlikely that a defendant could credibly argue ignorance of the potential civilian status of some targets.
- additional questions concern whether, by directing or encouraging the maximization of targets and fostering a high-pressure environment focused on rapidly generating more targets, actors in superior positions created the conditions that enabled the use of AI-DSS in ways that facilitated IHL violations (as noted by Woodcock). Could their guidance at the political and military levels have contributed, potentially with causal effects, to these crimes, so as to warrant more culpable forms of responsibility under Art. 25 of the Rome Statute?
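As flagged in the first question above, a minimal numerical sketch of the aggregate risk may help. Assuming, purely for illustration, that each recommendation independently carries a 10% chance of misidentification (a simplification; real errors are unlikely to be independent), the probability that at least one of N attacks strikes a misidentified person is 1 - 0.9^N, which approaches certainty even for modest N.

```python
# Illustrative only: treats each target recommendation as carrying an
# independent 10% misidentification risk (a simplifying assumption) to
# show how quickly the aggregate risk approaches certainty.
error_rate = 0.10
for n in (1, 10, 50, 100):
    p_at_least_one = 1 - (1 - error_rate) ** n
    print(f"N={n:>3}: P(at least one misidentified target) = {p_at_least_one:.3f}")
# Output: N=1: 0.100, N=10: 0.651, N=50: 0.995, N=100: ~1.000
```

On these assumptions, a user aware of the error rate would also be aware that, across a large target set, striking misidentified (and potentially civilian) persons was a near certainty; whether such awareness satisfies dolus eventualis or knowledge is precisely the open question.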
These are some of the questions that the ICC must address as it navigates the complexities of determining the responsibility of superiors for crimes committed by subordinates employing AI-driven targeting technologies.
A Key Opportunity for the ICC to Engage with AI in Warfare
Although it remains speculative until charges are made public and (potentially) further attacks are charged, the arrest warrants for Netanyahu and Gallant could provide an opportunity for the ICC to engage with modern warfare. Whether and how the ICC will choose to grapple with the technological features of AI targeting systems is an open question. Will it engage with the relevant technological aspects? How will it acquire the expertise necessary to understand them?
This engagement could offer clarity on how responsibility should be attributed in war crimes cases involving AI targeting technologies, and on how the current legal framework accounts for the challenges that arise at the technical level (opacity, error rates) and from human-machine interaction (among others, the speed of targeting and automation bias).
International criminal courts have always had a pivotal role in specifying IHL concepts and rules, and now could be the time for the ICC to look into modern warfare and violations of the rules on the conduct of hostilities involving AI-DSS. Ultimately, some of its considerations could also be relevant beyond this case. Although ICL is not the primary framework for regulating AI in warfare, ex post scrutiny through accountability assessments – and its potential impossibility due to technical characteristics and forms of human-machine interaction that preclude the attribution of responsibility for harmful incidents – remains significant: it could give some direction as to "what is permissible and what should not be allowed" and, ultimately, the trajectory of technological development in warfare that industry, policymakers and we as a society want to embrace (Acquaviva, Boulanin and Bo).