Inhuman-in-the-loop: AI-targeting and the Erosion of Moral Restraint


[Neil Renic is a Researcher at the Centre for Military Studies at the University of Copenhagen, focusing on the changing character and regulation of armed conflict, and emerging military technologies such as armed drones and autonomous weapons.

Elke Schwarz is Reader in Political Theory at Queen Mary University of London, specialising in the ethics of war and the ethics of technology, with an emphasis on unmanned and autonomous/intelligent military technologies and their impact on the politics of contemporary warfare.]

A recent report by the Israeli magazine +972 on the use of artificial intelligence systems for targeting operations in Gaza by the Israel Defense Forces (IDF) has shocked many and reignited fears of a dystopian future of AI warfare. As the Guardian wrote on December 1, 2023, the IDF’s likely deployment of an AI platform in the current conflict, evocatively named the ‘Gospel’, “has significantly accelerated a lethal production line of targets that officials have compared to a target production ‘factory’”.

The volume and character of such violence raise important questions about Israel’s increasingly criticized approach to battlefield targeting. They should also, however, encourage reflection on the morality of our growing dependence on AI in war.

Israel’s use of AI in military operations is not a new revelation. The IDF dubbed the 11-day conflict in 2021 “the world’s first AI war”, and Israel’s elite intelligence unit 8200 has been increasingly forthcoming about the AI-targeting systems it has in operation. Nonetheless, in light of the staggering Palestinian death toll in the wake of the October 7 attacks by Hamas, reports of an increased Israeli reliance on AI-powered targeting systems are noteworthy, not least because they shed light on the possible and likely future of AI warfare.

The IDF has at least three additional AI systems in use, with equally expressive names like ‘Alchemist’, ‘Fire Factory’ and ‘Depth of Wisdom’, which collect and analyze large amounts of signals intelligence, geographical intelligence, human intelligence and data from other surveillance sources. The systems are layered to work in tandem, producing targets at speed and at mass scale – hundreds of (ostensibly high-quality) targets that can be actioned within minutes. This, proponents claim, will enable more effective and precise military strikes, enhancing mission effectiveness while mitigating civilian harm.

Israel’s actual use of these systems, however, reveals a more brutal reality. Contrary to the humane and restrained vision of AI war so often marketed by its architects, Israel’s Gospel system exposes the potential of AI to facilitate systems of unjust and illegal “mass assassination”.

It is worth noting that Israel’s “Gospel” does not constitute a fully autonomous lethal weapon system as such: the decision to take out a suggested target rests with a team of human operators, lawyers and other decision makers. For some, this retention of humans within ‘the loop’ is a comfort, providing a necessary safeguard against fully robotic and dehumanized killing. Such comfort is misplaced. What matters in such systems is not “humans” per se, but rather humanity – a commitment to decency and restraint, and a rejection of overly expansive categories of targetable enemies.

AI-enabled targeting systems, even those that retain humans-in-the-loop, generate significant moral challenges. Such systems, by design, draw on vast volumes of data, virtually guaranteeing an opaque process of data crunching, analysis, and target proposition. Human operators are unlikely to have a clear overview of what data such systems have available, what they are trained on, what the specific parameters of the algorithmic calculations are, what the success rates are or how the benchmarks for those success rates are set, or what the frequent updates to these systems do to their accuracy in identifying and targeting combatants. These same humans typically privilege action over non-action within a time-sensitive human-machine configuration. This has likely proven true in the latest Israeli campaign against Gaza. Gospel has been favored for its capacity to generate targets almost automatically, at rates that far exceed those previously possible. Target expansion, not refinement, is the point and the outcome of such systems.

Israel’s military campaign against Gaza has fallen radically short of the most basic moral and legal standards of war – at the time of writing, over 18,000 Palestinians have been killed by the IDF, with tens of thousands more wounded and displaced. This is a humanitarian catastrophe, brought about by a military that has explicitly renounced a more restrictive approach to targeting. In advancing this criticism, however, it is also worth asking just how restrained AI-enabled warfare can ever be.

A core appeal of systems such as Gospel is speed – targets are created at an unprecedented pace and scale. How meaningful can human control be, and remain, within such a process? In order to operate at an effective standard, humans would need to be trained to understand the AI system – its parameters, its strengths and weaknesses in the complex and unpredictable setting of war, and, importantly, its iterative processes. How confident can we be that humans within such systems will retain the inclination, or even the capacity, to verify the appropriateness, accuracy and relevance of target decisions? 

In a recent article, we raise precisely this concern in the context of human-AI configurations in war. We argue that the systematic mode of killing enabled by such AI-powered targeting systems facilitates an unjust erosion of targeting standards and morally devalues those subjected to violence. As we further argue, however, within such systems, the moral agency of those who practice violence is also at risk.  

To highlight these risks, we draw on Herbert C. Kelman’s work on mass atrocities. He recognized that a “historically rooted and situationally induced” hostility – often along racialized lines – forms a substantive element of systematic mass killing. The evidence of this in the Israeli response to the October 7 attacks is extensive. As Kelman further advises, however, other factors are also relevant in explaining the loss of moral inhibitions against violence. In his 1973 work on collective violence, he identified “authorization”, “routinization”, and “dehumanization” as important contributors to the weakening of moral restraint.

The first of these, “authorization”, relates to situations “in which a person becomes involved in an action without considering the implications … and without really making a decision” (p. 38) – in other words, deferring to authority. Through authorization, control is surrendered to authoritative agents bound to larger, often abstract, goals that “transcend the rules of standard morality” (p. 44). For those tasked with the actual delivery of violence, agency is lost, or delegated, to central authorities, who, in turn, cede their authority to still higher powers.

The second process Kelman highlights in the erosion of moral restraints is routinization. Whereas authorization overrides otherwise existing moral concerns, processes of routinization limit the points at which such moral concerns can, and will, emerge. Routinization has two functions: first, it reduces the necessity of decision making, thus minimizing occasions in which moral questions may arise; and second, it makes it easier to avoid the implications of the action since the actor focuses on the details rather than the meaning of the task at hand.

The third process, and the one that arguably connects most closely with the target objectification already discussed, is dehumanization. Processes of dehumanization work to deprive victims of their human status; “to the extent that the victims are dehumanised, principles of morality no longer apply to them and moral restraints against killing are more readily overcome” (p. 48). Importantly, though, the same processes that degrade the moral status of the victim may also dehumanize perpetrators.

Israel’s Gospel system provides a stark warning of the harms possible when a disregard for battlefield restraint becomes actualized through AI-infused systems of industrialized mass killing. But we shouldn’t delude ourselves – correcting the former will not automatically free us from the challenges inherent to the latter. AI-enabled targeting systems, fixed as they are to the twin goals of speed and scale, will forever make difficult the exercise of morally and legally restrained violence.
