Lavender Unveiled: The Oblivion of Human Dignity in Israel’s War Policy on Gaza

[Adrián Agenjo is an LL.M graduate from the London School of Economics and a Researcher at Pompeu Fabra University. He also works at Irídia – Human Rights Defence Centre in Spain.]

The views expressed in this article are the author’s alone and do not represent any institutional affiliation.

The Israeli military campaign on Gaza continues to defy all expressions of human dignity. A revealing journalistic investigation (cited throughout the text) has uncovered the deployment of a sophisticated AI-driven system, known as “Lavender”, which has been instrumental in guiding Israel’s intensive bombing campaigns in the region. The revelation of the Lavender system’s role in these operations marks a significant escalation in the automation of military targeting processes, raising critical ethical and legal questions. This mechanized approach to conflict, while not unprecedented in the arsenal of modern military technologies employed by the State of Israel (take the Gospel or the War Dome systems as examples), has been extensively scrutinized under the lens of IHL. My intention here is not to replicate such legal analysis; rather, I aim to argue how the utilization of Lavender demonstrates the continuation of Israel’s policy of oblivion towards human dignity in the war on Gaza, manifested through two key aspects: depersonalization and the elimination of human intervention in targeting.

Oblivion of Human Dignity through Depersonalization

The principle of human dignity stands as a fundamental pillar of the protection of individuals in modern international law. The preamble of the United Nations Charter declares a commitment to "faith in fundamental human rights and in the dignity and worth of the human person". This ethos is further embodied in Article 1 of the Universal Declaration of Human Rights, which asserts that "all human beings are born free and equal in dignity and rights". The preamble of the ICCPR likewise refers to dignity as the source of the rights it covers and, despite not being listed as a substantive right, dignity is intertwined with the other prerogatives. It is also indisputable that the protection of human dignity is one of the main aims of both IHRL and IHL, and that their commonality and synergistic relationship is, at least partially, based on that principle (p. 312). Furthermore, the imperative to uphold human dignity is recognized in several national constitutions (see, for instance, Article 1 of the Basic Law for the Federal Republic of Germany of 1949) and in domestic case law (for a review, see McCrudden and Carozza).

Human dignity, while subject to various interpretations across religious and philosophical doctrines, fundamentally revolves around the notion of the inherent and immeasurable worth of each individual, according to Schlink (p. 632). AI target-selection technologies such as Lavender, bloodless and without morality or mortality, cannot fathom the significance of using force against a human person and cannot do justice to the gravity of that decision. Unlike human decision-makers, these technologies cannot engage in appeals to humanity or exercise discretion based on contextual, emotional and ethical nuances. Indeed, this is also one of the main concerns raised about similar systems, such as Lethal Autonomous Weapons, as Asaro synthesizes (pp. 693-704).

The utilization of Lavender reduces individuals to "objects to be destroyed". The 2.3 million Palestinians in Gaza subjected to AI surveillance are treated as a plague, as a nuisance that must be got rid of, stripping them of their intrinsic dignity. This dehumanization is exacerbated by the statistical nature of the Lavender system's operations. By assigning every individual in Gaza a rating based on their perceived likelihood of being a militant (see paras. 13-48 of the investigation, referring to "Step 1: Generating Targets"), the system reduces human lives to numerical probabilities. As soon as they enter the system, people are transformed into bits and data. As the investigation recalls, operators acknowledge that "everything was statistical, everything was neat — it was very dry" (para. 33).

Even though alleged "internal checks" revealed a 10% margin of error in Lavender's calculations, the system continues to operate with clinical detachment. This disregard for the consequences of inaccuracies underscores a systemic failure to uphold the principle of human dignity in the pursuit of military objectives. It undeniably jeopardizes the historically accepted concept of human dignity, which holds that humans may not be treated as objects or means, a notion that is universally shared, even in war (exemplified, for instance, by the prohibition of human shields).

Oblivion of Human Dignity through the Bypassing of Human Judgment

The Lavender system raises concerns regarding the violation of human dignity not only by depersonalizing the individuals targeted, but also by circumventing human involvement in the targeting process. As individuals are targeted on the basis of pre-set rules and abstract hypotheticals determined by algorithms, the nuanced consideration of individualized circumstances is disregarded. This mechanized approach to decision-making fundamentally undermines human dignity by depriving individuals of the right to have their fate determined through a deliberative process involving human judgment.

Even granting the exigency of making quick decisions against combatants during armed conflict, it does not follow that such decisions may be made in an abstract or theoretical manner, with no human authorization (as Ulgen argues, pp. 14-15). The possibility of a deliberative process somewhere down the line, where a change of mind and of fate remains possible, is all but ruled out in advance by the introduction of the Lavender system, since human control is sacrificed in the process. The investigation highlights a concerning reality in which human personnel serve merely as a "rubber stamp" for the decisions made by AI systems (para. 4), devoting minimal time to verifying targets before authorizing bombings.

In spite of the evident margin of error in Lavender's calculations, human oversight is limited to superficial checks, such as verifying the target's gender, rather than a thorough assessment of the target's legitimacy. As detailed in the investigation (paras. 45-47), the supervision protocol before targeting suspected militants consists of confirming the AI-selected target's gender, on the assumption that female targets are erroneous and male targets are appropriate. According to an interviewed official,

“I would invest 20 seconds for each target at this stage and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time. If [the operative] came up in the automated mechanism, and I checked that he was a man, there would be permission to bomb him, subject to an examination of collateral damage” (para. 47).

This reduction of human involvement to a cursory gender verification process reflects a systemic failure to incorporate human judgment.

As Asaro recalls, justice requires a human duty to "consider the evidence, deliberate alternative interpretations and reach an informed opinion". For instance, irreducible life sentences have been found to violate human dignity because they amount to "writing off" the person, deciding on a merely abstract basis and leaving no space for change or hope. Certainly, the structure of law and the processes of justice require the presence of a human being as a legal agent, and this lack of human control leads to an inherent violation of the human right to dignity. In no way can the Lavender system be said to comply with this standard.

Another pressing concern regarding Lavender lies in its opacity, as the algorithms it employs are shielded from public scrutiny (whether through legal protection or simple inaccessibility). This lack of accessibility obstructs comprehension and oversight of the targeting process, a critical shortcoming where international law mandates investigations into violations of its own rules (pp. 199-210). Furthermore, as AI increasingly relies on neural networks, and in particular on deep learning algorithms that inherently lack transparency, users frequently struggle to grasp the underlying reasoning behind each particular decision made by these systems. While some scholars, such as Chehtman, argue that introducing human oversight may not fully address these challenges, it would, at the very least, provide more information on why and how the target selection was conducted and finalized.

In this regard, algorithm-based decision-making is often touted as objective and impartial, but writing unbiased algorithms is a complex task, and programmers may, by mistake or even by intentional design, build in misinformation, racism, bias and prejudice. The potential for discrimination is exacerbated by the opacity of these programs and by a social tendency to assume that a machine-made decision is more likely to be objective and efficient. While there has been significant scholarly and, increasingly, policy-focused work on creating fair algorithms, there are no firmly established international standards for audit, accountability or transparency. I do not deny that there may be (limited) benefits to introducing AI and algorithmic targeting systems into modern warfare, such as those listed by Heller (pp. 31-49). Nevertheless, none of these advantages can plausibly justify the use of Lavender, a system in which all of the concerns mentioned above are significantly amplified.

Concluding Remarks

While the world watches the ongoing assault on Gaza, broadcast globally, and witnesses the profound devastation inflicted upon the Palestinian people, Israel's deadly imagination for destructive and unlimited warfare becomes evident. The Lavender system, characterized by the depersonalization of individuals, emerges as yet another inventive method of extermination. This dehumanizing tactic erases the intrinsic value and dignity of human life.

As AI and robotics become increasingly integrated into warfare, they have the power to redefine our approach to conflict in the future. However, it is imperative that we question the necessity of proceeding down this path. Instead, the adoption of such technologies should prioritize the protection of human life, aiming to minimize casualties and civilian harm, rather than perpetuating further dehumanization.

In the narratives of futuristic books, films and media, a timeless battle plays out between AI and humanity. Yet what unfolds before us today is a stark departure from fiction: a chilling reality in which humanity wields AI as a weapon against itself, against its own kin, or rather, against certain peoples defined by their national origin and belonging to a group. The use of Lavender in Israel's military campaign on Gaza not only evokes profound ethical and legal concerns but also strikes at the very core of human dignity. By reducing individuals to mere data points and relinquishing human judgment in favour of mechanized decisions, these technologies strip away the essence of humanity, leaving behind a desolate landscape where lives are measured in statistics, devoid of intrinsic worth and value. This serves as yet another piece of evidence of Israel's lawless and brutal war policy. One must ask: what will be the tipping point in humanity's struggle against its own dehumanization?
