Symposium on Forensic and Counter-Forensic Approaches to International Law: Human-in-the-Loop or Human-in-the-Dark? Algorithmic Targeting and the Evidentiary Collapse of Genocidal Intent in Gaza

[Madeeha Majid is a legal consultant with OpenNyAI – Agami and an international lawyer based in Srinagar, Kashmir]

On 19 July 2024, Judge Dire Tladi, in his powerful declaration to the Advisory Opinion on Legal Consequences arising from the Policies and Practices of Israel in the Occupied Palestinian Territory, including East Jerusalem, emphasized that Israel’s policies and practices reveal ‘a clear intent to dominate the Palestinian population.’ Yet as this conflict becomes increasingly entangled with Artificial Intelligence-driven warfare – particularly Israel’s reported use of algorithmic targeting systems like Lavender – the legal terrain for proving genocidal intent has grown unstable. These systems combine advanced algorithms, machine learning, and real-time data analysis to carry out military operations with a high degree of autonomy.

Establishing genocidal intent is the linchpin of any genocide prosecution: dolus specialis, the specific intent to destroy a group in whole or in part, must be attributed to the accused. This may be done through direct evidence, that is, statements by high command or official documents, or inferred from patterns of conduct.

But what happens to this ‘intent’ when kill decisions are generated by algorithms, executed at scale, and rarely subjected to meaningful human review? What happens when the ‘human-in-the-loop’ becomes nothing more than a rubber stamp?

Algorithms and the Displacement of Human Judgment

It is no exaggeration to say that current AI systems do not merely respond to reality: they create it. Israel’s Lavender – an AI-based decision-support system (DSS) – generates ‘kill scores’ from data inputs, often labeling Palestinians as targets through vague or sweeping criteria. These targeting recommendations are generated by aggregating data from predictive policing software, biometric databases, social media, and surveillance feeds, treating these (often flawed) inputs as objective signals.

The resulting risk of misclassification means civilians are targeted not for who they are (civilian versus combatant), but for how a machine interprets patterns. Though Israeli officials claim that essential human approval remains in place, accounts suggest this ‘oversight’ is reduced to seconds-long checks that barely reflect independent judgment. In practice, these AI systems personalize warfare by combining surveillance data to predict behavior, status, or targetability, often without meaningful safeguards.
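To make the misclassification risk concrete, here is a deliberately toy sketch of score-based targeting. Every feature name, weight, and threshold below is invented for illustration and bears no relationship to any real system’s internals; the point is only that thresholding a weighted sum of proxy signals converts noisy data into a binary ‘target’ label.

```python
# Purely illustrative: all feature names, weights, and thresholds are
# invented; nothing here reflects any real system's internals.

FEATURE_WEIGHTS = {
    "sim_card_changes": 0.30,      # proxy easily triggered by displacement
    "contacts_flagged": 0.40,      # guilt-by-association from a social graph
    "visited_flagged_site": 0.20,  # location co-presence, not conduct
    "group_chat_overlap": 0.10,    # membership in a large group chat
}
THRESHOLD = 0.5  # an arbitrary cut-off turns a noisy score into a 'target'

def kill_score(signals: dict[str, float]) -> float:
    """Weighted sum of proxy signals, each scaled to [0, 1]."""
    return sum(FEATURE_WEIGHTS[k] * signals.get(k, 0.0) for k in FEATURE_WEIGHTS)

# A displaced civilian who changed SIM cards and sits in a large group
# chat can cross the threshold without any combatant conduct:
civilian = {"sim_card_changes": 1.0, "contacts_flagged": 0.6, "group_chat_overlap": 1.0}
score = kill_score(civilian)
print(f"{score:.2f} ->", "flagged" if score >= THRESHOLD else "not flagged")  # 0.64 -> flagged
```

Nothing in such a pipeline asks whether the person is a combatant; it asks only whether their data resembles past positives, which is precisely the gap between pattern-matching and the legal category of a lawful target.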

As the International Committee of the Red Cross (ICRC) warns, a human signature does not guarantee genuine oversight if the person cannot meaningfully question the algorithm’s logic or understand how the outputs (in this case, kill scores) are generated. It has been observed that under conditions of time pressure, humans tend to defer to machines – a phenomenon known as automation bias. Given this, AI systems should not be the final authority over lists that decide life and death. Yet in Gaza, these systems operate openly even as they are tied to civilian deaths on a staggering scale.

This has led many states to call for a ban on AI systems that target people, arguing that such systems do not merely enhance human decision-making with technology but delegate lethal authority to a weapon system. That delegation undermines human accountability and raises profound humanitarian, human rights, legal, security, technological, and ethical concerns.

Importantly, under international humanitarian law (IHL), including Article 36 of Additional Protocol I, all new weapons and means of warfare must be reviewed for compliance with legal principles such as distinction and the Martens Clause. IHL also demands constant attention to distinction and proportionality, standards that AI systems cannot reliably meet. In 2024, the UN Secretary-General warned that delegating lethal decisions to AI erodes both accountability and humanity in war. The ICRC has likewise noted that DSSs risk reducing humans to box-tickers. In Gaza, that risk is already real.

The Black Box Problem and Evidentiary Rupture

Major concerns stem from the opacity of these AI systems, which are shielded by secrecy; even their operators may not understand how an output is generated. This is termed the Black Box Problem, where we may ‘know the inputs and outputs but can’t see the process by which a system turns the former into the latter.’
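A minimal sketch, using randomly initialised stand-in weights rather than any real model, shows why. The inputs and the output of a learned model are observable, but its internal parameters answer no question a court would ask:

```python
# Minimal illustration of the black-box problem. The weights below are
# random stand-ins, not any real model; the point is that logging the
# input and the output reveals nothing about the mapping between them.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # 'learned' first-layer weights
W2 = rng.normal(size=(1, 8))   # 'learned' output weights

def opaque_score(x: np.ndarray) -> float:
    """A two-layer network: in a real system, W1/W2 would be millions of
    unitless numbers that answer no question of the form 'why him?'."""
    hidden = np.tanh(W1 @ x)
    z = float((W2 @ hidden)[0])
    return 1 / (1 + np.exp(-z))  # sigmoid squashes to a score in (0, 1)

x = np.array([0.9, 0.1, 0.4, 0.7])  # four anonymized input 'signals'
print("input:", x, "-> score:", round(opaque_score(x), 3))
print(W1[0])  # inspecting the 'reasoning' yields rows of decimals, not reasons
```

An investigator shown only the input and the score sees a decision; shown the weights as well, they still see no rationale that could be cross-examined.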

This opacity has legal consequences. If investigators cannot access logs, training data, or algorithmic reasoning, they cannot prove whether genocidal intent was present, manipulated, or disguised; unlawful harm may then be dismissed as ‘blameless,’ like a malfunctioning missile, rather than the result of policy or intent. In practice, the evidentiary architecture collapses.

Matthew Scherer also cautions that the autonomy, opacity, and unpredictability of certain AI systems can erode foundational concepts such as attribution, control, and responsibility. According to the Rome Statute and ICJ jurisprudence, genocidal intent must be specific, demonstrable, and tied to identifiable actors. But with algorithmic targeting, who carries that intent? The engineer who wrote the code? The scientist who collated the datasets? The commander who approved its deployment? The operator who clicked ‘confirm’? Or does intent dissipate entirely within a techno-bureaucratic assemblage?

While multiple actors may share criminal intent under a common design, Joint Criminal Enterprise (JCE) liability requires that all participants act in furtherance of a shared criminal plan and possess both the intent to commit the core crimes and any specific intent required by law. In the case of AI-mediated violence, where such intent is difficult to attribute, there have been urgent calls to facilitate traceability and attribution to all human decision makers.
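What would such traceability require in practice? One hedged sketch, with every field name invented, is a decision record that binds each machine recommendation to a model build, a data snapshot, and named humans:

```python
# Hypothetical sketch of an auditable targeting-decision record. All
# field names are invented; the point is what attribution would need,
# not what any deployed system actually logs.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TargetingDecisionRecord:
    model_version: str        # which algorithm build produced the recommendation
    training_data_hash: str   # provenance of the data behind the score
    input_snapshot_id: str    # the exact inputs the model saw
    score: float              # the machine's output
    reviewed_by: str          # a named human reviewer, not an anonymous role
    review_seconds: float     # was oversight twenty minutes or twenty seconds?
    approved_by: tuple[str, ...] = ()  # the full chain of human approval
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

With records of this kind, investigators could tie an outcome back to a dataset, a model version, and identifiable individuals; without them, intent dissipates into the techno-bureaucratic assemblage described above.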

Paola Gaeta highlights that when autonomous systems strike, both state and individual responsibility are strained. Beyond obscuring intent, they complicate causation, disrupting the evidentiary chain linking intent, conduct, and outcome. This raises serious challenges for meeting the evidentiary thresholds required for genocide prosecutions under existing international criminal law.

Can AI Commit Genocide?

Legally, AI remains a ‘tool’ under the command of human actors, who may, in theory, be held accountable under doctrines like command responsibility. However, international criminal law requires intent, not merely outcome. As a result, even large-scale atrocities may escape liability if no individual actor can be shown to possess genocidal intent. This legal gap – mass violence absent provable intent – strikes at the core of genocide prevention.

Genocide is not a single act but a process, and in Gaza, that process is being executed through machines as part of a broader campaign of destruction. AI systems may not possess intent, but they can arguably be said to function as a mid-level perpetrator: not deciding to kill, but systematically operationalizing the intent of those who do.

The UN Commission of Inquiry has already found ‘reasonable grounds to believe’ that Israel’s targeting practices amount to collective punishment and possibly crimes against humanity. If AI systems (like Lavender) are central to those practices, they cannot be dismissed as neutral tools. They are part of the infrastructure of atrocity.

Counter-Forensics and the Reconstruction of Intent

Here, counter-forensics becomes critical. The 2024 Anatomy of a Genocide report mapped destruction across Gaza through satellite imagery, survivor testimony, and geospatial data, demonstrating patterns of systematic targeting of civilians. This shows that even if algorithms obscure individual culpability, collective outcomes still reveal discriminatory violence.

Journalists, OSINT researchers, and survivors are constructing counter-forensic archives of algorithmic violence, reconstructing intent not through confessions, but through patterns of destruction.

In other conflicts, such tools have been pivotal. In ICC cases like Lubanga and Al-Mahdi, geospatial imagery helped verify systematic targeting. Similarly, the UN Fact-Finding Mission on Myanmar exposed the role of big tech (particularly Facebook) in the Rohingya genocide, showing how Meta’s algorithms amplified hate speech. Alongside witness testimony, the Mission relied on counter-forensic methods to demonstrate the platform’s role in inciting violence.

Objective evidence – such as that produced by Forensic Architecture using advanced geospatial tools – can likewise be critical in challenging denialist narratives and supporting legal investigations in Gaza. A spatial analysis of Israeli military conduct since October 2023 may indicate that attacks are not random but systematically orchestrated to produce cumulative and compounding harm. This would undermine claims that AI tools and kill lists are used to minimize civilian casualties, instead pointing to a deliberate policy of targeting civilian infrastructure, potentially amounting to a coordinated campaign of extermination.
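One standard way to test ‘not random but systematically orchestrated’ is a spatial-statistics test of complete spatial randomness. The sketch below applies the Clark-Evans nearest-neighbour ratio to synthetic placeholder coordinates; a real analysis would use verified, geolocated incident data and correct for edge effects.

```python
# Sketch of a spatial-pattern test on strike coordinates using the
# Clark-Evans nearest-neighbour ratio. Coordinates here are synthetic
# placeholders, not real incident data.
import numpy as np

def clark_evans_ratio(points: np.ndarray, area: float) -> float:
    """Observed mean nearest-neighbour distance divided by the value
    expected under complete spatial randomness (CSR).
    R ~ 1: consistent with chance; R << 1: clustered / concentrated."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)      # ignore self-distances
    observed = dists.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)   # CSR expectation for this density
    return observed / expected

rng = np.random.default_rng(1)
area = 100.0                             # a 10 km x 10 km study window
random_pattern = rng.uniform(0, 10, size=(200, 2))          # strikes by chance
clustered_pattern = rng.normal([3, 7], 0.3, size=(200, 2))  # concentrated strikes

print(round(clark_evans_ratio(random_pattern, area), 2))     # ~1.0
print(round(clark_evans_ratio(clustered_pattern, area), 2))  # far below 1.0
```

A ratio persistently far below 1 across districts and time windows would be the quantitative counterpart of the qualitative claim that harm is cumulative and compounding rather than incidental.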

Notably, these practices do not replace the law, but demonstrate how intent can be pieced together even when scattered across lines of code, data servers, and fragmented chains of approval. In this light, Gaza is not only a humanitarian catastrophe but a testing ground – compelling a fundamental rethinking of evidence, intent, and accountability in the era of digital warfare.

Naming the Violence, Rethinking the Law

Judge Tladi’s warning now confronts a stark and unprecedented reality: we have entered a dangerous phase in which weapons systems powered by artificial intelligence are actively being deployed in the commission of genocide in Gaza. Genocidal intent does not disappear in this context, but risks being obscured within layers of technical infrastructure. Naming the violence is therefore not optional: it is a necessary act that triggers accountability.

In Gaza, the genocide is written not just in policy, but in algorithmic code. The question of accountability must now extend beyond ‘who gave the order?’ to include ‘who designed the system?’ Failures and misuses of such systems may implicate a variety of actors, yet attribution becomes increasingly difficult when those actors are far removed from the actual attack. Still, high-risk AI systems must not be allowed to obscure accountability behind the façade of automation.

The crisis in Gaza makes clear that, without new evidentiary tools and legal adaptations, the human is not in control, but in the dark. The dystopian scenario of killer algorithms is no longer speculative – it is unfolding in real time, with humans nominally in the loop, and machines increasingly at the forefront of mass violence and atrocities. 
