Artificial Intelligence in the Battlefield: A Perspective from Israel


Dr. Tal Mimran is an associate professor at the Zefat Academic College, and the Academic Coordinator of the International Law Forum of the Hebrew University. He is also a fellow at the Federmann Cyber Security Research Center in the Law Faculty of the Hebrew University, and the head of a research program on digital human rights at Tachlith Institute. 

Gal Dahan is a researcher at the Tachlith Policy Center. He is pursuing a Master of Laws (LLM) at the Hebrew University of Jerusalem, where he is also a co-editor of the ‘Hukim’ Law Review. Gal also serves as a coach for the Israeli team participating in the 2024 Jessup International Law Moot Court Competition.

Israel is unique in its willingness to discuss openly its use of AI-based tools on the battlefield. Recently, high-ranking officers in the Israel Defence Forces (IDF) acknowledged the growing role of AI-based tools in Israel’s military arsenal. This trend is also evident in the Israel-Gaza war of 2023-2024, during which the IDF has deployed AI-based systems for defensive needs, command and control, the collection, processing, and management of data, and offensive purposes.

On the one hand, the introduction of AI-based tools to the military sphere can have value in improving existing capabilities. On the other hand, the introduction of novel tools that are not specifically regulated by international law raises considerable legal and moral questions and further exacerbates the complexities of warfare – ‘the province of uncertainty’. In an attempt to contribute to the growing literature evaluating these two sides of the spectrum, and all that lies between them, this blog looks at the Israeli experience as a test case, to reflect on the proper way forward.

The AI Trend in the IDF

The State of Israel is a tech-savvy actor, and it harnesses its technological capabilities as part of its diplomatic toolbox to establish itself as a leader in the design of international tech governance. Israel’s need for technological supremacy derives from the threats it faces, as is evident from the increase in cyber-attacks against targets in Israel, especially during the 2023-2024 Israel-Hamas war.

As the experience of Israel shows, the global and widespread integration of AI, accelerated by generative AI tools, has reached the military domain. In particular, the IDF employs AI applications in its: (1) Proactive Forecasting, Threat Alert, and Defensive Systems; and (2) Intelligence Analysis, Targeting, and Munitions. This trend intensified during the Israel-Hamas war of 2023-2024, inviting consideration of the role of international law in this phenomenon.

Proactive Forecasting, Threat Alert, and Defensive Systems 

AI-based tools can detect, alert to, and occasionally preempt catastrophic scenarios, and can contribute to effective crisis management. As such, much like NATO, the IDF harnesses AI technologies to improve disaster response (e.g., through analysis of aerial images to identify risks and victims). One notable system in use is the Alchemist system, which appears to possess both defensive and offensive capabilities. It integrates data onto a unified platform and can identify targets and promptly inform combatants of threats, such as suspicious movements. The system was already deployed in 2021, during Operation “Guardian of the Walls”.

Furthermore, the Iron Dome is an Israeli missile defense system known for its life-saving capabilities in safeguarding critical infrastructure against rockets launched into Israeli territory. In the 2023-2024 Israel-Hamas war, this system helped keep casualties low notwithstanding rocket launches from Gaza, Lebanon and other areas (such as Syria and even Yemen), in the face of a wide range of threats, including drones and other small, low-flying objects.

Another defense system, known as “Edge 360”, has been developed by Axon Vision. This AI-based system, installed in armored vehicles currently operational in Gaza, detects potential threats from every angle and promptly alerts the vehicle operator. Finally, the IDF also uses AI in the service of border control. The October 7 attack raised several red flags concerning this system and highlighted the fact that technological tools can, at the end of the day, only complement human capacity.

Intelligence Analysis, Targeting, and Munitions 

Integrating AI-based tools to analyze high volumes of data is essential to deal with the overwhelming influx of information that characterizes the modern battlefield. One of the decision-support systems (DSS) the IDF uses is the “Fire Factory,” which can analyze extensive datasets, including historical data about previously authorized strike targets, enabling the calculation of required ammunition quantities, the proposal of optimal timelines, and the prioritization and allocation of targets. Operationally, it amalgamates phase 2 (target development) and phase 3 (capabilities analysis) of the targeting cycle.

Another system that has stirred recent controversy is the Gospel, which helps the IDF military intelligence division improve its recommendations and identify key targets. As early as 2021, during Operation “Guardian of the Walls,” the system generated 200 military target options for strategic engagement during the ongoing operation. The system executes this process within seconds, a task that would previously have required the labor of numerous analysts over several weeks.

A third notable system is the “Fire Weaver” system, a novel tool developed by a private company – Rafael. This networked sensor-to-shooter system links intelligence-gathering sensors to field-deployed weapons, facilitating target identification and engagement capabilities. The Fire Weaver system focuses on processing data and selecting optimal shooters for different targets based on factors such as location, line of sight, effectiveness, and available ammunition. It is aimed at improving the capacity to work simultaneously with various players operating in concert, in order to promote precision, minimize collateral damage, and mitigate the risk of friendly fire.
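
The actual logic of these systems is not public, but the factors named above (location, line of sight, effectiveness, available ammunition) suggest a fairly conventional scoring-and-assignment problem. The following is a minimal, purely hypothetical sketch of what such a shooter-selection step could look like; the class and function names are our own assumptions and do not describe Rafael’s implementation.

```python
# Hypothetical sketch only: a toy scoring function over the factors named in public
# descriptions of sensor-to-shooter systems. Not based on any actual implementation.
from dataclasses import dataclass


@dataclass
class Shooter:
    name: str
    distance_km: float        # distance to the designated target
    has_line_of_sight: bool   # can this asset engage the target directly?
    effectiveness: float      # estimated probability of achieving the intended effect (0..1)
    rounds_available: int     # remaining ammunition


def score_shooter(s: Shooter) -> float:
    """Higher is better; assets without line of sight or ammunition are excluded."""
    if not s.has_line_of_sight or s.rounds_available == 0:
        return float("-inf")
    # Illustrative weighted trade-off: prefer effective, nearby, well-stocked assets.
    return (0.6 * s.effectiveness
            + 0.3 * (1.0 / (1.0 + s.distance_km))
            + 0.1 * min(s.rounds_available, 10) / 10)


def recommend(candidates: list[Shooter]) -> Shooter:
    """Return the highest-scoring candidate as a recommendation for a human operator."""
    return max(candidates, key=score_shooter)
```

Even in this toy form, the output is a recommendation presented to a human operator rather than an engagement decision, a distinction on which much of the legal analysis below turns.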

Finally, a recent report by +972 Magazine claimed that the IDF deployed an AI system named “Lavender,” which allegedly played an important role especially during the early stages of the 2023-2024 Israel-Hamas conflict. This system is designed to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad as potential targets. The report suggested that human verification in this context was allegedly restricted to discerning the gender of the target, with an average duration of 20 seconds per target before an attack, and that the system was mistaken in approximately 10% of cases. If accurate, this is alarming; indeed, in a recent post, Agenjo suggested that such a scenario raises concerns regarding potential violations of human dignity, by depersonalizing targeted individuals and by circumventing human involvement in the targeting process.

Still, it should be noted that the process described in the +972 article is a very preliminary one in the chain of creating, and authorizing, a military target. This is because the decision made by an intelligence officer is later delivered to a target room, in which legal advisers, operational advisors, engineers, and more senior intelligence officers review the suggested target before approving it (and, at times, reject it). The use of “Lavender” is hence limited to the intelligence-gathering phase, after which the suggested insight still needs to be verified in the target room by, inter alia, legal advisers, who will evaluate whether a target should be attacked based on considerations of distinction, proportionality and other applicable IHL rules.

Challenges Associated with AI on the Battlefield

The UN General Assembly has recently expressed its concerns regarding the emergence of new technological applications in the military domain, particularly those associated with artificial intelligence, which pose serious challenges “from humanitarian, legal, security, technological and ethical perspective”. A pivotal concern is the appropriate level of human involvement required in decision-making processes (in, on, or off the loop). Human involvement matters for three crucial purposes: improving accuracy and precision in decision-making, enhancing legitimacy, and ensuring accountability. This is especially important in the context of AI systems deployed for targeting, such as Lavender, the Gospel, Fire Weaver, and Fire Factory.
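
The in/on/off-the-loop vocabulary describes where the human sits relative to the system’s decision cycle. A minimal, purely illustrative sketch of the three control modes (all names are hypothetical and not drawn from any fielded system) might look as follows.

```python
# Illustrative only: a toy rendering of the in/on/off-the-loop distinction.
# The function and type names are hypothetical conventions for this sketch.
from enum import Enum


class Decision(Enum):
    APPROVE = 1
    REJECT = 2


def human_in_the_loop(recommendation, ask_human) -> bool:
    # "In the loop": the system cannot act until a human affirmatively approves each recommendation.
    return ask_human(recommendation) == Decision.APPROVE


def human_on_the_loop(recommendation, human_vetoed) -> bool:
    # "On the loop": the system proceeds on its own unless a supervising human intervenes in time.
    return not human_vetoed(recommendation)


def human_off_the_loop(recommendation) -> bool:
    # "Off the loop": fully autonomous operation with no human involvement at all.
    return True
```

The discussion that follows concerns, in essence, whether the third mode can ever be squared with the standards of IHL.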

It seems that, as of today, there is no legal justification for a situation in which an AI system autonomously targets individuals without any human involvement, as the threshold set by IHL is that of the reasonable military commander (namely, a human commander, evaluated by standards that do not apply to a computerized AI-based system). Accordingly, the ICRC has noted that preserving human control and judgment is essential. Under Israel’s approach, the IDF commander currently holds the ultimate decision-making authority when it comes to targeting. Notwithstanding, there has been recent criticism regarding the extent and effectiveness of human involvement in the target selection process driven by systems like the Gospel and Lavender (see for example here and here).

In this context, it is worth noting the customary principle of precautions. This principle imposes a positive obligation on those planning an attack ‘to do everything feasible to verify‘ the military nature of individuals or objectives. It also entails the duty of constant care, which requires that, in the conduct of military operations, constant care be taken to spare civilians and civilian objects.

Indeed, the rapid pace at which an AI system can generate targets, coupled with the commander’s limited time for a comprehensive review, raises the concern that this situation may fall short of the obligation to exhaust all ‘feasible’ means to avert harm to civilians and may not align with the duty of constant care. A recent blog published on Opinio Juris suggested that if military personnel are unable to “delve deep into the target”, it is difficult to see how these systems contribute to compliance with the principle of precautions and the duty of constant care. The blog further claimed that, in general, the speed and scale at which such AI systems produce targets, coupled with the complexity of the data processing involved, may make human judgment impossible or meaningless.

However, it should be recalled that the IDF’s formal stance on this matter seems to address this concern. As noted, the utilization of targeting AI systems by the IDF, like the Gospel and Lavender, is confined to the intelligence-gathering phase, in the early stages of the “life cycle” of a target: later stages include corroboration and oversight of the intelligence gathering and evaluation stages, including review by legal advisers, who verify not only the factual assertions made but also the appropriateness of an attack in terms of distinction, proportionality, precautions, and other relevant rules of international law. Indeed, the IDF clarified that the selection of a target for attack by the Gospel will undergo an additional, separate examination and approval by several other authorities (operational, legal, and intelligence), out of a desire to ensure meaningful human involvement in targeting decision-making processes.

Another concern relates to the explainability issue, or the “black-box” phenomenon. The inability of AI-based systems, as a general and inherent limitation of the technology, to provide a clear and comprehensible explanation of their decision-making processes can hinder investigations of military incidents, and as such undermine accountability and limit the ability to minimize the risk of recurrent mistakes. In this context, the IDF clarified that the Gospel system provides the intelligence researcher with accessible and understandable information upon which its recommendations are based, enabling independent human examination of the intelligence material. Another notable, related challenge is the phenomenon called “automation bias“, namely the tendency to over-rely on, or over-trust, AI output. While IDF commanders can choose to disregard recommendations from the Gospel, it is nevertheless challenging to avoid automation bias, especially during intense hostilities which require decision-making at an accelerated pace and under constant pressure to act.
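
In practical terms, the stated answer to the black-box concern amounts to attaching the evidentiary basis to every recommendation, so that the reviewing analyst can examine it independently rather than trust a bare score. A minimal, hypothetical sketch of that idea (the field names and structure are our own illustration, not a description of the Gospel) could look like this.

```python
# Hypothetical sketch: a recommendation that carries its own human-readable basis,
# so a reviewer can check the underlying material before approving or rejecting it.
# Field names and structure are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    target_id: str
    confidence: float                                            # model confidence, shown rather than hidden
    supporting_items: list[str] = field(default_factory=list)    # references to the source intelligence items

    def rationale(self) -> str:
        """Human-readable summary the reviewing analyst can check against the raw material."""
        items = "; ".join(self.supporting_items) or "no supporting material recorded"
        return f"Target {self.target_id}: confidence {self.confidence:.0%}, based on: {items}"
```

Presenting the underlying material alongside the score is also one practical way to counteract automation bias, since a reviewer who must read and confirm the cited material has something concrete against which to disagree.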

The Way Forward – Review of New and Emerging Technologies is a Must

States are limited in their choice of weapons and means or methods of warfare. In order to verify that new capacities are in line with international law, States are required under Article 36 of the First Additional Protocol to the Geneva Conventions (AP I) to evaluate new weapons, means or methods of warfare prior to their deployment in practice. Of relevance for our discussion, “means of warfare” is a broad term, extending to military equipment, systems, platforms, and other associated appliances used to facilitate military operations. It seems that tools deployed for offensive actions, like the Gospel, Fire Factory, Fire Weaver, and Lavender, constitute new means of warfare that ought to be subject to legal review under Article 36.

While Israel is not a party to AP I, and there is a debate as to the customary status of Article 36, it is important to recall that General Comment 36 of the Human Rights Committee took the approach that ensuring the protection of the right to life invites prophylactic impact assessment measures, including a legality review of new weapons and means of warfare. Such a review should be conducted in three stages:

First, it is important to determine whether the use of a particular means of warfare is prohibited or restricted by a treaty or under customary international law. Regarding military AI tools, the State of Israel has not ratified a treaty specifically prohibiting the use of AI technology in general or in military applications, as none exists at present. Furthermore, it seems that there is currently no customary prohibition on the deployment of AI in military contexts (other than general principles of IHL, like distinction, and specific rules, like those safeguarding protected sites).

Second, there is a need to determine whether employment of the system might infringe general prohibitions under international law (like the protection of the environment). AI tools deployed by the IDF, like the Gospel and Fire Factory, do not directly infringe such prohibitions, as they merely support decision-making processes.

Third, the State must consider the means of warfare in light of the ‘Martens Clause’, which underscores the need to consider ‘the principles of humanity’ and ‘the dictates of public conscience’. In its advisory opinion on the Legality of the Threat or Use of Nuclear Weapons, the International Court of Justice affirmed the Martens Clause’s importance as an “effective means of addressing the rapid evolution of military technology”.

We suggest prohibiting scenarios in which AI systems make autonomous decisions without human input. This is required given that the legal standard of behavior in IHL refers to a human decision maker, and also in light of developments in other fields, most notably human rights (where discussion of the right to a human decision maker is on the rise). In particular, we advocate for insistence on meaningful human involvement in the utilization of AI systems by the IDF, which includes, among other measures, separate approval by another authority in the chain of command, along with additional examinations in the target room by legal advisers and other experts. This mechanism should apply in all cases, without exception. Also, as discussed above, it is important to provide explanations to the people operating the systems, in order to avoid a “black-box” situation in which it is impossible to supervise the systems, to properly question their suggestions (when needed), and to mitigate concerns regarding possible bias.

Concluding Thoughts

There is room for prudence when deploying new military capabilities. First, it is critical to evaluate the legality of new technologies through impact assessment measures, for example under Article 36 of AP I. Second, designers and operators must be conscious of inherent risks like the lack of explainability and biases. This does not mean that we suggest attributing criminal accountability to designers, as the ultimate decision maker is the military commander; rather, we believe that the better designers understand how the system’s limitations might affect its operation, the better they will be able to address these concerns upfront and, hopefully, alleviate them. Training of the system’s operators, and of those who rely on it, is also key, and it must include technical, ethical and legal aspects.

In a world that is becoming more divided in ideals and values, the ability to find common ground on the road ahead will be pivotal not only for the battlefield of the future, but also for the maintenance of international peace and security. We are living through a volatile time that includes quantum leaps in technology, and the need to rise above narrow interests and considerations is as important as it has ever been.

We believe that initiatives like REAIM and the “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy” should lead the way ahead – to a path of cooperation, of common ground, and of a future in which States focus on the welfare of humanity rather than on dystopian, technology-driven violent clashes undertaken without a proper legal framework or benchmark to regulate them. In order to move toward the creation of a framework for the regulation of AI on the battlefield, there is a need to engage not only States but all affected stakeholders. The multi-stakeholder approach is rooted in the understanding that the involvement of all relevant players in a meaningful and transparent way is required to achieve progress and ensure the legitimacy of norms and institutions. We must recall that the development of technology is a complex sphere of operation, in which roles and responsibilities are divided among different stakeholders, including States, academia, civil society and the private sector. Any step ahead must be inclusive, transparent and allow for meaningful participation of all of these stakeholders.
