29 Apr AI and Machine Learning Symposium: Confronting Complexity through Collective Defence
[Nadia Marsan is Senior Assistant Legal Adviser in the NATO Office of Legal Affairs. The views expressed in this article are hers alone and do not necessarily represent the views of NATO or its Allies. This post is part of our symposium on legal, operational, and ethical questions on the use of AI and machine learning in armed conflict.]
The ICRC, in its latest report on International Humanitarian Law (IHL) and the Challenges of Contemporary Armed Conflict, includes artificial intelligence and machine learning (AI/ML) among the fields of recent technological development that introduce significant complexities into contemporary armed conflicts. With AI/ML set to become a focus of military capability development, an organization like NATO, built upon the collective defence of 30 member states, could be a valuable forum in which to address some of the complexities raised by the ICRC.
AI/ML-enabled military technology presents a number of operational opportunities and legal challenges. The ICRC notes that compliance with international humanitarian law is likely to be more difficult to ensure and assess in the military use of AI/ML because of the very character of the technology, including the potential loss of human control, a lack of transparency and traceability, increasing unpredictability, and a general risk of escalatory effects that could affect civilians and civilian infrastructure. These challenges underlie the call for shared parameters that can help ensure compliance with international law in military applications of AI/ML technology.
Towards Legal Clarity
There is general agreement that the existing body of international law governing relations between states, including international humanitarian law, applies to the military use of new technology. This reflects a recognition that this body of law has proven sufficiently adaptable to accommodate military innovation in the past. The difficulty in adapting to these new military innovations, however, is that much of international law is based on state practice, and not just any state practice, but practice demonstrating that the state acts out of a sense of legal obligation. Because the military use of AI/ML technology is so novel, there is very little state practice to draw on.
Although state practice is rather limited, this does not mean there is a dearth of state perspectives on the use of AI/ML in defence contexts. In fact, NATO Allies are increasingly publishing national strategies on the use of AI/ML in defence, which give some indication of how these challenges are being conceptualized and approached nationally. France, for example, has made clear that its use of AI in matters of defence will abide by three major principles: respect for international law, sufficient human control, and the responsibility of human command. The United States has likewise set out a policy framework for the development of AI in matters of defence. In a recent publication on “AI Principles”, the Defense Innovation Board (DIB) established five principles to govern the military development of AI/ML systems, emphasizing that such development must be responsible, equitable, traceable, reliable, and governable. The work of individual Allies contributes to the development of policies at NATO, as collective endeavours rely on a convergence of views among member nations.
Interoperability as an Enabler
There is a need for multinational discussions to assess how these high-level principles can be applied in practice. Multinational military cooperation can help define the responsible use of AI/ML in the military while preserving states’ inescapable desire to maintain a strategic edge. Such cooperation is the very essence of NATO’s collective defence mission. To implement collective defence effectively, NATO relies on interoperability: the military forces of the 30 Allies must work together with common equipment, agreed terminology, standards, and processes to achieve common goals. With its focus on setting common standards and ensuring the interoperability of forces, NATO can bring greater clarity to the development and use of AI/ML-enabled military technology. The challenge for NATO Allies will be to maximize interoperability so as to avoid creating a two-tier alliance. In AI/ML terms, this requires militaries to digitize much of their activities so that AI/ML technologies can be applied consistently, with shared principles and norms embedded from the outset. For many Allies, this will mean prioritizing defence investment accordingly.
Underlying this effort towards the interoperability of forces lies a wide diversity of legal views and approaches in core areas of international law, as well as in national and regional regulation, the most relevant example being the rules governing the use and exploitation of data. Despite these differences, consensus decisions must be legally well founded, and this can be quite challenging. The interoperability of forces is possible only because the Alliance is built upon a set of fundamental values enshrined in its founding treaty, which provide a framework for defence cooperation and enable a kind of “legal interoperability”. Commitments to the rule of law, restraint, resilience, and mutual assistance and cooperation are among the values that have evolved as the cornerstones of NATO decision making and that provide an overarching framework for achieving greater legal and operational interoperability. Indeed, this commitment to the core principles of the Washington Treaty makes the legal diversity among Allies a relative strength for NATO, bringing a wide range of cultural perspectives to support and inform the collective action of Allies. Against the background of these shared values, NATO can provide an excellent venue in which both Allies and Partners can collectively and pragmatically work to clarify the complexities and legal ambiguities arising from the use of AI/ML-enabled military technology.