AI and Machine Learning Symposium: Why Detention, Humanitarian Services, Maritime Systems, and Legal Advice Merit Greater Attention

[Dustin A. Lewis is the Research Director at Harvard Law School Program on International Law and Armed Conflict. This post is part of our symposium on legal, operational, and ethical questions on the use of AI and machine learning in armed conflict.]

I am grateful for the invitation to contribute to this online symposium. The preservation of international legal responsibility and agency concerning the employment of artificial-intelligence techniques and methods in relation to situations of armed conflict presents an array of pressing challenges and opportunities. In this post, I will seek to use one of the many useful framings in the ICRC’s 2019 “Challenges” report’s section on AI to widen the aperture further and to identify or amplify four areas of concern: detention, humanitarian services, uninhabited military maritime systems, and legal advice. While it remains critical to place sufficient focus on weapons and, indeed, on the conduct of hostilities more widely, we ought to consider other (sometimes-related) areas of concern as well. Drawing on research from an ongoing Harvard Law School Program on International Law and Armed Conflict project that utilizes the analytical concept of “war algorithms,” I will sidestep questions concerning the definitional parameters of what should and should not be labeled “AI.” (A “war algorithm” is defined in the project as an algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.) Instead, I will assume a wide understanding that encompasses methods and techniques derived from, or otherwise related to, AI science broadly conceived.

The ICRC’s report helpfully brings attention to the fact that “[t]he possible uses of [AI-related] ‘decision-support’ or ‘automated decision-making’ systems are extremely broad: they range from decisions about whom – or what – to attack and when, and whom to detain and for how long, to decisions about overall military strategy – even on use of nuclear weapons – as well as specific operations, including attempts to predict, or pre-empt, adversaries” (emphasis added). Building on that insightful premise, at least four thematic and functional areas have, in my view, received too little systematic attention to date, not least within and among governments, but also (with the exception of some important early analyses) from civil society, technologists, scholars, and the ICRC itself.

1. Detention

As noted above, the ICRC’s 2019 “Challenges” report contemplates the potential that AI methods and techniques might be used to help parties to armed conflict decide whom to detain and for how long. Some background: actuarial and algorithmic techniques and methods purport to afford more accurate predictive capabilities and more objective and consistent risk assessments by complementing human evaluations with statistical approximations. Yet any rush to adopt these technologies for armed conflicts ought to be tempered by the manifold legitimate concerns regarding them.

Initial analyses of possible challenges and opportunities pertaining to the field of international humanitarian law (IHL) draw on experiences in certain domestic criminal-law systems. With varying degrees of success, those systems have utilized data-shaped risk assessments, including in bail, sentencing, and parole decisions. Concerns around the quality, representativeness, understandability, and reviewability of the underlying data and algorithmic processes — to say nothing of applying computational components as proxies for legally relevant criteria more generally — are front and center in these debates.
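
To make that mechanism concrete, the following is a deliberately minimal, hypothetical sketch (written in Python, with invented feature names, records, and outcome labels, and not modeled on any tool actually in use) of how a statistical risk score might be produced to complement a human assessment. The concerns about data quality, representativeness, understandability, and reviewability noted above attach to every step of such a pipeline.

```python
# Hypothetical illustration only: a toy actuarial risk score of the kind
# discussed above, in which a statistical model trained on past records
# produces a probability that is meant to complement, not replace, a human
# assessment. All feature names, records, and outcomes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical records: each row is a past case, each column a coded
# feature (e.g., number of prior incidents, age band, months already held).
X_train = np.array([
    [2, 1, 0],
    [0, 3, 1],
    [5, 2, 0],
    [1, 0, 2],
    [4, 1, 1],
    [0, 2, 3],
])
# Invented outcome labels the model is asked to predict (1 = adverse outcome).
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# A new case coded with the same features. The model returns a probability,
# not a decision; the evaluative and normative judgment remains with the
# human reviewer, and the score is only as sound as the data behind it.
new_case = np.array([[3, 1, 1]])
risk_score = model.predict_proba(new_case)[0, 1]
print(f"Estimated risk score: {risk_score:.2f}")
```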

Several of the issues arising in these domestic criminal-law contexts seem likely to be exacerbated if this suite of technologies is adopted in relation to armed conflicts. For example, the ongoing systematic collection and analysis of data, and the generation and application of efficient and equitable algorithms, may present particularly difficult challenges amid conflict. Furthermore, to raise two doctrinal examples, IHL envisages assessments as to whether a protected person may be interned or placed in assigned residence either in an international armed conflict because “the security of the Detaining Power makes it absolutely necessary” or in a situation of occupation because the Occupying Power considers it necessary “for imperative reasons of security.” Transposing those evaluative decisions and normative judgments partially or fully into algorithmically generated assessments by way of data-shaped probabilities in concrete cases presents a far-from-frictionless exercise, to say the least. Moreover, the rules for detention relating to non-international armed conflict are, in many respects, even less clear than their international-armed-conflict counterparts. That would appear to leave greater space for those employing AI technologies to rely upon potentially problematic domestic criminal-law examples without enough international law to guide them.

For now, it remains uncertain whether (and, if so, under what conditions) computational procedures may on the whole help facilitate or impede — or perhaps transform — respect for IHL and other applicable international legal provisions on deprivation of liberty in armed conflict. In the meantime, as more and more States consider employing these technologies, it is important to address these potential concerns now.

2. Humanitarian services

As with so many thematic or functional areas, AI techniques and methods may hold certain promises but also perils concerning humanitarian services. The fundamental aspiration is that these technologies might make the processes aimed at forecasting, identifying, and prioritizing populations and individuals in need, and at arranging and providing assistance to them, more efficient, equitable, and effective. Under this optimistic view, AI techniques and methods might not only help realize a faster and more extensive provision of humanitarian aid but also, in doing so, help guard against — as several IHL provisions require — “adverse distinction” in identifying and assessing needs and in delivering those services.

But several of the assumptions underlying this aspiration might not withstand scrutiny. For example, the potential utility of current machine-learning technologies is often characterized as hinging in large part on sufficient data, well-specified models, efficient algorithms, and sizeable computing power. Yet developing and adopting reliable, representative, and equitable data assemblages is difficult even in relatively favorable circumstances. And this set of tasks will almost certainly be more difficult, perhaps infeasible, in armed conflicts, at least in light of current technological affordances and limitations. Additional concerns may arise around the potential distance between the entities developing the technologies (many of which are based in Silicon Valley or other geographic locations far removed from hostilities) and the affected populations, as well as around the range of possible issues for the rights of the members of those populations (including privacy).

In sum, it seems that, at a minimum, considerable caution is currently called for before employing AI technologies for humanitarian services. At the same time, it does not seem warranted to exclude the possibility that these technologies might help realize humanitarian objectives in this area.

3. Uninhabited military maritime systems

Significant recent technological developments in autonomous navigation in maritime environments raise a range of critical concerns about uninhabited military maritime systems (UMMS), not least where those systems incorporate an automatic-target-recognition-and-attack capability. These concerns may be especially acute when seen against the backdrop of several existing and potential legal disagreements among States.

From a legal perspective, a fundamental issue concerns whether UMMS should be characterized as warships, torpedoes, naval mines, or something else. Under the definition set out in the United Nations Convention on the Law of the Sea of 1982 (and, at least arguably, in its customary-law counterpart), warship status is conferred based in part on the ship both being “under the command” of a duly commissioned government officer and being “manned by a crew which is under regular armed forces discipline.” As a recent article explains, unilateral attempts to expand the definition of warship — or, indeed, of any ship or vessel — to encompass certain UMMS may be resisted by many coastal States. Such disagreements might (among other things) raise challenges concerning the exercise of navigational rights or belligerent rights for warships that have long been recognized as meeting that treaty definition. Another set of questions relates to whether a UMMS armed with warheads and capable of detecting and launching an attack against a target qualifies as a torpedo. If so, the UMMS would need to sink or become harmless once it has missed its target or run its course. As for naval mines, it is uncertain whether existing legal regulations — which focus largely on how militaries employ the weapon, rather than on the characteristics of the weapon itself — extend to a maritime platform that incorporates explosives such that the platform becomes a weapon. The definitional stakes here are high: according to the International Court of Justice, the presence of a sufficiently large number of naval mines without the consent of the coastal State could constitute (among other things) an unlawful use of force.

Considering current technological trajectories and the potential for consequential legal disagreements among States, it may be important to settle the legal characterizations of UMMS, perhaps especially before an armed conflict involving these diverse systems breaks out.

4. Legal advice

Finally, it seems that the practice of IHL itself will not escape the reach of current AI technologies, at least for parties with access to certain advanced technologies. Developments in such areas as computational models of legal reasoning and representation of legal concepts in ontologies and type systems might entail (further) significant — indeed, possibly profound — consequences for the practice of law, including IHL.

The underlying notion is that by constructing models and feeding law-related data into them (such as case law, court submissions, and legal memoranda), a computer system could, in effect, generate assessments that assist in at least aspects of the exercise of legal judgment and the formulation of legal advice. One key concern is whether the role of lawyers will in practice devolve to validating decisions arrived at by others through computational procedures. As far as I can discern, few, if any, entities and individuals involved in the provision of IHL-related legal advice are currently racing to implement these technologies.
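
By way of illustration only, the following is a minimal, hypothetical sketch (in Python, with fabricated documents and a fabricated query, not drawn from any real advisory system) of the kind of retrieval step that legal-analytics pipelines often involve: prior law-related texts are vectorized and ranked against a new question, producing material that might assist, but could not replace, the formulation of legal advice.

```python
# Hypothetical illustration only: a toy retrieval step over law-related texts.
# The "memoranda" and the query below are fabricated; no real documents or
# advisory systems are represented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memoranda = [
    "Assessment of proportionality for the planned strike on the depot.",
    "Advice on periodic review of internment for imperative reasons of security.",
    "Memorandum on precautions in attack and effective advance warnings.",
]

# Vectorize the prior texts and the new question using the same vocabulary.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(memoranda)
query_vector = vectorizer.transform(
    ["What precautions must be taken before the attack?"]
)

# Rank prior memoranda by similarity to the question. A human legal adviser
# would still have to judge whether any retrieved material is apposite.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, text in sorted(zip(scores, memoranda), reverse=True):
    print(f"{score:.2f}  {text}")
```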

But even if IHL legal advisers never systematically adopt innovations in legal analytics and related technical fields as such, that does not mean that data-shaped algorithmic assessments will not significantly influence the practice of IHL. Indeed, in many important respects, they already have. (Whether AI as such is at issue here turns in part on the definitional scope of AI.) For example, as a recent dissertation illustrates, several militaries have long sought to give practical effect to certain IHL-mandated assessments in part by drawing on computational analyses. These include assessments concerning various aspects of the conduct of hostilities — including as part of the “targeting cycle” — related to various IHL provisions, including on distinction, proportionality, and precautions in attack.

In closing, in a 2015 United States domestic case, admittedly in a field substantively far removed from IHL, an appeals court reasoned that “an individual who […] undertakes tasks that could otherwise be performed entirely by a machine cannot be said to engage in the practice of law” (emphasis added). Perhaps that notion might afford a useful point of departure for further analysis for those of us concerned about the practice of IHL, now and in the future.
