AI and Machine Learning Symposium: Humanity-Centric AI for Armed Conflict–A Contradiction in Terms?

[Elke Schwarz is a Lecturer in Political Theory at Queen Mary University of London and a researcher in ethics and technology. This post is part of our symposium on legal, operational, and ethical questions on the use of AI and machine learning in armed conflict.]

Artificial Intelligence (AI) in armed conflict is often considered under the cluster of ‘emerging technologies’, but the concept and field of study have their origins in the 1950s, with the pioneering research of figures like Norbert Wiener and Alan Turing. Since then, the field has seen various waves of progress, all linked to the idea of using sophisticated computational techniques to produce a ‘thinking machine’. Spurred on by recent advances in machine learning and neural network technologies, AI has indeed become, as John Brockman suggests, “today’s story – The story behind all stories”. The present shift of focus onto the global pandemic notwithstanding, this is no different for military organisations and the story of war, where the much-touted race for AI superiority is in full swing. At present, advanced militaries are directing their efforts toward crafting and implementing AI strategies for battlefield use and beyond, and the ICRC is right to include the use of AI and machine learning in armed conflict as a key challenge in our contemporary landscape.

Those states that have published specific military AI strategies, such as France or the US, typically stress two aspects from the outset: first, that AI is set to change the way wars are fought in a significant manner, and second, that AI will yield tremendous benefits to military organisations across a range of domains, such that those who lead in the field will acquire a significant competitive advantage. Whether and how AI will indeed alter the nature of warfare remains, at this stage, speculative. That AI may bring considerable advantages in terms of speed, efficiency, and process optimisation is clear. It is already easy to see how AI can streamline infrastructure and logistical processes, such as supply chain management, optimised communication, or the preventive maintenance of fighter jets, making unwieldy operations leaner and faster. However, where AI is destined for the battlefield or combat activities, its benefits are not as evident as the swelling chorus of AI advocates might suggest. Particularly pertinent here are questions of whether AI systems can, or indeed should, take on a significant role in critical selection and targeting functions, make lethal decisions, be involved in ‘accelerated sensor-to-shooter timelines’, play a crucial role in predictive suspect selection and classification, or otherwise assume decisive powers in areas where the ethical stakes are patently high.

In recent years, these questions have become the subject of heated debate, especially as they pertain to AI systems that employ sophisticated machine learning technology. Ethicists, activists, and many others are now adamant that we come to terms with the implications of AI for human oversight, agency, and responsibility in morally loaded spheres like those of conflict and war, yet efforts to engage with AI ethics in the military domain itself remain scant and under-developed. The US has pioneered a set of ethics principles for military AI use, crafted by the Defense Innovation Board (DIB) in October 2019 and adopted by the Pentagon shortly thereafter. This is perhaps a good start; however, as Lucy Suchman notes, there is a worry that the often vague and abstract formulations of the Principles serve as a veneer of ethical propriety rather than a robust set of limitations for a highly dynamic field of technology for military purposes. Other efforts often note that ethics are important, but a thorough and close engagement with what this could, or should, entail tends to be absent from most AI strategies offered by states. Nonetheless, important efforts are underway, by civil society groups and other non-governmental actors, to stress that, at the very least, the human must retain a meaningful level of control and, as the ICRC suggests, “AI must remain a tool that must be used to serve human actors and augment and improve human decision making, not replace them”.

The limits to human control over AI

This is an important warning and a crucial aim. A human- and humanity-centred approach to technology is required if we are to safeguard a minimum of dignity and humanity in armed conflict. But realising this will require that we take seriously both the technological and the human limitations inherent to any AI-human ecosystem. As I have written elsewhere, when AI, machine learning, and human reasoning form an ecosystem, the possibility for human control is limited – as humans we have a strong bias in favour of our computing machines, and we often lack the knowledge needed to reason well enough to assert proper control over an action, particularly when the utility of the action depends so heavily on speed. In theory, this may one day be surmounted through the application of principles for the ethically responsible design of AI applications, but I am wary of putting too much emphasis on this.

As is the case with all technologies, or indeed artefacts, “the potential for harm lodges not solely in the inanimate object, but in the myriad of ways that people interact with them”, as Sheila Jasanoff explains. This is particularly relevant against the background of advancements in neural network machine learning algorithms, which are deliberately designed to operate beyond the capacity of the engineer, let alone the user, to grasp the computational processes at work. Within such technological systems, the human is no longer in full control and instead operates from inside a web of relations that prioritise technological parameters such as speed, optimisation, and efficient decision-making, and within which ideas of good or bad are always already fixed. The human might stipulate the ideal outcome but cannot control or necessarily comprehend the pathway to that outcome, nor are they likely to ‘understand’ the deliberative process, let alone in real time. With complex machine learning through neural networks, we are dealing with technologies that are, as David Gunkel notes, “deliberately designed to exceed our control and our ability to respond or answer for them”. This is further complicated when we consider how, for example, digital interface design practices can imperceptibly mould the user’s experience, knowledge, and sense of time along the lines of technological rationalities.

The moral limits to AI decision making

In assuming that AI can be used ‘as a tool’, I wonder whether we risk overlooking an irresolvable tension between the aim and character of AI on the one hand, and its intended use for (ethical) decision-making in armed conflict on the other, even with a human in a position of control. The very logic of AI rests on classification and the codification of life into computable data; it employs modes of ‘thinking’ that are entirely foreign to human deliberative processes. Yet this stands in stark contrast to the complex, slow, and irresolvable character of ethical thinking. Solving ethically challenging tasks with AI, including the identification of potential targets, even with the most sophisticated machine learning techniques, constitutes an abdication of that uniquely human task – to weigh, and to feel the weight of, a morally difficult decision. Morally relevant decision-making cannot and should not be delegated to machines, nor should we allow such difficult decisions to be obscured by the smooth functioning of technology or the moral relief AI systems might seem to provide in conditions of radical uncertainty during armed conflict. This would be to give up on our humanity in the name of supposed innovation.

Of particular concern in this regard is the dominance of the private sector in designing AI technologies for military use. Today, the Silicon Valley ethos – to move fast and break things – contains little of the caution raised by some of the founders of AI technology. We would do well to heed the words of Norbert Wiener, the father of cybernetics, who warned in the 1960s that coupling together “two agencies essentially foreign to each other” – the human being and the technological system – may herald a future not of progress but of disaster. ‘Move fast and break things’ should not become a new military motto.

Print Friendly, PDF & Email
Topics
Featured, General, International Criminal Law, International Human Rights Law, International Humanitarian Law, Organizations, Public International Law, Symposia, Themes, Use of Force
No Comments

Sorry, the comment form is closed at this time.