Symposium on Military AI and the Law of Armed Conflict: Artificial Intelligence and the Rupture of the Rationalist Assumption of IHL Decisions – A Move Towards Emotions in IHL?

[Anna Rosalie Greipl is a Research Assistant at the Academy of International Humanitarian Law and Human Rights where she works for the Digitalization of Armed Conflict projects. She is also a Ph.D. researcher at the Graduate Institute of International and Development Studies.]

Legal debates on the military use of artificial intelligence (AI) systems are replete with questions about the role of human emotions. A key illustration is the debate on regulating the use of lethal autonomous weapon systems (LAWS). Proponents of LAWS argue that the absence of emotion – anger, fear or rage – renders these systems less error-prone in military decision-making. Critics, however, argue that the inability to feel emotions such as empathy or compassion, or to respect human dignity, makes the military use of these systems inappropriate.

Irrespective of one’s positioning, the pivotal role played by emotions cannot be denied. However, legal practitioners and academics have engaged little – and have possibly even resisted engaging – with the role of emotions in these legal discussions. This is arguably a missed opportunity. A better understanding of the role of emotions in discussions about international humanitarian law (IHL) could help address some of the regulatory challenges surrounding AI today – in particular, the difficulty of regulating the respective roles of humans and AI systems in IHL-related military decisions.

The Rationalist Assumptions Driving Current Legal Debates 

Emotions are a defining feature of warfare. Any individual taking part in violent encounters – such as armed conflicts – experiences intense and complex emotional reactions. IHL is often seen as a framework for mitigating these very forces in the military decision-making process. Indeed, it is assumed that interpreting the law requires an objective and value-neutral reasoning process to reach an optimal outcome. Sustaining this rationalist assumption, IHL thus imagines a rational human (without emotions) as uniquely suited to make the decisions that arise during armed conflicts.

However, as the reasoning of the proponents of LAWS showcases, the arrival of AI systems (free from emotions) in contemporary warfare is increasingly challenging the notion that a human decision-maker is best suited to make IHL-related decisions. While opponents of LAWS emphasize the need to keep a human emotional component in these decisions, they build that reasoning on ethical or moral considerations situated outside IHL. 

This continuing difficulty in renouncing the rationalist assumption of IHL raises an important question: can we truly maintain that, in practice, emotions play no role in IHL-related decision-making processes?

Emotions vs Reason – A False Dichotomy?

Recent findings in neuroscience and psychology suggest that we cannot. Quite the contrary: emotions are an essential element of reason. They enable humans to make decisions by helping them choose between competing values at any given moment. The renowned neuroscientist Antonio Damasio, for instance, has demonstrated that brain damage can significantly reduce patients’ ability to experience emotion, which in turn diminishes their capacity to reason and make decisions. The upshot: no reasonable or rational decision-making is possible without emotions.

Some legal scholars – contributors to the so-called ‘Law and Emotion’ scholarship – are becoming more receptive to these insights. They accept that emotions and the practice of international law are unavoidably intertwined, and, moreover, that emotions are not necessarily (or even usually) a harmful influence on legal decision-making. Emotions drive our behavior in support of reason and enable rational decision-making. In other words, instead of presuming the ‘destructiveness’ of emotions, we should consider their added value.

Recognizing IHL’s Emotional Reality – A Way Forward

These findings challenge the (false) dichotomy between reason and emotion within current legal discourse. In short, the idea that IHL is best applied without emotions is increasingly questioned, giving way to the idea that emotions are essential in any IHL-related decision. In fact, these decisions often require context-specific evaluative legal assessments. For example, assessing the proportionality of an attack demands an evaluative balancing of military necessity and humanitarian considerations that is not amenable to quantification.

These insights also offer a new perspective on how to address the challenges raised by human-AI interaction in military decision-making.

First, it becomes difficult to view AI systems as a one-to-one replacement for humans on the grounds that they are better at making military decisions. In turn, this refocuses the debate away from the question of how to preserve ‘meaningful human control’ over critical military decisions and towards the question of how to preserve ‘human (value) judgment’ in these decisions. This shifts the effort from finding technical solutions for achieving meaningful human control to deciding which decision-making tasks should be delegated to AI systems.

It also demands recognition that IHL requires complex, context-specific qualitative assessments involving a higher level of moral commitment. This supports the idea of placing strict constraints on the use of AI systems in IHL-related military decision-making tasks. While this may seem to align with the views of LAWS opponents, our reasoning means that it is no longer necessary to draw on ethical considerations situated outside IHL. Instead, a strong argument can be built within IHL itself.

This is not to suggest that we should downgrade the role AI systems can play in military decision-making processes. There are myriad ways in which non-invasive AI systems can support – but not replace – humans in making emotionally complex IHL decisions in extreme situations. Indeed, research is ongoing into how non-invasive AI systems can support humans in regulating their emotions to make better decisions, for example by enhancing their awareness of, and giving them tools to take ownership of, their emotions. Hence, these AI systems could enhance military decision-making capabilities in compliance with IHL obligations, ultimately strengthening the protection of civilians and their livelihoods during armed conflicts.

Concluding Remarks

As we have seen, the latest findings in neuroscience and psychology question our attachment to the rationalist assumption that continues to dominate IHL debates. Integrating these findings into legal analysis may offer a pathway not only to challenging LAWS proponents with an argument built on IHL, but also to new ways of imagining human-AI interaction in military decision-making processes. Developments in other fields suggest that AI systems offer great potential to help human decision-makers channel their emotions in a way that enhances their ability to comply with their IHL obligations. Although there are technical challenges to be addressed, these developments signal a strong potential for the military application of AI systems without contributing to a dehumanization of IHL or a reduction of accountability in IHL decision-making processes.

In the end, beyond offering an affirmative perspective on the role that AI systems can play in military decision-making processes, I invite us all to recognize the vital role of emotions in human life and within IHL itself. Ultimately, what we should be most afraid of are IHL-related military decision-making processes without emotions. 
