“How Do You Feel Today?” Exploring IHRL and IHL Perspectives on Law Enforcement and Military Uses of Emotion Recognition Technology
[Francesco Paolo Levantino is a PhD Candidate in International and European Human Rights Law at Sant’Anna School of Advanced Studies (Pisa, Italy).]
Introducing “Emotion Recognition”
Among many other innovations, Artificial Intelligence (AI) has paved the way for “inferring” human emotions through the automated analysis of physical, physiological, or behavioural characteristics. The purported capabilities and potential applications of emotion recognition technology (ERT) are not limited to the detection of “basic emotions” such as anger, disgust, fear, happiness, sadness, and surprise, but also encompass the broader analysis and categorisation of human conduct. From this perspective, the way we walk or talk, the movement of our eyes – or a combination thereof – is said to “reveal something” about our physical and/or mental states, interests, or tendencies. That deployments of such technologies hold great promise in the field of security comes as no surprise.
The same applies to the human rights implications of ERT, which have sparked heated debates on the legitimacy of its use, particularly by law enforcement agencies (LEAs). In Europe, both the European Union (EU) and the Council of Europe (CoE) are working on regulatory instruments for AI. At the EU level in particular, a complete ban on the use of ERT by LEAs is currently under consideration. However, both instruments seem to foresee forms of “national security/defence exceptionalism”, and the precise relevance of these provisions for such purposes remains somewhat uncertain. This blog post provides a concise analysis of key points linked to the deployment of ERT in the law enforcement and military domains, taking into account the respective peculiarities of international human rights law (IHRL) and international humanitarian law (IHL).
Emotion Recognition in Law Enforcement: Between Public Security and Individual Rights
It seems crucial to stress that LEAs are publicly authorised bodies entrusted with maintaining public order and security while upholding human rights. In this sense, as state agents, LEAs have “negative obligations to respect” human rights, avoiding any undue interference, and “positive obligations to protect” against horizontal violations. Still, what is sometimes overlooked is that respect for human rights is itself an essential component of the notion of public order.
As urban infrastructures digitalise, near-future deployments of ERT in public spaces would raise crucial legal and ethical questions. The primary interferences that come to mind are with the right to privacy and the protection of personal data. Mitigating these interferences would not be an easy task, especially considering possible uses related to the prevention, detection, and investigation of crime. For instance, one of the main envisioned uses is general monitoring: LEAs could be in a position to “anticipate a crime” or “follow its development step by step” whenever ERT or similar tools flag an individual or situation as “suspicious”. Through pre-emptive interventions, criminals could be caught in flagrante delicto, and even the commission of heinous crimes, such as terrorist attacks, could be prevented. Yet, such actions would probably involve the indiscriminate collection and processing of, at least, biometric-based data – a category to which emotional data would belong at this stage. These practices intuitively infringe upon privacy rights and may lead to discriminatory outcomes resulting from the automated analysis of certain “suspicious emotional responses”. Additionally, the detection of “suspicious emotions” or behaviours that do not fit pre-programmed patterns may also involve derivative interferences with, or violations of, other fundamental rights.
In general terms, European data protection law, the case law of international courts and monitoring bodies – and other authoritative interpretations of privacy rights – all prohibit or strongly restrict the processing of biometric data in such areas, unless it is strictly necessary and subject to appropriate safeguards. However, even if deployments of ERT in public spaces did not involve the processing of what is today technically defined as biometric data, in many instances such uses could still be considered neither necessary nor proportionate. Moreover, it could be argued that the processing of “human emotions” substantially infringes upon the core values underlying the very essence of the rights to privacy and data protection – an essence which, under IHRL, is not subject to restrictions. Personal autonomy, self-determination, the freedom to have a “private public life”, and spontaneous interactions with others play a crucial role in democratic societies, and constraining them through constant “emotional surveillance” can restrict the unconditioned enjoyment of freedom of thought, expression, and peaceful assembly – among others. Currently, there is significant uncertainty as to whether these tools are effective, reliable, and scientifically valid. Even if these doubts were dispelled, their outputs, i.e. “emotional data”, could be misused or manipulated – e.g. to target, marginalise, or criminalise certain segments of the population, or to discredit political opponents, activists, investigative journalists, and other “troublemakers”.
Within this domain, one of the primary drawbacks remains that existing data protection laws do not seem to classify emotional data as biometric data, resulting in the lack of corresponding guarantees. Yet, recognising the need for a specific classification of, and protection for, this unique form of personal data is essential.
AI and Emotion Recognition in Armed Conflicts: Balancing Privacy and Military Objectives
Albeit with certain variations, both LEAs and armed forces must respect IHL and IHRL in the context of armed conflicts. In this field, emotion recognition-based intelligence gathering involves several considerations. Bearing in mind what has been presented so far in terms of possible interferences with IHRL, it is worth highlighting that IHL does not address privacy rights as explicitly as the former. Only in a few instances do provisions such as Article 27 of Geneva Convention IV, Article 75 AP I, or Article 14 of Geneva Convention III contain formulations connected to the protection of “honour and family rights”, which partly resemble the protection IHRL grants to the “right to private and family life”. More specific to data protection are those views which consider the protection IHL grants to “medical services” extendable to both medical and other kinds of data. On this point, the fact that biometric-derived data are increasingly used in health assessments, e.g. to predict the risk of stroke, seems particularly relevant. Similar reflections may make it easier to associate sensitive biometric-based data with the rules governing the protection of medical records and data.
Having sketched a few entry points on privacy and armed conflicts, there might be further reasons to consider that privacy concerns should be taken more into account in wartime. These relate to the effects that extensive and intrusive surveillance practices can have on civilians and local populations affected by a conflict. In this sense, possible interferences with privacy rights should also be part of the “legal review” that Article 36 AP I requires for the introduction of new weapons, means, or methods of warfare. ERT could be considered equipment or a system ‘used to facilitate military operations’ – i.e. a means of warfare, whose compatibility with ‘any […] rule of international law’ has to be verified. However, leaving these issues aside for a moment, other reasons could instead justify the deployment of ERT on the battlefield.
As suggested by cinematic advertising videos for advanced Unmanned Vehicles (UVs), the combination of computer vision and a broad array of sensors promises efficiency gains in various tasks performed by armed forces. The deployment of UVs equipped with tools capable of analysing the context of a given operational environment, by detecting human-to-human interactions and other bodily cues, could help label ambiguous situations as impracticable for an attack due to the significant presence of civilians, or simply as “not dangerous”. By way of illustration, in the aftermath of an explosion in an urban theatre of war, recourse to ERT could allow a group of people running down a street to be labelled as “frightened civilians”, instead of being erroneously considered “an incoming threat”.
In the near future, similar tools could provide valuable help in respecting the basic principles governing the law of armed conflict: the distinction between legitimate military targets and protected persons, the principle of proportionality, and that of precautions. If it is true that the protection afforded to civilians is not “absolute”, in the sense that civilian casualties are tolerated when proportionate to the military advantage pursued by a given action, advanced forms of “emotional intelligence” could catalyse adherence to IHL provisions. Commanders could better perform their obligations of doing ‘everything feasible’ to ‘verify that the objectives to be attacked are not’ subjects protected under IHL, by choosing ‘means and methods of attack with a view to avoiding, and in any event minimi[s]ing, incidental loss of civilian life, injury to civilians […]’. The fact remains that the deployment of instruments based on the processing of biometric and biometric-based data would imply significant interferences with, and possible violations of, privacy rights. In addition, the specific characteristics of a battlefield would not always allow commanders to deploy avant-garde devices prior to an attack – which aligns with those interpretations of Article 57 AP I that read the expression ‘everything feasible’ as a context-dependent and variable due diligence obligation. Most importantly, all of the above once again takes for granted the reliability of such instruments, something that, given the implications their deployment could give rise to, ought not to be underestimated.
Taking “Emotional Security” Seriously: Conclusions?
The use of ERT and similar tools in law enforcement and military operations raises complex legal and ethical issues. Balancing the need for public safety and security with the protection of human rights is crucial in determining appropriate modalities for the legitimate design and deployment of these technologies. Privacy rights must be respected, and potential discriminatory and chilling effects should be carefully considered and prevented. Similarly, in armed conflicts, privacy concerns and the safety of civilians should be prioritised by ensuring compliance with IHL and IHRL. Yet, in the context of military operations, the possibility of these tools catalysing adherence to IHL in order to protect human lives and prevent suffering might override some IHRL considerations. As these technologies continue to advance, it seems essential that policymakers, academia, and society as a whole engage in discussions to establish comprehensive frameworks that uphold human rights while harnessing the potential benefits of ERT. Currently, even more basic biometric identification systems, such as Facial Recognition Technology (FRT), raise significant concerns – irrespective of their use in peacetime or wartime.
Are we ready to have the most intimate part of our “being human” constantly exposed and potentially manipulated by proprietary algorithms, with potentially serious impacts on fundamental rights, democratic systems, and pluralistic societies? If so, for what purposes and in favour of which “security domain”?