The Use of AI at the ICC: Should We Have Concerns? Part I
[Gabrielle McIntyre is Chair of Women’s Initiatives for Gender Justice, Co-coordinator of Africa Legal Aid’s Gender Mentoring Programme for International Judges, and an independent international law consultant. Nicholas Vialle is a pro bono lawyer (human rights, refugee and migration law) in Australia and an independent international human rights law consultant.]
The explosion of Artificial Intelligence (AI) systems, and corresponding evidence of significant efficiencies and innovation in workplaces, has unsurprisingly led the International Criminal Court (ICC), long criticized for inefficient work practices and motivated by a philosophy of continuous improvement, to increase its reliance on new technologies in its work. In February 2023, the Office of the Prosecutor (OTP) announced Project Harmony, which seeks to modernize the evidence management platform used by the OTP. A significant aspect of the project relies on the use of AI technology. The OTP has reported that AI will be used for rapid pattern identification, automatic translation, facial identification, image enrichment, transcription and translation of media files, targeted searches of source material, and video and image analytics, among other uses.
In the OTP’s Annual Report of December 2022, the Prosecutor provides further details of the Office’s intended use of AI to improve the efficiency of its proceedings. The Report states that the OTP is working to develop “cutting edge solutions for the analysis of large volumes of digital data” and, in partnership with Microsoft and Accenture, will develop “data enrichment tools and AI which will be used to automatically transcribe and translate text from video and audio files collected through its investigative activities”. These tools will allow investigators to interact directly with source material through targeted searches, and the use of AI will enhance the OTP’s ability to “identify relevant individuals, objects and locations”. In addition, automated transcription and translation tools will be used to “support the rapid filtration of incoming information” and to filter out irrelevant information, allowing relevant actors to focus on “the most probative and relevant information filtered through e-Discovery software”.
The OTP’s Strategic Plan for 2023-2025 details how the OTP aims to use new technologies to further revolutionize its work practices. Strategic Goal 3 is entitled “[m]ake the Office a global technology leader”. In that regard, the OTP sets out its vision of using technological tools to “enhance its ability to draw on digital, documentary video and audio material”, refers to developing relationships with holders of large data sets, and commits to building the technical expertise of its staff. Not only will the OTP establish a cloud-based e-discovery platform, but data enrichment tools, AI and machine learning will be supported by “a dedicated e-Discovery and Data Analysis Unit”, and “effective operational relationships will be established with key partners capable of providing large data sets (video and audio material, cell site data etc) including social media companies, NGO’s and academic institutions”. It is proposed that this will ensure that “the Office is able to hold the widest range of digital evidence globally in relation to international crimes”, which will expedite its ability to identify materials relevant to its own investigations and to those led by domestic authorities. In addition, a new information management system will be adopted to accommodate “the ingestion and collection of large data sets”, and engagement with relevant partners will be increased to strengthen the OTP’s execution of its disclosure obligations. The OTP further envisages using digital evidence to modernize its presentation of evidence at trial. To assist the OTP in becoming a global technology leader, the Technology Advisory Board will be reconstituted to provide advice and support towards this goal.
The OTP’s visionary overhaul of its work practices with AI, to improve the efficiency and effectiveness of its work and, by extension, the work of the ICC more generally, is to be applauded. Given the rapid advancement and uptake of AI tools in most other sectors, the OTP has embraced what appears to be an inevitable step if the ICC is to keep pace with modern work practices, maximize efficiencies, and meet the expectation of the Assembly of States Parties that the ICC take all measures within its power to do so. However, there are two important points to highlight about the development of AI which may have implications for the use at the ICC of AI-generated evidence, or evidence analyzed by AI. The first is that AI has largely been developed by technology companies outside the framework of human rights; the second is that AI has been primarily developed by white men in environments where women are discriminated against with impunity. Both factors have contributed to the development of AI systems that breach human rights, in particular the rights to privacy, data protection, equality, and non-discrimination, especially on the grounds of gender, which may well have implications for the use of AI tools at the ICC.
Recognizing the growing evidence of human rights violations in the development of AI tools and the potentially discriminatory outputs of their deployment, the proposed European Union Artificial Intelligence Act (EU AI Act), which seeks to regulate the development and/or use of AI tools in the European Union, requires AI developers’ full compliance with data protection laws and prohibits AI tools that have discriminatory outputs. This is consistent with the former United Nations High Commissioner for Human Rights’ call for AI applications that cannot be used in compliance with international human rights to be banned. Notably, the European Parliament had flagged the use of AI systems in criminal court proceedings as high risk to human rights, and the EU AI Act adopts this assessment, imposing strict requirements on some uses of AI in the criminal justice sector.
Given this backdrop, what appears to be missing from the OTP’s fanfare for the adoption of improved technologies is consideration of whether the use of AI is consistent with human rights and with the Rome Statute’s commitment to uphold them. In none of the OTP’s announcements or publications is there an indication that it has considered these issues in the development or adoption of AI technologies, or given thought to the need for a regulatory framework to ensure AI’s compliance with human rights standards at the ICC. More surprisingly, little attention appears to have been paid to these issues within the broader international criminal justice community. To the best of our knowledge, there has been no in-depth evaluation of the human rights implications of the OTP’s commitment to the use of AI at the ICC, other than on the related issue of OTP reliance on user-generated digital evidence. This is curious given the attention normally paid to any significant development at the ICC; and arguably this development is more than significant: it is revolutionary.
This silence may well stem from the fact that international lawyers are rarely also technology experts and as such are not well placed to identify, let alone critique, the potential human rights implications of the technological changes being implemented by the OTP. Nonetheless, as we are all invested in the success of international criminal justice, and of the ICC in particular, we have a responsibility to understand the new technologies being applied by the ICC so that not only can their advantages be appreciated but their potential pitfalls can be properly assessed.
With the strong caveat that we are international lawyers and by no means experts in the field of AI, in what follows we attempt to explore the proposed use of AI by the OTP and to highlight some of the issues that we consider could well warrant in-depth consideration.
There is no agreed definition of AI, but put simply, AI is the capacity of a computer to perform tasks typically associated with human beings (for more detailed discussion see Grimm, Grossman, and Cormack). AI allows a computer to act and respond almost as if it were human, and it is designed to do what humans do far more efficiently. AI algorithms are trained on large datasets so that they can identify patterns, make predictions, recommend actions, and work out what to do in unfamiliar situations, learning from new data and improving over time. The more data an AI system has access to, the more it can learn and refine its algorithm. While all AI uses large amounts of data, in many instances neither the dataset an AI was trained on, nor how that dataset was selected or filtered, will be made public.
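To make the training idea concrete, the following is a minimal sketch in Python. It is entirely our own illustration, not any tool the OTP has described: the documents, labels and “relevance” task are invented, and the scikit-learn pipeline is just one simple way to show how a model induces patterns from labelled examples rather than following hand-written rules.

```python
# A minimal sketch (invented data, not the OTP's actual tooling): a
# classifier that "learns" to flag relevant documents from labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: snippets labelled relevant (1) / irrelevant (0).
documents = [
    "witness statement describing attack on village",
    "routine administrative budget memo",
    "video metadata showing location of incident",
    "catering invoice for staff canteen",
]
labels = [1, 0, 1, 0]

# The model induces patterns from the examples rather than following
# hand-written rules; feeding it more labelled data refines its behaviour.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# A new, unseen snippet is classified by the learned patterns.
print(model.predict(["statement about attack near the village"]))  # likely [1]
```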
The ability of an AI system to improve automatically through experience is known as machine learning, and it is the machine’s capacity to learn to do tasks better by itself, rather than simply following instructions, that distinguishes AI from traditional computer programs. Even so, an algorithm will only capture what it is told to look at; thus how the algorithm is written, and what data it has access to, will affect the accuracy of the AI system’s outputs.
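The contrast can be illustrated with a hypothetical rule-based filter (again, our own invented example): its behaviour is fixed by its instructions and never improves, whereas the learned model sketched above would pick up a word like “shelling” only if similar examples appeared in its training data, which is precisely why the composition of that data matters.

```python
# Hypothetical contrast with the learned model above: a traditional
# program follows fixed instructions and never improves on its own.
def is_relevant_rule_based(text: str) -> bool:
    # The rule captures only what it was told to look at.
    return "attack" in text.lower()

# Clearly relevant on its face, but missed because the rule knows
# only the one keyword it was given.
print(is_relevant_rule_based("shelling of a residential district"))  # False
```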
The computational process of AI takes place inside a system we cannot see, much like the human brain, and the way the algorithm interacts with the data to produce an output is opaque (the so-called AI “black box”). While the original algorithm may be accessible to the developer, changes will be made to the algorithm over time as the machine learns from its users, and it is generally not possible even for the developer to understand what computational process the algorithm adopted to reach any given output.
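Even for the toy model above, this opacity is easy to demonstrate: inspecting the trained model yields numeric weights, not a human-readable account of why a particular output was produced, and modern deep-learning systems are far more opaque still. A minimal sketch, again with invented data:

```python
# Sketch of the "black box" point: even the developer sees only numeric
# weights on inspection, not an explanation of any particular output.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["attack on village", "budget memo", "attack footage", "staff invoice"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

# The only "explanation" available is an array of numbers per feature;
# the model offers no account of its reasoning.
print(dict(zip(vec.get_feature_names_out(), np.round(clf.coef_[0], 3))))
```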
While in essence AI as a broad concept is a series of codes developed by tech companies or individuals to analyze data, AI is more than just code and data. It involves theories, models, methods, applications, and impacts. In that respect, it is the developers who make innumerable decisions in the creation of an AI system, including what model to develop and for what purpose, which data will be fed into the algorithm to initially train the AI, how the code will interact with data, and how the algorithm will receive feedback.
On its face, many of the applications of AI identified by the Prosecutor appear non-controversial, yet those ostensibly non-controversial AI tools may well be premised upon widespread human rights violations. Privacy and data protection are fundamental rights under the international legal order, and as AI has developed outside a human rights framework, the holders of large data sets with whom the Prosecution intends to partner may have disregarded these rights in the collection of that data. It is well known that most large tech companies have scraped their datasets from the internet without regard for the human rights implications of doing so. For example, OpenAI (the developer of ChatGPT) and Microsoft are currently facing a plethora of legal challenges alleging the theft and misappropriation of vast swaths of people’s data from the internet. Recalling in particular the OTP’s intention to use facial recognition tools, Clearview AI’s provision of facial recognition technology to law enforcement, based on a database of 10 billion images gleaned from the internet, has been found in breach of privacy laws in several countries, including Australia, Canada, France and the United Kingdom. Further, in a number of national jurisdictions the government’s use of data for the deployment of AI tools has been found to have unlawfully breached privacy law and caused harm to citizens. Given the prevalence of human rights breaches in the collection of data, there is concern about whether the OTP’s use of outputs generated by AI systems that may have infringed these rights is consistent with the Rome Statute’s commitment to respect human rights.
From the information publicly available about the development and use of AI by the OTP, it is unclear what consideration has been given to ensuring that human rights have been respected in the gathering of the large data sets that may be used by the OTP in the development or use of AI tools. Additionally, searching the ICC’s website, we have been unable to locate any data protection framework adopted by the OTP or the ICC, despite such frameworks being a common feature of other international organizations and specifically called for in the Resolution on Data Protection and International Organizations adopted in 2003 by the International Conference of Data Protection and Privacy Commissioners. While according to Microsoft Bing the ICC is a signatory to the European Union Convention 108 on Data Protection, reportedly signed by the ICC on 23 June 2023 and ratified on 1 July 2023, we could find no evidence to verify this claim, nor could Bing provide an accurate reference, suggesting it was a hallucination, a not uncommon feature of AI systems. Absent any policy, there is also no indication whether the OTP has given any thought to whether tech companies’ breaches of human rights in developing AI tools would render OTP reliance on those tools inconsistent with the Rome Statute; that is, whether breaches of privacy or data protection laws by tech companies in the development of AI tools would exclude, from proceedings at the ICC, evidence output by the OTP’s use of those tools.
In terms of the admissibility of AI outputs, it is noted that Art. 69(7) of the Rome Statute sets a relatively high threshold for the exclusion of evidence obtained in breach of internationally recognized human rights. It states:
Article 69(7)
Evidence obtained by means of a violation of this Statute or internationally recognized human rights shall not be admissible if: (a) The violation casts substantial doubt on the reliability of the evidence; or (b) The admission of the evidence would be antithetical to and would seriously damage the integrity of the proceedings.
On its face, Art. 69(7) is unclear as to whose rights must be violated for evidence to be excluded. As might be anticipated, the practice of the ICC shows that challenges to the admissibility of evidence are generally brought on the grounds that a defendant’s rights were violated, not the rights of third parties. That jurisprudence also holds, however, that a violation against the defendant may derive from the actions of a third party, suggesting there may be room to argue that infringements of third-party rights are a relevant consideration in interpreting the scope of the article.
For example, in Prosecutor v. Al Hassan, the defence requested the exclusion of statements given by Mr Al Hassan to the Prosecution while he was detained by the Malian authorities, on the grounds that they had been obtained by the Prosecution during a period in which Mr Al Hassan was subjected to continuous torture and cruel, inhuman and degrading treatment by the national detaining authorities. The Chamber accepted that Article 69(7) is not exclusive in its application to breaches or violations perpetrated by the ICC Prosecution and can apply to the actions of other actors. It found that, on a plain reading of Article 69(7), the chapeau requirement of establishing a breach of the Statute or of internationally recognized human rights requires a causal link between the violation and the gathering of the evidence, as opposed to a link between the ICC Prosecution and the violation. Accordingly, the first determination to be made by the Trial Chamber was whether it had been shown that the impugned evidence was gathered, or its gathering facilitated, by such a breach or violation. If that element was satisfied, the Chamber would then determine whether the second element was also satisfied: that the violation casts substantial doubt on the reliability of the evidence, or that its admission “would be antithetical to and would seriously damage the integrity of the proceedings”. Only if both elements were satisfied would the evidence warrant mandatory exclusion.
While there is no direct jurisprudence, a plain reading of Article 69(7) does not preclude evidence being excluded on the basis that third parties’ rights were violated. As such, it would appear at least open to the defence to argue for the exclusion of evidence created by, or fed through, AI tools where the creators of those tools violated third parties’ rights, provided there is a clear link between those violations and the gathering or creation of the evidence sought to be excluded. This should not be difficult to establish where the AI output derives from a tool trained on data gathered in breach of human rights. If the Chamber accepts the relevance of violations of third-party rights to the chapeau requirement of Article 69(7), it is not beyond the realm of possibility that the second element could also be satisfied. As discussed further below, there are grounds to question the reliability of AI outputs given the evidence of bias in AI systems. However, it is not just bias that renders AI outputs potentially unreliable but also hallucinations, that is, confidently providing the user with information that is simply not true, and the proliferation and increased sophistication of deep fakes in the digital space. Such a challenge may be further supported by the lack of disclosure of the data on which the AI system was trained and by the unexplainable nature of the process by which an impugned output was rendered, given the opaque computational process: the “black box” issue discussed above. This lack of transparency makes AI outputs difficult to contest and thereby undermines not only the principle of equality of arms but also the right to an adversarial process, both central elements of a fair trial at the ICC.
While this argument may well be made, given the emerging insistence on the primacy of privacy and data rights in AI development and use, the simpler argument may be that the admission of evidence gathered through an AI system that violated those rights meets the standard of being antithetical to, and seriously damaging, the integrity of the proceedings at the ICC.