Old Doubts, New Doubts: Evaluating Digital Open Source Imagery in the Courtroom

[Yvonne McDermott is a Professor of Law at the Hillary Rodham Clinton School of Law, Swansea University, UK and Principal Investigator of the TRUE project.

Stephen Sharp Queener is an associate in the Law program at the Starling Lab for Data Integrity, a Master's student in Public Policy at Stanford University, and a current Fulbright Student Scholar in Germany.

Basile Simon is the director of the Law program at the Starling Lab for Data Integrity, and a fellow at Stanford University]

On 26 June 2024, the International Criminal Court (ICC) will issue its long-awaited judgement in the case of Al Hassan. This judgement ought to give some much-needed insight into how the Court will evaluate some of the newer forms of evidence and expertise that were presented at trial, including digital reconstructions, forensic image analysis, and evidence on the geolocation of digital open-source information. These newer forms of evidence are likely to feature increasingly in international criminal trials going forward; recently, Prosecutor Karim Khan noted his office’s use of “authenticated audio, photo and video material” in his application for arrest warrants in the Situation in the State of Palestine.

Much has been written about some of the ‘new doubts’ that might arise in an age of artificial intelligence. Wider society and courts alike may struggle to confront the coming post-generative AI world, where ‘seeing may no longer be believing.’ As content faces increasing doubts surrounding its authenticity, routes must be found to empower courts to combat such challenges and interrogate what can and cannot reliably be seen in all forms of media. 

In this piece, we focus instead on some of the ‘old doubts’ that can surround audio-visual content: authenticity and the need to ensure that the content is thoroughly evaluated. We suggest that open source verification still offers a clear path forward in helping to resolve these old doubts. As new technologies threaten to raise the bar on authenticity, we discuss how increased evidential burdens provide a call to action, rather than grounds for despair, to enhance the media literacy and authentication practices of courts and fact-finders alike.

Old Doubts: Authentication and Avoiding ‘Blind Faith’

As long as there has been audio-visual content, there has been the potential for that content to be manipulated. Video and photographic evidence, by depicting real moments in time, have potentially significant evidentiary value. However, much has been written about the ‘seeing is believing’ phenomenon, and the risk that such content may be attributed more weight than it legitimately deserves, with resulting prejudice if courts or fact-finders assume that audio-visual content can provide an authoritative depiction of events.

Thorough interrogation of video evidence requires more than simply seeing and believing—fact-finders must be able to assess the reliability of what the video can and cannot show. This implicates not only the technical integrity of the video file, such as whether it has been edited since its capture, but also analysis of its source, filming perspective, and general legibility, all of which require further information and discussion. Yet considering the effect that these elements have on a video’s reliability is easier said than done. Courts, including the ICC (see Prosecutor v. Lubanga, para. 257), may believe that audio-visual content can ‘to a significant extent, “speak for itself”,’ while we know that a thorough examination of the source, the perspective from which it was shot, and whether there is any corroborating evidence is needed to interrogate the content fully. The recently launched Guide for Judges and Fact-finders aims to set out some key considerations that should be taken into account in evaluating this content.

The messy, imperfect reality of much open source information (OSI), often captured without legal accountability in mind and found second-hand, means that it often lacks technical indicia, or access to sources or eyewitnesses who could attest to its authenticity or verify key information such as the location and time of filming. In this less-than-ideal reality, OSI presented without a strong method of verification risks a particularly damaging manifestation of a court’s ‘blind faith’ in video evidence, implicating fair trial rights by placing a high burden on the opposing party to combat misconceptions and false trust.

Resolving Old Doubts: Putting Open Source Verification before the Court

Fortunately, methods for verifying OSI so that factual conclusions can be drawn from content are precisely what many practitioners and scholars have been hard at work developing and refining. Different groups have been developing standardised methodologies for open source verification. In a legal context, an open source investigator, applying a methodology based on the principles found in the Berkeley Protocol, would most likely need to provide testimony to explain the investigative methods used and justify their conclusions within the bounds of their expertise. The Guide for Judges sets out these techniques in some detail, with examples.

The potential merits of this approach were demonstrated in a mock admissibility hearing hosted by the Global Legal Action Network (GLAN), Bellingcat, and the OSR4Rights project at Swansea University in 2022 to test the Justice and Accountability Methodology designed by GLAN and Bellingcat. In the fictional case, the prosecution presented an expert witness to explain the open source verification techniques used, in arguing for the admissibility of a piece of OSI (a video of an airstrike in Yemen found on Twitter). The prosecution’s expert explained the techniques of geolocation, chronolocation, and corroboration through online searches used to verify the OSI. The defence argued that the evidence should be excluded as lacking in authenticity and reliability: its source was unknown; it was not the original video and had clearly been edited by splicing two videos together; its metadata was unavailable; and its discovery was subject to unavoidable algorithmic bias. The defence also challenged whether an open source analyst could be considered an ‘expert’ under Part 19 of the Criminal Procedure Rules of England and Wales.
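
To give a flavour of the kind of reasoning a chronolocation expert might walk a court through, the minimal sketch below infers the sun’s elevation from an object and its shadow visible in footage, and compares it against candidate filming times. The object height, shadow length, and the table of candidate elevations are hypothetical values chosen for illustration; a real analysis would take the elevations from a solar ephemeris for the geolocated site and date, not a hand-typed table.

```python
import math

def sun_elevation_from_shadow(object_height_m: float, shadow_length_m: float) -> float:
    """Estimate the sun's elevation angle (in degrees) from an object and the shadow it casts."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# Hypothetical example: a wall known to be 2.0 m high casts a 2.5 m shadow in the footage.
observed_elevation = sun_elevation_from_shadow(2.0, 2.5)

# Placeholder solar elevations for the geolocated site on the claimed date
# (in practice these come from a solar ephemeris tool, not a manual table).
candidate_elevations = {"09:00": 38.0, "10:00": 49.5, "11:00": 58.0, "12:00": 61.5}

# Pick the candidate filming time whose predicted elevation best matches the shadow.
best_time = min(candidate_elevations,
                key=lambda t: abs(candidate_elevations[t] - observed_elevation))
print(f"Observed elevation ~{observed_elevation:.1f} degrees; closest candidate time: {best_time}")
```

In testimony, this kind of estimate would be one strand of corroboration alongside geolocation and online searches, not a standalone proof of when the footage was filmed.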

Ultimately, the learned judge found that the investigator could be considered an expert and was not, in the words of Lord Bingham, “a quack, a charlatan or an enthusiastic amateur.” While observing that the video clearly suffered drawbacks in relation to its authenticity and reliability, the decision noted that the case law makes clear that authenticity and reliability may be established via other evidence. In the fictional case, that other evidence included: the expert report; other corroborating open source evidence; a statement from a doctor who treated patients after the attack; and the evidence in relation to the time of upload, which would not have allowed sophisticated alteration of the contents to take place.

Similarly, in a recent Dutch war crimes case, a police investigator explained that the videos were uploaded to YouTube on or shortly after the day on which the incidents took place, which limited the possibility that they were manipulated, and searches indicated that these were the original videos, as no earlier versions were found online. The Court of Appeal noted that “the video is sufficiently reliable. The method of securing it has been meticulous, and investigation has yielded relevant upload information. There is no reason to assume that the content of the video was manipulated or that events were staged.”
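
As a rough sketch of that style of reasoning (using hypothetical dates and file names, not details drawn from the Dutch case file), an investigator might compare the platform’s reported upload date with the incident date, and use cryptographic fingerprints to check whether copies circulating online are byte-for-byte identical:

```python
import hashlib
from datetime import date

def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 fingerprint so copies of the same file can be matched exactly."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical dates for illustration: the incident and the platform's reported upload.
incident_date = date(2015, 7, 1)
upload_date = date(2015, 7, 2)

# A short window between incident and upload limits the opportunity for sophisticated editing.
print(f"Uploaded {(upload_date - incident_date).days} day(s) after the incident")

# Identical hashes mean identical files; an earlier-dated copy with a different hash
# would call for closer scrutiny. (File paths are illustrative placeholders.)
# same_file = sha256_of_file("copy_from_platform.mp4") == sha256_of_file("copy_from_archive.mp4")
```

The point is not that such checks are conclusive on their own, but that they give the court concrete, explainable reasons for treating the upload record and the circulating copies as consistent with an unmanipulated original.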

These two examples show how content analysis can (1) establish circumstantial authenticity, and (2) provide the court with the methods necessary to properly interrogate the reliability of the OSI in question, in full awareness of its limitations and deficiencies, instead of blindly trusting the ability of the video to ‘speak for itself.’ 

In the GLAN/Bellingcat mock admissibility hearing, the judge noted that the jury would be given appropriate directions in respect of the identified drawbacks of the evidence. The TRUE Project is currently carrying out studies with realistic mock juries, where a full trial based on the same set of facts is played to diverse groups of laypeople and they are asked to deliberate in jury groups of 12. While the data has yet to be analysed in detail, and still more juries are to be recruited, the seven jury groups who have taken part in this exercise so far have been notably rigorous in their evaluation of the evidence and the expert testimony, with some deliberating for up to two hours.

New Doubts: Generative AI and Rising Challenges to Content Authenticity 

Both examples discussed above show that open source verification provides a framework for interrogating and determining what is and is not reliable in materials whose authenticity is in doubt. However, this relies crucially on the assumption that courts and fact-finders operate with full knowledge of the truthfulness of the digital items they interrogate. Will content and source analysis still be sufficient as we enter a reality in which the authenticity of all media can be questioned?

Some authors have argued that ideally, in light of the challenges posed by generative AI and deep fakes, in order for image-based evidence to be admitted, ‘a qualified technician or analyst will follow established industry-specific procedures using proven forensic hardware and software tools and will create a proper audit trail documenting all steps undertaken regarding the images, while ensuring that the final product is authentic, forensically sound, and reliable.’ (p. 130)

Two of this post’s authors, through their work at the Starling Lab, have long advocated for investigators and documenters alike to further integrate technical methods to capture, preserve, and verify strong digital items and to demonstrate the technical integrity of both their materials and their investigations (the ‘audit trail’). Yet we ought to be cognisant of the risk that an unreasonably high bar would pose for genuine evidence that has not been recorded and verified in this way, and of potentially inflated confidence and trust in the weight carried by markers of integrity.
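
As a minimal sketch of what one link in such an ‘audit trail’ might look like (and not a description of the Starling Lab’s actual tooling), each step taken on a file could be logged with a content hash, a timestamp, and a hash chaining the entry to the one before it, so that later tampering with the log is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_entry(path: str, action: str, actor: str, prev_entry_hash: Optional[str]) -> dict:
    """Record one step taken on a file, chained to the previous log entry by hash."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "file": path,
        "file_sha256": content_hash,
        "action": action,            # e.g. "downloaded", "geolocated", "exported"
        "actor": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "previous_entry_sha256": prev_entry_hash,
    }
    # Hash the entry itself so that any later edit to the log breaks the chain.
    entry["entry_sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```

Real provenance and content-integrity standards add cryptographic signing and secure storage on top of this kind of chaining, but the underlying idea of binding each step to the content and to what came before is the same.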

Indeed, not everything can be answered by technical analysis alone, and not all evidence can currently provide the technical indicia necessary for forensic analysis. Widely available ‘deepfake detection’ tools have been shown to be of varying reliability, depending on the data on which they were trained and the purposes for which they were designed. Green checkmarks that seem to imply all is well also risk lulling investigators and fact-finders into a false sense of security.

Moreover, if expectations for demonstrable authenticity rise too high, we may see the development of what Riana Pfefferkorn refers to as the ‘reverse CSI effect’, where evidence must overwhelmingly demonstrate its technical integrity to be afforded even a semblance of weight. Raising the bar for demonstrated authenticity risks killing the equalising promise that OSI offered, returning us to an institutionalised lock on access to justice. Police and state actors have not only the resources to maintain integrity, but also the credibility in the eyes of courts to wave off deepfake complaints; victims of international crimes and human rights abuses often do not.

Conclusion: Open-Source Verification & ‘Raising the Baseline’

How our information landscape will develop in practice over the next decade or more is anyone’s guess. What is certain, however, is that change will come, and come rapidly. Fortunately, that change is matched by considerable work on robust methodologies, innovative case studies, and critical reviews to support courts in assessing the reliability of OSI.

Education, expanded outreach, and the integration of new digital industry standards remain essential to navigating this complex and changing terrain. Guidance based on the Berkeley Protocol, such as the Guide for Judges and Fact-Finders, and the tireless efforts of those in the field (e.g. WITNESS, Mnemonic) to give on-the-ground documenters open access to methodologies and frameworks for creating and preserving stronger digital materials, work in tandem to minimise the adverse effects of growing burdens of demonstrating authenticity. As the ground shifts, for example with the recent widespread adoption of industry standards for provenance and content integrity, open-source verification standards can be expanded to draw upon new sources of technical information, while still looking to other methods of corroboration in their absence. In this way, the integration of modern verification tools and techniques into judicial processes offers a promising path forward.

As digital content will inevitably face doubts about its authenticity, we find ourselves in a landscape of both risks and opportunities. On the one hand, we risk all-or-nothing scenarios in which only a small set of technically privileged materials is accepted. On the other, we see the growing penetration of cohesive OSI examination methodologies and ever-greater adoption of, and research into, provenance and authenticity standards. In reckoning with the need for evolution and change, the risk of raising the bar too high can be mitigated by raising our readiness to interrogate digital material.
