08 Mar A Short-Term Option for Addressing Misinformation during Public Health Emergencies: Online Nudging and the Human Right to Freedom of Thought
Richard is Research Fellow in Public Health Emergencies and the Rule of Law at the Bingham Centre for the Rule of Law, British Institute of International and Comparative Law. As part of a new project funded by the UK Arts and Humanities Research Council (grant no. AH/V015214/1), his current research concentrates on public health emergencies from a rule of law and good governance perspective, with a view to building public trust in data-driven responses to public health emergencies.
The matter of misinformation is one that societies have dealt with for centuries. In 1796 Nicolas de Caritat contended that a free press would create a more informed public and help advance knowledge. Commenting on Caritat’s work, John Adams was sceptical: ‘There has been more new error propagated by the press in the last ten years than in an hundred years before 1798’ (page 40). The accuracy of information circulated for public consumption has varied throughout human history. Yet the scale of readily available information today is immense, which presents problems for anyone attempting to orient themselves on a particular issue.
This scale of information, combined with the ease with which it can be accessed and the speed at which it can change, is something that even those with vast resources struggle to navigate. For example, the BBC reported that Ali Khamenei had called for ‘an attack on Donald Trump in revenge’ for the killing of Qasem Soleimani. The Diplomat pointed out, however, that this news was based on a Twitter post from ‘a fake account which closely resembled Khamenei’s’. Another example is a New York Times article on the International Criminal Court and its jurisdiction over the situation in Palestine. Kevin Jon Heller has pointed out the ways in which this story is ‘simply false’. Although both examples are concerning, the first is but one of many highlighting the problems that arise when social media is involved in the dissemination of information. However, the use of social media has become a widespread and ingrained part of many societies, meaning it now plays a number of roles (for example, promoting health equity). These roles have become more pressing in light of the COVID-19 pandemic (see, for example, here).
Throughout this public health emergency, misinformation has contributed to a variety of outcomes. Whether it is falsehoods regarding vaccines or the belief that drinking cow urine will ‘stop the effect of infectious coronavirus’, the effects of such stories are significant. Given that governments already implement nudging through digital technology tools in order to ‘improve’ outcomes for the public benefit, implementing online nudging to address misinformation may be a form of good governance during a public health emergency. This article illustrates how online nudging has the potential to address misinformation in the midst of COVID-19 and why this practice raises concerns for human rights, in particular the right to freedom of thought. The themes raised here have particular implications for the interface between digital technology tools and human rights, which are pressing issues now and in the short term, and will continue to be in the future, especially if (or when) another public health emergency occurs.
The potential of online nudging during a public health emergency
Nudging is a concept conceived by Christine Jolls, Richard Thaler and Cass Sunstein and developed by the latter pair (see also here). It has gained prominence in the field of behavioural economics and is expanding into other areas. In sum, nudging involves creating settings, or ‘choice architectures’, for the purpose of guiding decision-making and influencing conduct towards a particular result. Proponents argue that nudges exist to ‘improve’ the choices that human beings make. Opponents have raised concerns, in particular that nudging can affect and moderate autonomy to an extent some consider manipulative (see, for example, the work of Christopher McCrudden and Jeff King). With respect to public health, there is evidence to suggest that people approve of nudging, and that nudges are perceived as more trustworthy when implemented by experts rather than policymakers.
In the context of social media use during a public health emergency, nudging has considerable potential for addressing online misinformation. One study shows the ways in which online nudging can help protect individuals and communities by guiding them towards decisions that serve their own safety and that of others within their community. An example of an online nudge that could help stem the spread of misinformation during COVID-19 is the display of alternative sources on news feeds. This entails presenting links to separate sources directly below the original content. For example, if a user’s connection posts an article about ‘How Cow Piss Cures Covid’, directly below that content would appear links to other sources, such as the World Health Organization page on how to report misinformation online and an article refuting the cow piss claim.
Another example of an online nudge is fact-check alerts and labelling. This nudge has two components: (i) how people perceive information presented to them, and (ii) how they respond to that perception. Addressing the first component involves attaching a label of some sort to raise users’ awareness of the reputability of a source (‘verified’ accounts on social media platforms are one form of this labelling). In addition to labelling user accounts, content posted on news feeds can be accompanied by a marker flagging that source as potentially inaccurate or wrong.
Addressing the second component involves relying on the biases and heuristics of human beings (see here and here). Fact-check alerts that warn users of potential misinformation are likely to dissuade them from sharing sources containing misinformation on the COVID-19 pandemic. The alert highlights a potential loss for the user (damage to personal reputation), nudging them not to share the potentially false information so as to maintain their standing among their connections. The crucial problem here arises among connections who share the same views and belong to the same groups, where sharing false information still earns collective approval (for more on confirmation bias see here and here). That said, combining alternative-source nudging with fact-check alerts and account labelling has considerable potential to help address online misinformation during public health emergencies.
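For readers who prefer a concrete picture, the two nudges described above can be reduced to a minimal sketch. Everything here is hypothetical and illustrative only: the `Post` structure, the `apply_nudges` function, the label wording and the alternative-source list are assumptions for the purpose of illustration, not any platform’s actual system.

```python
# Illustrative sketch of two online nudges: a fact-check alert and the
# display of alternative sources beneath flagged content. All names are
# hypothetical; real platforms implement this very differently.
from dataclasses import dataclass, field
from typing import List

# Hypothetical list of vetted alternative sources a platform might
# surface beneath flagged content (placeholders, not real links).
ALTERNATIVE_SOURCES = [
    "WHO page: how to report misinformation online",
    "Article refuting the claim",
]

@dataclass
class Post:
    text: str
    flagged: bool = False                  # set when fact-checkers flag the post
    label: str = ""                        # fact-check alert shown to the user
    alternatives: List[str] = field(default_factory=list)

def apply_nudges(post: Post, flagged_by_fact_checkers: bool) -> Post:
    """Attach a fact-check label and alternative sources to a flagged post.

    Note the choice architecture: the original content is never removed,
    only accompanied by the two nudges.
    """
    if flagged_by_fact_checkers:
        post.flagged = True
        post.label = "Independent fact-checkers say this may be inaccurate."
        post.alternatives = list(ALTERNATIVE_SOURCES)
    return post

nudged = apply_nudges(Post("How Cow Piss Cures Covid"), flagged_by_fact_checkers=True)
print(nudged.label)
print(nudged.alternatives)
```

The key design point the sketch captures is that a nudge alters the presentation of a choice rather than removing options: the original post remains visible, but the label and alternative sources change how the user is likely to perceive and share it.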
The implications for the human right to freedom of thought
Steps are being taken to address online misinformation during the COVID-19 pandemic that do not utilise online nudging. For example, Facebook announced on 8 February 2021 that it will ‘remove false claims on Facebook and Instagram about COVID-19, COVID-19 vaccines and vaccines in general during the pandemic’. However, misinformation that is not covered by this policy and that which exists on other platforms still requires addressing when content is not removed.
Online nudging offers much potential for addressing misinformation during public health emergencies. That said, the human rights implications of this practice require consideration. In general terms, there is an ethical issue in so-called ‘elites’ creating and applying nudges to online news feeds that they think will help so-called ‘non-elites’ make ‘smart’ decisions (see this work of Michael Sandel at 81-112). Aside from the condescension inherent in practices and communications that run along the lines of ‘we know better than you’, which inhibits public trust (see, for example, here and here), nudging raises additional human rights considerations.
There are a number of human rights dimensions to the current pandemic that have been highlighted (see, for example, here), including with respect to misinformation (see here and here). However, the human right to freedom of thought has received comparatively little attention. Yet the practice of online nudging presents a particular challenge to this right, which is enshrined in a number of legal instruments. According to the Office of the United Nations High Commissioner for Human Rights, the right to freedom of thought is ‘far-reaching and profound; it encompasses freedom of thought on all matters’, including ‘personal conviction’. While this right is frequently understood from the perspective of its manifestation, it also applies to the internal thought processes of human beings. Although the idea that human beings have free will remains contested, the right to freedom of thought is inextricably linked to personal autonomy. It is this human agency, the capacity of human beings to shape their own decision-making and conduct, that sits in tension with online nudging.
Nudging is a tool for guiding decision-making and influencing conduct towards a particular outcome. A nudge influences free thought by prompting the consideration of alternative options, which, depending on how those options are presented, increases or decreases the probability of one option being chosen over others. The outcomes that people are nudged towards depend on the preferences set by those constructing the applicable choice architecture. This means that selected sources of information will be considered preferable to others, and applied as such when implementing online nudging. This is the point at which the human right to freedom of thought becomes limited by preferences set by those with decision-making power at social media companies. The work of Shoshana Zuboff starkly illustrates the significance of this right’s interface with online nudging: human beings are continuing to lose ground with respect to shaping their own thoughts and futures.
Although nudging is not a coercive process, it does recalibrate individuals’ thoughts towards choices that are ostensibly ‘best’ for them and others. Yet who decides what is ‘best’ in terms of the information that circulates online during public health emergencies? Is the BBC a more trustworthy source than The Diplomat in terms of accuracy? Should the public trust the New York Times over the words of an expert? What alternative sources should be placed in users’ news feeds? A key to addressing these questions is involving knowledgeable individuals in the nudging process, particularly in selecting the information presented as a factually accurate source and/or an alternative source on a given matter. Online nudging is but one of a number of options that can help align human behaviour with the recommendations of public health experts during public health emergencies (see also here). But these experts need to become part of the online nudging process. This is particularly important in light of a study from the Centre for Research on the Epidemiology of Disasters showing that experts are the most trusted group of people regarding information on COVID-19 measures, even more so than individuals’ close contacts, who are in turn trusted more than journalists and politicians (p. 31).
If online nudging becomes widely used to combat misinformation during public health emergencies, it sets a problematic precedent by allowing social media companies to be the arbiters of truth. The question posed by Shoshana Zuboff of who decides who decides what social media companies can and cannot do is pressing in the context of combating misinformation. Social media companies therefore require regulation from governments that channels their growing influence on humanity. One option is rules providing oversight by non-governmental experts who have no commercially vested interest in the particular company at hand or its competitors. In addition to the human rights case regarding freedom of thought, there is an economic case for effectively regulating social media. The general case for better regulation is a strong one, especially in light of developments that put the very future of human behaviour at risk of becoming even more mechanised than today’s (already worrying) standards.
The excess of data that people give away to social media companies, which is then used to generate profit by promising predictability, certainty and guaranteed outcomes to third parties, is a practice that harms society and will continue to do so. Nevertheless, while social media exists it can also be used as a tool to help address misinformation during public health emergencies, such as by implementing nudging on news feeds. Despite the broader implications for the human right to freedom of thought should online nudging be used for other ends, interventions that assist in combating misinformation are needed at this time. Online nudging helps people navigate large amounts of information and can thus be used during public health emergencies to situate and guide people in a way that contributes to their own and others’ wellbeing.
However, online nudging should only be seen as a short-term form of good governance for addressing misinformation, especially because nudging only treats the symptoms of the misinformation problem. The longer-term solution is arguably less contentious, because it treats the cause: there needs to be a concerted effort to improve access to, and the quality of, education throughout the world. Misinformation will perhaps exist for as long as human beings do. Yet it need not be an issue during the next public health emergency. Whether it will be depends on us learning lessons from what is happening during this pandemic. The wake-up call has been loud. Now is the time to pay attention to it and prepare accordingly.