Freedom of Expression and Its Slow Demise: The Case of Online Hate Speech (and Its Moderation/Regulation)

[Dr. Natalie Alkiviadou is a Senior Research Fellow at Justitia (Denmark), working on the Future of Free Speech Project. She is a co-author of some of the Justitia reports discussed in this piece.]

Four point two billion people are active social media users. This has given voice to previously marginalized groups. At the same time, however, extremism, hatred and abuse have become part and parcel of this reality. This has led to enhanced pressure on platforms from users, civil society organizations, advertisers and, importantly, from governments.

In May 2020, France passed legislation compelling social media companies to remove ‘manifestly illicit’ hate speech within 24 hours, with non-compliant companies facing fines of up to 1.25 million Euros. In June 2020, France’s Constitutional Council ruled that this law limited the freedom of expression in an unnecessary, inappropriate and disproportionate manner. The law followed a legislative precedent set by Germany and its 2017 Network Enforcement Act, which likewise imposes a legal obligation on social media companies to remove illegal content, including insult, incitement and religious defamation, within 24 hours, on pain of fines of up to 50 million Euros. Two reports issued by Justitia demonstrate how this precedent has spilled over into more than twenty States, including ones with a poor track record when it comes to democracy and the rule of law.

On a regional level, the European Union’s 2016 Code of Conduct on Illegal Hate Speech Online requires IT companies to remove reported hate speech within 24 hours. Periodically, the European Commission arranges a ‘monitoring exercise’ during which national entities, such as NGOs and public bodies, ‘check’ whether the companies are doing what they promised, by reporting hate speech over a period of six weeks and monitoring whether and how quickly the companies respond. In 2018, the European Commission recommended that Member States establish legal obligations for active monitoring and filtering of illegal content.

The issue of time and timing is significant, as also reflected in the French judgement. In this ambit, Justitia issued a report comparing the time taken by domestic legal authorities to deal with hate speech cases with the 24-hour deadline set out by the above laws. The report found that domestic legal authorities took, on average, 778.47 days from the date of the alleged offending speech until the conclusion of the trial at first instance. While the report acknowledges the key differences between private content moderation and judicial proceedings, it argues that the findings demonstrate that expecting thousands of complex hate speech complaints to be processed within hours, while simultaneously attaching proper weight to due process and free speech, may be unrealistic at best.

The justification for such measures emanates from a European strategy of seeking to construct societal cohesion and protect minorities from harm through the silencing of ‘hate,’ the conceptualization of which is shaped by a broader spectrum of phenomena, such as identity politics and political correctness. Within this framework, it is significant to note that there are ‘few, if any, shared understandings as to what amounts to intolerable speech’ in the communities we live in, which are characterized by cultural, social and political diversity. To this end, the practice of regulating ‘hateful’ content, apart from certain extreme situations such as direct calls to violence, is, by its very essence, of limited practicability. The current status quo in relation to content moderation is legally and conceptually problematic, and it sits at odds with the alleged importance of expression in a democracy as set out by institutions such as the European Court of Human Rights, which has nonetheless found, in cases such as Féret v Belgium and Vejdeland v Sweden, that merely insulting and ridiculing speech can legitimately be restricted by governments.

The above measures have resulted in ever more ‘hate speech’ content being removed over time. For example, in the first quarter of 2018 Facebook removed 2.5 million pieces of content for violating its Community Standards. The removal of hate speech increased to 4.1 million pieces in the first quarter of 2019 and 9.6 million in the first quarter of 2020. In the second quarter of 2020, more than 20 million pieces of content were deleted for violating Facebook’s hate speech ban. 22.1 million pieces of content were actioned in the third quarter of 2020 and 26.9 million in the fourth quarter. There was a slight fall in the first quarter of 2021, with 25.2 million pieces of content actioned.

Whilst there is no doubt that the pursuit of social justice is a necessary prerequisite for the creation of a human rights culture and a sustainable democratic society, I am concerned by the intense encroachment on the freedom of expression as a means to this end. More specifically, the current situation, as illustrated in the above examples, has led to (i) the augmented censorship of online content and (ii) the structural positioning of private, profit-making companies, which are beyond the scope of international human rights law, as content moderators passing value judgements on the ideas expressed. Private companies are not impartial courtrooms, whilst the fear of penalties provides companies with ‘an incentive to err on the side of caution rather than free expression.’ The fragility of the freedom of expression in this ambit is accentuated by the enhanced use of Artificial Intelligence (AI) for purposes of content moderation. This can be seen as a response not only to the sheer amount of content online but also to the systematically increasing pressures posed by measures such as those set out above. AI is not human and thus cannot pick up on the nuances of human communication. Moreover, the concept of hate speech is fluid and contested for humans, let alone for an algorithm. This has led to a lack of respect for the rule of law, a lack of accountability and a lack of transparency.

On a practical level, private companies are now in a position to take down ‘legal but controversial speech.’ These issues were considered in a 2018 thematic report of the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. The report recommended a human rights-based approach to content moderation which avoided ‘heavy-handed viewpoint-based regulation’ and allowed for ‘company transparency and remediation.’ Importantly, the report urged States to refrain from imposing ‘disproportionate sanctions,’ such as heavy fines, on Internet intermediaries because of their ‘significant chilling effect on freedom of expression.’

Stakeholders such as States, regional and international bodies and IT companies should also take a step back and (re)consider what good actually comes from the practice of content moderation.

Although scholars such as Waldron, Matsuda and Lawrence have all argued for the necessity of restricting hate speech due to the harm it does to individuals and groups, the causal link between such speech and such harm is, in fact, doubtful. Furthermore, scholars such as Ravndal have demonstrated that the rise of far-right extremism in Western Europe emanates from a combination of high immigration, low electoral support for radical right political parties and the ‘extensive public repression of radical right actors and opinions.’ He argues that while such repression may discourage some, it may also push others to follow more violent paths. To add to this, allowing for hate speech restriction essentially means that speech ‘become[s] free to the extent compatible with the state’s view’ and, now, the view of private companies, rendering it vulnerable to abuse. Furthermore, stakeholders appear to have ignored the danger that silencing allows haters to flourish, illustrated by, inter alia, legal restrictions on hate groups in Germany resulting in their transformation into ‘vibrant, parallel societies.’

Furthermore, stakeholders must also take a closer look at the figures. Although there is a certain ‘hype’ surrounding the alleged online hate speech boom, research has shown something different. A 2019 study by Siegel et al. examined whether Trump’s 2016 election campaign and its immediate aftermath (six months) contributed to a rise in hate speech or white nationalist language. It analysed 1.2 billion tweets, 750 million of which were election-related, with nearly 400 million drawn as random samples. The study found that on any given day, between 0.001% and 0.003% of tweets contained hate speech, “a tiny fraction of both political language and general content produced by American Twitter users.”

The Internet, and particularly social media platforms, constitutes the central agora of speech today. It seems that we have forgotten just how central freedom of expression is to our societies. It constitutes one of the ‘basic conditions’ for our progress and is pivotal for any sincere public discourse. Within this framework, stakeholders need to reimagine the freedom of expression more generally, realizing its central significance for robust public discourse. They must also reconsider the current strategy of private content moderation more particularly. This is not only due to the severe risks associated with current practices but also due to the inherent dangers attached to moderating what is perceived by some (but not others) as ‘hate.’

Print Friendly, PDF & Email
Topics
Featured, General, International Human Rights Law, Technology
No Comments

Sorry, the comment form is closed at this time.