Digital Accountability Symposium: Mass Atrocities in the Age of Facebook–Towards a Human Rights-Based Approach to Platform Responsibility (Part One)

[Barrie Sander is a Postdoctoral Fellow at Fundação Getúlio Vargas, Brazil. This is the first part of a two-part post. Part two can be found here.]

During a speech delivered at Georgetown earlier this year, Mark Zuckerberg proudly proclaimed that “our values at Facebook are inspired by the American tradition”. What Zuckerberg failed to mention is that only a tiny fraction (under 10%) of Facebook’s 2.45 billion monthly active users are based in the United States. Over the course of the past decade, Facebook has grown to become a dominant and essential channel of public communication, with outsize influence over the governance of online speech in a rising number of societies across the world. Yet, despite Facebook’s increasingly global reach, the company has generally devoted the least attention to the needs of users and communities located in States beyond North America and Europe – sometimes to devastating effect.

In August 2018, for example, a report issued by the Independent International Fact-Finding Mission on Myanmar (FFM) concerning mass violence against Rohingya and other minorities concluded that Facebook had become “a useful instrument” for those seeking to spread hate and disinformation in a society where for most users “Facebook is the Internet”. The FFM explained that although the extent to which Facebook posts and messages had led to real-world discrimination and violence required independent and thorough examination (a question discussed further here), there was “no doubt that the prevalence of hate speech in Myanmar significantly contributed to increased tension and a climate in which individuals and groups may become more receptive to incitement and calls for violence”.

Myanmar is far from an isolated example. From fuelling President Duterte’s drug war in the Philippines to amplifying hate and disinformation amidst ongoing violence in Libya, recent years have witnessed mounting accusations linking Facebook to the commission of atrocities around the world. What’s more, Facebook cannot credibly claim to have been caught off guard by most of these incidents. With respect to Myanmar, for example, Facebook received numerous early warnings about how its platform was being relied upon to shape events on the ground.

Nor can Facebook plausibly characterise its platform as a neutral mirror that merely reflects society as it is. In practice, Facebook exerts considerable influence over both the permissibility and visibility of online content: first, as a content gatekeeper, Facebook determines which categories of content are allowed and prohibited on its platform; and second, as a content organiser and amplifier, Facebook individualises the experiences of its users, prioritising some content over others, through algorithmic personalisation. To perform these functions, Facebook relies on a mixture of architectural design and moderation rules, enforced by systems that combine data-fuelled algorithms, community flagging, and human review.

With each new scandal, Facebook has been strong on apologetic rhetoric but weak on concrete action. Even when action has been taken, reforms have often been slow, piecemeal, and inadequate. For instance, while it is encouraging that Facebook has created a strategic response team to identify and implement concrete changes to the platform’s products in an effort to prevent violence in conflict-affected communities, the fact that the unit consists of roughly one team member per continent leaves the impression that the move is more a public relations tool than a strategic solution.

In the search for more effective ways of improving Facebook’s governance of online speech, momentum has been building amongst UN experts, civil society groups, and academics for the platform – along with other major social media companies – to apply a human rights-based approach to content moderation. The company itself has even hinted that it might be open to such an approach. Facebook’s Vice-President of Policy Solutions, for example, recently confirmed that the platform’s moderation teams already “look for guidance in documents like Article 19 of the International Covenant on Civil and Political Rights” in determining where to draw the line on freedom of expression with respect to user-generated content.

The starting point for defining a human rights-based approach to content moderation is the three-pillar framework set out in the United Nations’ Guiding Principles on Business and Human Rights (UNGP). Pursuant to the second pillar of that framework, business enterprises have a corporate responsibility to respect human rights by avoiding infringing on the human rights of others and addressing adverse human rights impacts with which they are involved. To satisfy this “global standard of expected conduct”, businesses should establish a range of policies, processes and procedures appropriate to their size and circumstances, including at a minimum: a high-level policy commitment to meet their responsibility to respect human rights (Principle 16); a human rights due diligence process that identifies, prevents, mitigates and accounts for actual and potential human rights impacts of their activities (Principles 17-19); verification of whether adverse human rights impacts are being addressed by tracking the effectiveness of company responses, whilst communicating relevant policies and processes externally to affected stakeholders (Principles 20-21); and appropriate remediation of any adverse human rights impacts they cause or to which they contribute (Principles 22, 29 and 31).

With respect to conflict-affected societies in particular, the commentary to Principle 23 of the UNGP explains that business enterprises should treat the heightened risk of becoming complicit in human rights abuses committed by other actors in such contexts as “a legal compliance issue”, given the possibility of corporate and individual legal liability for acts that amount to gross human rights abuses. Moreover, in order to guard against exacerbating such situations, businesses should not only draw on internal expertise, but also “consult externally with credible, independent experts, including from governments, civil society, national human rights institutions and relevant multi-stakeholder initiatives”.

At a general level, adopting a human rights-based approach would enable Facebook to shift its predominantly ad hoc and reactive approach to the development of content moderation policies towards a principled and structured framework underpinned by the common conceptual vocabulary of human rights law. A human rights-based approach would also equip Facebook with a set of tools with which to assess the actual and potential adverse human rights impacts of its moderation rules, processes, and procedures holistically, spanning their conception, design and testing, their deployment in different contexts, and their ongoing monitoring and evaluation.

A particularly important line of inquiry, however, concerns the potential significance of a human rights-based approach to platform operations in the more specific contexts of conflict-affected and atrocity-afflicted communities. It is this line of inquiry which will be explored in Part Two of this post.
