Digital Accountability Symposium: Mass Atrocities in the Age of Facebook–Towards a Human Rights-Based Approach to Platform Responsibility (Part Two)

[Barrie Sander is a Postdoctoral Fellow at Fundação Getúlio Vargas, Brazil. This is the second part of a two-part post. Part one can be found here.]

Rising concerns and frustrations about the role of Facebook in exacerbating tensions within conflict-affected and atrocity-afflicted communities have coincided with growing pressure for the platform to adhere to a human rights-based approach to content moderation. The potential significance of this approach in such contexts may be illustrated by considering some of the possible reasons why Facebook was – by its own admission – “too slow” to prevent the spread of disinformation and hate in Myanmar.

State Actor Speech

One possible reason resides in Facebook’s general disinclination to restrict content posted by state actors such as government or military officials. While the basis for this stance remains unclear – an example of the platform’s more general transparency deficiencies – its roots appear to lie in two policies: first, Facebook’s so-called “newsworthiness exemption”, which allows content that violates its community standards to remain on the platform when the public interest in seeing it outweighs the risk of harm; and second, Facebook’s policy for banning “dangerous individuals or organisations”, which a spokeswoman for the platform has unofficially acknowledged focuses on non-state actors. Regardless of its precise foundations, Facebook seems to have neglected to consider the adverse consequences that may flow from its general reluctance to restrict state actor speech in contexts such as Myanmar, where there is a history of state-sponsored violent oppression of minority groups.

In September 2017, for example, amidst international uproar that Myanmar’s military was engaging in “a textbook example of ethnic cleansing” of the Rohingya minority, Facebook designated a Rohingya insurgent group a “dangerous organisation” – a designation that results not only in the group being banned from the platform but also in the removal of all content supporting or praising it – whilst failing to address inflammatory content connected to the Myanmar military. Only in August 2018 did Facebook begin removing accounts and pages tied to the military in a belated attempt to prevent its platform from being used to further inflame ethnic and religious tensions in the country – by which point the Independent International Fact-Finding Mission on Myanmar had concluded that sufficient information existed to warrant the investigation and prosecution of Myanmar military officials for genocide, crimes against humanity, and war crimes.

Applying a human rights-based approach to content moderation, Facebook should, as a general rule, treat hate speech posted by state actors in the same way as hate speech posted by non-state actors. This point was recently emphasised in a report authored by David Kaye, the UN Special Rapporteur on freedom of opinion and expression, which explains that, given their prominence and potential leadership roles in inciting behaviour, politicians, government and military officials “should be bound by the same hate speech rules that apply under international standards”. According to Kaye’s report, in the context of platform policies governing hate speech, “by default public figures should abide by the same rules as all users” and only as “an exception”, after an evaluation of context, should such content be protected as, for example, political speech.

Earlier this year, Facebook clarified that the company takes into account a variety of factors to determine “newsworthiness”, including country-specific circumstances, such as whether the country is at war; the nature of the speech, such as whether it relates to governance or politics; the political structure of the country, including whether it has a free press; and the risk of harm. The platform also noted that each of these evaluations “will be holistic and comprehensive in nature, and will account for international human rights standards”. Whether the platform is willing to invest in the human resources necessary to conduct detailed and informed contextual analysis of online content in different societies around the world, and thereby fulfil this commitment in practice, remains to be seen.

Enforcement In Context

Another reason for the inadequacies in Facebook’s content moderation practices in Myanmar is the platform’s failure to tailor its enforcement systems to the local context. One of the challenges of content moderation in Myanmar is the coded nature of many of the speech threats distributed on Facebook. According to the report of the Independent International Fact-Finding Mission on Myanmar, for example, “subtleties in the Myanmar language and the use of fables and allegories make some potentially dangerous posts difficult to detect”. Yet, as of early 2015, there were only two people at Facebook with Myanmar language expertise reviewing problematic content. Even by June 2018, the company had only around 60 Myanmar language experts for a userbase of approximately 18 million (a figure that has more recently risen to around 21 million).

Other deficiencies in Facebook’s enforcement systems included: inadequacies in the platform’s reporting tools, such as the lack of a function for flagging potentially violating content on its Messenger service; the platform’s failure to consider how Zawgyi – the dominant typeface used to encode Burmese-language characters in Myanmar – might impede communication on the platform and the proactive detection of violating content; and the absence of an effective mechanism for emergency escalation, such as enabling approved civil society organisations to flag problematic material for prioritised review by the platform.

The human rights risks arising from the application of Facebook’s enforcement systems in the local Myanmar context could have been identified and, at the very least, mitigated had the platform established a structured, inclusive, and ongoing human rights due diligence process. As the commentary to Principle 18 of the UN Guiding Principles on Business and Human Rights (UNGPs) makes clear, the purpose of human rights due diligence is “to understand the specific impacts on specific people, given a specific context of operations”, taking care to “pay special attention to any particular human rights impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization”.

Meeting these aims typically requires “assessing the human rights context prior to a proposed business activity, where possible; identifying who may be affected; cataloguing the relevant human rights standards and issues; and projecting how the proposed activity and associated business relationships could have adverse human rights impacts on those identified”. Particularly in a context such as Myanmar, where the platform’s growth was proactively fuelled by its Free Basics initiative – an app that, in collaboration with mobile operators, offers access to a limited number of basic online services without data charges – Facebook’s failure to undertake human rights due diligence was grossly irresponsible.

In order to assess the human rights impacts of its platform accurately, Facebook should have sought “to understand the concerns of potentially affected stakeholders by consulting them directly in a manner that takes into account language and other potential barriers to effective engagement”; where such consultation was not possible, the platform should have considered “reasonable alternatives such as consulting credible, independent expert resources, including human rights defenders and others from civil society”. Moreover, as the commentary to Principle 21 of the UNGPs explains, the adoption of a human rights-based approach also requires companies such as Facebook to “both know and show that they respect human rights in practice”, in particular by “providing a measure of transparency and accountability to individuals or groups who may be impacted and to other relevant stakeholders”. Instead, Facebook found itself subject to extensive ongoing criticism from civil society groups for consistently neglecting to engage local stakeholders and failing to communicate transparently how its moderation processes and practices operate in Myanmar – agreeing to commission a human rights impact assessment of its presence in the country only in 2018, and even then carefully restricting the scope of its mandate.

Conclusion

While it is not possible to comprehensively examine every dimension of a human rights-based approach in this post – other focal points include the preservation of user-generated human rights evidence and the treatment of content moderators – the preceding insights reveal some of the merits of the framework in the content moderation context. At the same time, it is important in closing to caution against viewing a human rights-based approach as a panacea.

Given the sheer volume of content moderated by Facebook, human and algorithmic error within the platform’s moderation processes is unavoidable. Moreover, since the corporate responsibility to respect human rights is non-binding, a combination of social pressure and smart forms of governmental (co-)regulation will likely be required to assist and incentivise Facebook to ensure its moderation systems are human rights-compliant. Yet, given the inherent limits of social pressure, as well as the risk of heavy-handed governmental regulation in this context, effective enforcement is far from guaranteed.

Beyond enforcement, it is also important to recognise that the implementation of a human rights-based approach to content moderation is not simple, raising complex questions concerning how to translate general human rights standards into particular rules, processes, and procedures tailored to the platform moderation context, including the diversity of services and spaces that Facebook offers (for example, advertising, recommendation engines, public pages, personal profiles, and private groups) and the wide range of societies in which the platform operates. Given this complexity, the risk inevitably arises that Facebook may try to co-opt the vocabulary of human rights to legitimize minor reforms at the expense of undertaking more structural or systemic changes to its moderation processes.

Finally, although this post has focused on Facebook – both the company and the platform – it is important to emphasise that the online environment comprises a wider range of actors (including governments, political parties, data brokers, mass media organisations, advertisers, and general platform users) and a more diverse set of technologies (including, for example, end-to-end encrypted messaging services such as WhatsApp – also owned by Facebook). In order to confront the concerns associated with platform governance in conflict-affected and atrocity-afflicted communities, attention will also need to be directed towards defining the responsibilities of this broader array of actors with respect to the wider range of technologies relied upon for online communication around the world.
