How Myanmar’s Incitement Landscape Can Inform Platform Regulation in Situations of Mass Atrocity
[Jenny Domino is Associate Legal Adviser of the International Commission of Jurists. The piece draws upon her previous in-country work as a Harvard Law School Satter Fellow, and builds on her forthcoming publication on legally conceptualizing Facebook’s role in Myanmar’s incitement landscape.]
The recently concluded The Gambia v Myanmar provisional measures hearing at the International Court of Justice (ICJ) renewed the focus on the crucial role played by Facebook in facilitating incitement against the Rohingya. The Gambia’s application included references to the United Nations Fact-Finding Mission’s (UN FFM) documentation of hate speech and inciteful statements posted on the platform. It alleged, among other things, that Myanmar violated its treaty obligations under the Genocide Convention by inciting genocide, a punishable act under Article III of the treaty. This resonates with the latest report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, which asserts that state inaction against incitement to genocide may contribute to “very serious consequences for vulnerable communities” and is as “condemnable” as the speech itself.
An equally important development on the topic of online incitement occurred at the same time the ICJ hearing was taking place, though with considerably less fanfare. On December 12, Business for Social Responsibility (BSR) released its Human Rights Review (Review) of the Facebook Oversight Board (FOB), the body set up by Facebook to decide “the most challenging content on the Facebook and Instagram platforms and issue policy advisory opinions on Facebook’s content policies.” Mark Zuckerberg himself announced the plan to launch the FOB in November 2018, and a highly consultative process on the FOB’s draft charter ensued. The FOB’s final charter was published in September 2019.
Given the timing of the ICJ hearing and the BSR Review, it is difficult to resist the temptation to examine the BSR Review with Myanmar’s incitement landscape in mind. As efforts to refine content moderation are underway, one question deserves deep reflection: how should one legally characterize the damage wrought by Facebook in Myanmar? This is important to consider because Myanmar is arguably the most emblematic and sinister case of content moderation gone wrong in our time, and because platform regulation is an area where novel ideas have a realistic chance to take root. The creation of the FOB is itself an “unprecedented innovation.”
It is worth recalling that Facebook was remiss in its content moderation of Myanmar-related posts until it was too late; significant improvements noticeably surfaced only in 2018, after other issues had converged with the Myanmar situation to generate very bad publicity for the platform. To repeat, Facebook posts depicting the Rohingya as an existential threat to Myanmar’s Bamar race and Buddhist religion formed part of The Gambia’s factual allegations invoking state responsibility under the Genocide Convention at the ICJ. They are included in the UN FFM report, which in turn can reasonably be presumed to have informed, to an extent, the recently authorized investigation into the situation in Bangladesh/Myanmar for potential crimes against humanity at the International Criminal Court (ICC).
Gap in (corporate) legal responsibility
What do these parallel developments highlight? First, they reiterate the gap left open by current regimes of legal responsibility for punishable acts under the Genocide Convention, including the crime of direct and public incitement to commit genocide. Whereas formal legal regimes exist for states and natural persons, no counterpart regime exists for corporations. Some have suggested applying international criminal law to appreciate the harm wrought by platforms in mass atrocity situations. This is not surprising; it builds on current efforts to expand the personal jurisdiction of the Rome Statute to legal or juridical persons, and reflects a similar trend in the business and human rights field to allow for the prosecution of corporations domestically for gross human rights abuses.
A significant focus of the contemporary movement to advance business and human rights has been corporate involvement in human rights abuses, especially in conflict settings. Corporate accountability efforts have thus aimed to curb corporate interference with the individual exercise of human rights. In the platform context, this is illustrated by Facebook’s role in the spread of incitement in Myanmar.
But the Facebook issue in Myanmar shows another unique and “unprecedented” side of corporate power: the private “regulation” of state speech. Here, corporate power is wielded not only against the individual person, but against the state itself. When Facebook banned the Myanmar military’s Commander-in-Chief from the platform, it effectively regulated the actions of a state actor. In a regulatory paradox, the traditional roles of regulator (state) and regulated entity (non-state) are reversed. That corporate power can trump the state’s may not be a groundbreaking idea; the remarkable fact is that, in Myanmar, it did. In this instance, what sort of public-facing obligations should a corporation with such awesome power (outside the regulatory jurisdiction of what would traditionally be considered a host state, and yet able to an extent to regulate the actions of that state) have when operating in situations of mass atrocity?
Rightsholders and remedy for incitement
The BSR Review recognized the relevance of rightsholders, including non-users, to the mandate of the FOB. Under the FOB’s final charter, access to the FOB is limited to Facebook users. BSR recommended that any version 2.0 include access for non-users with respect to content that “directly or indirectly impacts” them. It also suggested including “irremediability” (i.e., whether a remedy can restore the victim to the same or an equivalent position as before the harm) as one factor for identifying severe cases to be prioritized by the FOB, along with scale (the gravity of the harm) and scope (the number of people affected). This tracks the language of the UN Guiding Principles on Business and Human Rights (UNGPs).
Further, BSR recommended the establishment of effective operational-level grievance mechanisms to the extent possible. This would help ensure that companies operate under a human rights-protective framework (see here the International Commission of Jurists’ proposed performance standards for assessing effectiveness). However, BSR was quick to note that, in contrast to traditional industries where there is a “bounded number of rightsholders based in clearly defined geographical areas,” Facebook’s grievance mechanism would involve billions of rightsholders, users and non-users alike. It therefore qualified its recommendation with a reasonableness criterion.
In the context of incitement to genocide, the BSR recommendations are important in that they underscore the impact of Facebook’s Hate Speech policy on users and non-users alike, as I articulated in a previous post. Incitement to genocide surely qualifies as the most severe harm in the platform context. However, this is one area where implementation may prove the most challenging. For instance, how does one assess direct or indirect impact in the realm of speech? As Jeremy Waldron has argued, the harm in hate speech lies in the violence inflicted on the social fabric that shapes public perception of a vulnerable group, rather than in a specific assault on an individual person’s dignity. International criminal jurisprudence on incitement betrays this difficulty in its unclear and inconsistent application of evidentiary standards and causation analyses in prosecuting speakers, further compounded by genocide’s specific intent requirement. Any such grievance mechanism will thus be beset with the challenge of designing a procedure to identify rightsholders affected by incitement.
BSR also suggested, among other remedial measures, providing financial compensation “where warranted” and to the extent that harm can be “economically assessed,” noting that the most severe harms may require such remedies. Again, damage may be hard or “unreasonable” to quantify. Does hinging financial compensation on an economic assessment preclude non-users impacted by Facebook’s Hate Speech policy? If so, this would render the remedy of compensation the least useful for those who may be most harmed by content moderation.
Regulation of platforms operating in situations of mass atrocity
One notable omission in the BSR Review is how international criminal law and its institutions can shape the FOB’s work, in the same way that Facebook’s Myanmar-specific improvements were informed by the UN FFM report. Though the UNGPs make reference to international criminal law, and such reference may therefore be implicit in the BSR recommendations, BSR nonetheless could have given more clarity on how the work of the relevant institutions should influence the FOB’s exercise of its mandate. At the least, international criminal law can serve not necessarily as a source of analogous judicial precedent for FOB cases, but as a signpost to ensure institutional alignment in areas where the mandates of (private) platforms and public institutions intersect. For instance, if the ICC Prosecutor issues a public statement on a potential situation, as has been done in the past, platforms operating in those contexts could place the country under priority assessment to further guide the FOB’s case selection and to prevent further escalation of violence.
This raises the question: must the cooperation of business enterprises be required, and if so, how? Companies’ profit motives will not always align with the public interest, and the court of public opinion cannot be relied upon as the sole means of regulating corporate involvement in widespread human rights violations. As the Myanmar example has shown, there seems to be merit in legally obligating compliance and cooperation, at least in situations of mass atrocity. Should regulation take the form of a penal sanction, as an international criminal law analysis suggests? Should formal arrangements be established between international organizations (e.g., the UN, the ICC) and dominant social media platforms? Should another form of corporate accountability mechanism be explored? Is domestic regulation sufficient? Regardless, reframing the debate to require platform regulation in mass atrocity contexts, rather than simply incorporating human rights standards into the way platforms moderate content, seems called for.