03 Mar Some questions regarding Facebook’s oversight board and remediation of human rights impacts (Part I)
By launching an “independent Oversight Board” as a central element of its governance system for content moderation, Facebook attempts to address some of the most acute challenges caused by the widespread circulation of hate speech and misinformation on the platform. To be sure, tackling misinformation and hate speech requires a combined response from tech companies such as Facebook and from governments. The latter have been slow to respond and, when they have done so, many have overstepped the boundaries of international law by enacting impermissible restrictions on freedom of expression on the internet (as shown by a recent report by the International Commission of Jurists on internet restrictions in South East Asia).
Facebook’s approach is embedded in Mark Zuckerberg’s vision of a new independent system that will address both governance and enforcement issues. The Oversight Board will provide a way for people to challenge Facebook’s decisions on content moderation through an “independent body, whose decisions would be transparent and binding”. Independence is the keyword highlighted by Zuckerberg, whose model includes a decentralized decision-making process and purportedly creates accountability and oversight, while guaranteeing that decisions are made in the best interest of the community and not only of the company.
Facebook says it will provide $130 million to a trust that will support the Board’s activities, allowing the Board to operate for at least two full terms of approximately six years. The first member of the Board’s administrative staff was recently announced: Thomas Hughes (former Executive Director of Article 19) will be the Director of Oversight Board Administration.
The recent release of the bylaws for the Oversight Board provides some detail on what it will and will not do, but at this stage there are more questions than answers. In its early stages of development, the oversight mechanism was dubbed Facebook’s “supreme court”. What Facebook is creating could become one of the most powerful human rights “tribunals” anywhere, with direct “jurisdiction” over billions of people, the ability to impose sanctions, and control over the practical development of key elements of freedom of opinion and expression, privacy, and hate speech. The Oversight Board is essentially the largest and most powerful operational grievance mechanism (OGM) established by any company. After years of investigating such mechanisms, we are aware of the opportunities and challenges presented here – with greater potential impact than any OGM we have analysed. Among other recommendations, our OGM report warns against designing and operating these mechanisms as if they were State-based judicial bodies, but concludes that, to be effective, they should respect essential elements of international standards of due process and reparations.
Facebook currently operates in most countries and territories in the world. With over two and a half billion users, it has become the preeminent channel of communication, expression and information for an important segment of the world’s population, not to mention its significance in terms of share of global markets in advertising and connected sectors. In terms of wealth, if Facebook were a country it would have the 90th largest GDP on the planet. Therefore, what happens (or does not happen) at Facebook, and what is decided about the shape of the platform and how it operates, has an enormous impact on these people’s exercise of human rights, such as freedom of expression and association. The surge of “fake news”, disinformation, hate speech and similar forms of content on the platform has generated a strong expectation that Facebook will do something to stop these harmful practices, but the company has been slow to deliver. The launch of the Oversight Board has led many people to believe (or hope) that Facebook will start to take its responsibility more seriously and deliver effective measures to address potential negative impacts on freedom of expression and other human rights, as well as other cherished social and political values.
After the release of the bylaws in late January 2020, it became clear that the Oversight Board would not be a standard operational grievance mechanism like most of those studied by the ICJ, which typically receive complaints from workers and people in the community, or others affected by a company’s operations, who are seeking some form of remediation. The Facebook Oversight Board operates in a radically different context.
In terms of who can access the system, there are serious questions to be raised. The Board – still in the process of formation – is in fact an “appeal” mechanism, which will review only decisions that Facebook staff have already taken to remove content from the platform. The request for review may come from users who are discontented with the removal decision, or from Facebook staff who have already removed the content but think the issue raises a broader policy question for the Board to consider. Crucially, this system does not contemplate review of content that has not been removed, and it provides no remedy to a user – or a non-user, for that matter – who feels that content is injurious to them and wants to “appeal” to the Board against a non-removal decision taken at a lower level. This is a major shortcoming of the process, given that much of the criticism Facebook has received has been due to its failure to remove hate speech or bullying content. As pointed out by a recent Amnesty International report, the problem lies at the core of Facebook’s business model – its surveillance-based system is based on human rights violations.
In some cases, the “user” complaining to the Board may be an organization, such as a human rights organization or trade union, rather than an individual, but it is unknown whether special procedures will be adopted for institutional complaints or whether they will be treated the same as those from individuals. An NGO dedicated to issues of public concern may have an interest in challenging decisions that it deems affect freedom of information or other human rights, but will it have standing to submit a review request to the Board?
A feature that severely limits the users’ access to remedies is that the Board will have the discretion to review certain cases and to decline to review others. It is not yet clear whether the Board will be required to issue an explanation of why it has refused to review a case.
Users will have 15 days to ask the Board for a review, and the review process will take up to 90 days, counting from Facebook staff’s last decision on the case. In urgent situations, Facebook can trigger an expedited process, in which the review will take up to 30 days. Board decisions are binding and must be implemented by Facebook, but the Board can also produce non-binding policy advisory opinions. Some removal decisions may have an immediate and irreversible impact on the rights of certain users or other external persons, but the mechanism is silent on whether temporary relief in the form of “interim measures” may be prescribed while the “proceedings” are ongoing.