Some questions regarding Facebook’s oversight board and remediation of human rights impacts (Part II)

To continue our review of Facebook's Oversight Board, in this second part we focus on the composition and working methods of the Board, its administering trust, and the applicable procedure. Here again, we have more questions than answers.

The Board is to be composed of 40 "independent experts" appointed for three-year mandates. They are to be experts in a broad range of relevant fields, including gender, social, political and religious issues. Members will be able to call upon additional expertise to provide guidance on local context and cultural norms.

For the initial formation of the Board, Facebook will appoint co-chairs and, together with them, select candidates for the remaining positions on the Board. Once the original forty members of the Board have been selected, they will be responsible for choosing and screening new members, who will be formally appointed by the trustees. But there is little further information on whether members may be removed, the grounds for such removal, or how their tenure will otherwise be secured with a view to guaranteeing independence.

A limited liability company (LLC) will be created by the trust to administer the Oversight Board. The Director of the Oversight Board and the case managers will be employed by the LLC and will be responsible for analysing cases, applying the criteria that will be set by the selection committee once it has been established. A selection committee composed of members of the Board will, therefore, have a pool of relevant cases available, selected by the case managers in accordance with the committee's criteria. The committee will decide by majority vote which cases go to the Board for review. Board members will serve on the selection committee for three months, after which the committee membership will rotate, allowing other Board members to serve. Since the selection committee will determine the criteria for case selection, having members rotate every three months could create confusion and unpredictability, especially if each committee opts to adopt new criteria rather than carry over those of its predecessors. It may be counterproductive to have case selection criteria that members need to debate anew and agree upon four times a year. That said, this method may allow the flexibility necessary for the Board to change priorities more frequently and react to pressing issues.

While the composition and working methods of the selection committee are fairly developed, there is very little information publicly available on the procedure for the Board's decision-making. What is known is that the Board will receive a case file with relevant information about the case, including Facebook's rationale as to why the content was removed from the platform.

The person who is soliciting the Board's review has the opportunity to submit a written explanation of why they disagree with Facebook's decision to remove the content. While this person does not necessarily need to be the one who posted the content, they must have a Facebook or Instagram account. This requirement therefore excludes non-users who may nonetheless be affected by harmful content and by the fact that Facebook allowed the content to be posted or was unduly slow to take it down.

The process between the point where the Board decides to take up a case, its receipt of the information required for deliberation, and its final decision is shrouded in mystery. It is unclear whether there will be any further investigation in relation to the content or the admission of additional evidence.

Another grey area concerns the intervention of third parties. The charter and bylaws do not address whether the user may have access to any kind of advice and/or support in relation to the formulation of their claim, the proceedings, or the available options. At the moment, the bylaws foresee two cases where outside sources may be asked to intervene. First, the Board may request information from a pool of experts from a range of subject matters, such as academics, linguists and researchers, on specific issues like culture or region. Second, the Board may request briefs from public interest advocacy organizations. It is important to highlight that in both cases the Board has full discretion over whether to request the intervention of external persons, and the claimant user may not even be notified of its decision. It is unclear whether there may be other options available for users to support the argumentation of their case, or for third parties who may have a legitimate interest in the process when it involves a matter of public interest, such as the moderation of certain political content.

In addition to the Board's being unable to address content in advertisements, Facebook has also removed from the Board's purview the possibility of reviewing content that may go against domestic law. To an extent, that makes sense. Users may not have a legitimate claim to post content that is clearly against national law (although this determination will require some level of expertise in national law), but in certain cases national law may be inconsistent with the international human rights obligations of the concerned State, as shown by the ICJ in its recent report on South East Asia. Yet content that is banned in one jurisdiction may be allowed under the laws of another, and Facebook's decision to remove that content, combined with its refusal to review that decision on the ground that the content is unlawful in one jurisdiction, may unintentionally put in effect a global ban on that content. This may not be compliant with the policies and laws of other jurisdictions and, most crucially, with international human rights law.

Most importantly, as a business corporation, Facebook has its own global responsibilities to respect and uphold all human rights in accordance with the UN Guiding Principles on Business and Human Rights and other international standards. Abiding by national law may be a necessity in certain cases, but it is certainly not enough as a company policy.

The Board is also not allowed to review certain decisions of staff to remove content from the platform when review may cause unfavourable government action against Facebook. Setting aside the vagueness and imprecision of language such as "unfavourable action", this rule reflects poorly on the company's commitment to human rights when it fears adverse political consequences. Many human rights defenders in the field in countries with poor human rights records may wonder whether Facebook is a reliable ally for human rights in the world or whether it will blink at the first sign of government bullying.

This brings us to issues in relation to the normative framework that Facebook is to use to set up and operate its Oversight Board. Content posting is arguably the core of operations by social media companies like Facebook, and posting or removal decisions have a potentially direct and obvious impact on rights such as freedom of expression and information, and privacy. However, content may also lead to the commission of serious human rights violations entailing crimes under international law, as seen in Myanmar. In addition, other human rights may be engaged, such as the freedoms of assembly and association, and the right to fair trial and due process. But Facebook does not seem to take all these human rights on board. As stated in the charter, the basis for the Board's decisions is Facebook's policies and values. The platform's Community Standards were updated in September 2019 to include the principles of voice, dignity, authenticity, safety and privacy, and a reference to international human rights standards. But the charter and the bylaws of the Oversight Board make no concrete commitment to the use of an international human rights framework in decisions on content moderation. True, the introduction to the bylaws states that the purpose of the Board is to protect freedom of expression, and Mark Zuckerberg has publicly stated his commitment to freedom of expression, but given Facebook's roots in the United States, this suggests a legal framework grounded in US Constitution First Amendment jurisprudence, whereas for a company with global operations the international human rights framework (especially Articles 19 and 20 of the International Covenant on Civil and Political Rights) would be a more suitable framework.

There are several other issues that would need to be clarified for the Board to become an instrument of effective redress for users affected by harmful content and decisions made by Facebook staff. For instance, as noted above, there are questions as to the composition and legal qualifications of the Board members, especially when they will make determinations as to compliance with national law and international standards. Will they have representation from various regions and legal traditions, or will they rely entirely on their secretariat or on advice provided by external experts on request? Will the Board operate following an adversarial model, where the Board members decide between arguments presented by two sides, or will it follow an inquisitorial model, where the Board members act as investigators and questioners? And what is the standard of review to be used and the burden of proof set on the complainant? Is there a presumption of publication (or non-publication) of the Board's decision? Can prior restraint be placed upon future posting?

We do not know much either about how Board decisions will influence or serve as precedents for future decisions, so that the Board does not need to repeat the same reasoning in future cases that present essentially the same facts or issues. Since the Board has the power to decide on policies and those decisions are binding, it may decide to turn its decisions on specific cases into policies of general application.

Nor is it clear what nature and form of reparation the Board may grant to the complainant in its final decision, other than the eventual re-posting of content unduly removed from the platform. Will aggrieved users at least get an apology from Facebook? Will they have the option of something else, including some form of compensation or guarantees that the incident will not happen again?

But there seems to be room for some cautious optimism. Although the current commitment to human rights law is narrow, there is room for improvement, as the full scope of the Board's mandate has not yet been defined. In the bylaws, Facebook states that it will continue to develop the infrastructure required to support the Board's review of content, since the Board's scope will grow (p. 16), and that "in the future" people will be able to "request the board's review of other enforcement actions (…)" (p. 22). Facebook has not provided timelines as to when these expected future improvements may be implemented. The Board is expected to begin its activities during the summer and, hopefully, once Board decisions are being made, the process will become better defined.
