11 Jun Facebook’s Answers to Questions about its Human Rights Policy
[Sam Zarifi is the Secretary General of the International Commission of Jurists (ICJ). The list of questions in this post was crafted with the input of ICJ and the Opinio Juris Editorial Board.]
Recently, Opinio Juris featured a contribution from Miranda Sissons, Facebook’s Human Rights Director, outlining Facebook’s newly launched human rights policy with the agreement that Facebook would engage in a follow-up post in a question and answer format. Please see that exchange below:
Facebook has repeatedly asserted its position that it is not legally bound by international human rights laws, yet there is growing jurisprudence at the national and even regional level that binds private companies, and/or their individual officers and stakeholders, and holds them accountable for violations of international law. How will Facebook adapt its conduct (and its human rights policy) to reflect these developments?
Nothing like starting with an easy question! Our policy, the UNGPs, and human rights principles exist alongside many other relevant legal frameworks in many other jurisdictions. Some of those legal frameworks are aligned with human rights principles; many aren’t.
It’s a period of intense development of legislative and regulatory frameworks. The digital space is arguably the most dynamic and challenging of any in the human rights world—and the UNGPs are just ten years old. Our policy and our work need to reflect those realities. But Facebook recognizes its global impact, and with it, our responsibility to understand, appreciate, and respect international frameworks. This policy is an important action step in that regard, and a strong basis for further action. We’ll be trying to develop our work as those external frameworks develop.
How will Facebook interpret and apply the international standards to which it has referred as guiding its Human Rights Policy? Will it follow the jurisprudence of the various UN experts’ bodies, special rapporteurs, and regional tribunals, opinio juris, or State practice?
The policy covers a wide range of human rights issues in what is a very dynamic field. In order to keep up with key human rights thinking, we regularly consult the authoritative guidance of various expert bodies including the UN treaty bodies, and have strengthened our engagement with UN (and regional) special rapporteurs to learn from their work – and find opportunities to collaborate.
We also have a very extensive external stakeholder engagement process for our content policy development, and regularly consult with academics, UN officials, and activists to hear their advice and views about how best to integrate human rights standards and protections into our policies and enforcement.
We expect our annual human rights disclosure report to be another important opportunity to reflect on key sources of guidance, and (most importantly) we seek to use them to guide policy development and decision-making. Because ultimately all of this is to ensure we can know, show, mitigate, and prevent human rights risk in a very dynamic sector in a very risky world.
Facebook has stated that “When faced with conflicts between such laws and our human rights commitments, we seek to honor the principles of internationally recognized human rights to the greatest extent possible.” Can you describe the parameters of this extent? What about instances where domestic law is at odds with the State’s international legal obligations? Will you publicize instances of such conflicts, and how Facebook resolved the conflict? Will such decisions have precedential value?
The language in the policy reflects the language of the UNGPs, and reiterates our long-standing commitments as a member of the Global Network Initiative.
The language above on conflicts of laws and human rights commitments is a key principle. We use it, every day, to assess government requests to take down content or disclose user data.
There’s an active, transparent, robust series of review and protection steps we take to try to ensure such requests are aligned with human rights standards and that we meet our GNI obligations (our implementation of which is independently assessed every two years).
These decision-making and review processes aren’t new, and our practice is fairly well established – which is a good thing, as it’s coming under increasing pressure as civic space closes: a large number of governments are trying to roll back the parameters of practical defense in many ways, while others are of course trying to develop rights-respecting regulation.
But yes, companies don’t have the power of states. States sometimes use legal, political, and adversarial pressures to try to force us to remove content or take other steps that they desire, but that violate freedom of expression or privacy standards, or are undesirable in other ways.
The substantive content of certain rights isn’t always well articulated – for example, the protections for dignity and reputation related to freedom of expression in the ICCPR. Internet shutdowns and blocking (as in India and Myanmar), throttling (as in Turkish legislation), threats of criminal prosecution (as in Brazil and Thailand), physical threats or acts against infrastructure, intimidation... there are myriad forms of pressure against a target, as well as routine or perverse conflicts of laws.
Yes, transparency is a powerful mitigation. It’s an important principle in its own right – and we do publish regular transparency reports, which I think can be extremely informative if read thoughtfully. (We’ve also tried to improve the way we communicate with a new transparency center.)
But as a rights defense tactic, transparency tends to be most helpful if others also defend that right. Frameworks and practices are dynamic, and there are a lot of concerning developments. Many national and international human rights groups have expressed concerns – shared by Facebook – at the implications of the amended social media law passed in Turkey in July 2020. The Global Network Initiative, of which Facebook is a member, recently issued a letter expressing serious concerns at the way new information technology laws put rights at risk in India.
How will Facebook identify human rights defenders? How will Facebook resist efforts by national authorities seeking information to target human rights defenders and political opponents?
Facebook, Instagram, and WhatsApp are incredibly important tools for people to exercise their human rights, and for human rights defenders. The company has in fact worked with defenders since before 2011.
For example, we’ve funded digital security support services as well as digital security and other training; developed a robust stakeholder engagement process for our policy development that routinely includes defenders; taken down networks of adversaries that target human rights defenders; and provided certain forms of practical support.
But it’s the internet. User support isn’t great. Global scale is incredibly intimidating, and means that prioritization is key—something very hard to see from outside. Our goal is to develop more consistent, replicable, and reliable means of interaction and support. We did specific due diligence on this in 2020, and have worked hard to follow up. We now have an internal group focused on this. We’ve also worked to ensure defenders’ voices, experiences, and input help influence our policy development and product changes.
A good example of that is the launch of a powerful anti-bullying protection, the ability for users to control who comments on their public posts.
In terms of content policies, I’d point to the expansion of our Misinformation and Offline Harm policy in late 2019 as one example of defender-related changes, and the adoption of the veiled threats policy in mid-2020 as another, with more on the way.
On product, we’ve experimented with co-design and also with real time due diligence. There are concrete results. For example, on March 31 we launched a new safety feature in Myanmar that allows you to lock your profile so that non-friends cannot see photos and posts on your timeline. This feature protects not only the activist using it but their friends and contacts.
We know from our own experience, from defenders, and the most recent report on death threats against human rights defenders by Mary Lawlor, the UN Special Rapporteur for Human Rights Defenders, that we need to do more to protect defenders from abuse on the platform. So we’re trying to translate that knowledge into practical, actionable steps, knowing there’s more work to do.
As a result, I have two requests: first, if you’re reading this, please ensure you use app-based two-factor authentication or a physical security key for all your online accounts. Don’t use SMS, if you can avoid it. And second, if you’re concerned about bullying and harassment, I want to repeat: as of late March 2021 users can now control who is able to comment on their public posts (instructions here). We listened to our users who asked for this important harassment mitigation tool, and have made it happen.
Much of the early discussions around Facebook’s responses (including actions taken by the Oversight Board) have revolved around removing particular posts (or not). In other circumstances, will Facebook revise its algorithms to drive less traffic to a site or person, or ‘nudge’ people toward more factual sources of information, or toward human rights defenders?
Until just a few years ago, the approach to content on social media platforms was a binary one – allow or remove. But human rights issues are so complex and nuanced that we need a full range of options – including new ones that will be developed in time – to find the most effective ways of promoting rights on the platform.
Several of Facebook’s more nuanced, more proportionate solutions to content challenges recognize this reality: applying warning interstitials to many types of graphic content rather than removing it, working with third-party fact checking partners to label and debunk misinformation, and leveraging automation to reduce the visibility of content that is assessed as highly likely to violate our policies even before it is reported or removed. (See more here).
As Nick Clegg, our VP of Global Policy, recognizes in his recent article, “Companies like Facebook need to be frank about how the relationship between you and their major algorithms really works. And they need to give you more control.” And, as we move to implement our Corporate Human Rights Policy predictably and consistently, we will continue to engage and experiment with a wide range of options from deleting content to downranking it to providing people with additional context.
How will Facebook handle situations where there is a conflict with a binding human rights obligation of a State – which may differ depending on where a post has been uploaded? In other words, are the principles going to be applied uniformly when considering international human rights standards – or does the international law that applies in a jurisdiction mean that Facebook will treat similar situations differently depending on the jurisdiction from which the content emanates?
It’s almost impossible to answer this kind of hypothetical. There’s a huge body of relevant law, huge conflict of laws, places where we have binding legal obligations, and places we don’t. That just underscores that the global approach and coherence of the UNGPs and related global human rights standards are a welcome guide.
Let’s put it this way: states acting to uphold human rights principles aren’t often a problem for us; where they do so, we’re usually in alignment on policy, and need to work on enforcement in practice.
So it’s usually not the obligations that are a problem, but the interpretation of those obligations and coherence/incoherence with other bodies of law.
Either way, these are situations where our human rights policy, and good timely human rights due diligence, can guide decisions in each case.
It’s also important to know our content policies are global for several reasons, two among them: a) the internet should be open and global, and b) that’s been the only practical way of developing policy and operations that can maximally align us with relevant global human rights norms, including expression, information, political participation, and peaceful assembly.
Facebook has stated that it will provide remedies for its decisions, primarily through the Oversight Board, which will address complaints about removal or nonremoval of content. These remedies address violations of freedom of expression. Will Facebook consider other types of remedies (restitution, compensation, rehabilitation, satisfaction and/or guarantees of non-repetition), and remedies for other breaches of its policy, based on principles flowing from the main human rights instruments that Facebook has committed to respect, all of which contain provisions on remedies (i.e., the UNGPs, and the UN’s Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law)?
The UNGPs are obviously key for company work – and under the protect/respect/remedy framework, the key company expectation is the creation of operational grievance mechanisms.
The Oversight Board is obviously an ambitious experiment in remedy and a powerful operational grievance mechanism – while some have been critical of what they see as its limited mandate, it has shown in its early decisions and recommendations a willingness and desire to address a broad range of rights issues implicated by the platform.
As you can see from the human rights due diligence we helped facilitate for the Oversight Board (it’s available publicly here), there’s a detailed discussion of a wide range of human rights principles, and I was excited to see it given my own earlier work with the International Center for Transitional Justice.
I hope you can see elements of remedy in a wide range of our work – satisfaction and acknowledgement in the Sri Lanka HRIA release; access to remedy plus restitution in the appeals and Oversight Board process; measures for non-repetition in our policy and product development work; centrality of rights holders and survivors in our due diligence, policy development and co-design work.
Among the list of standards you have cited as sources, the only regional treaty Facebook lists is the American Convention. Why is that (perhaps because Facebook is US-based, but then the US is not a party to that Convention)? Will Facebook also follow the European Convention, the African Charter, and the Arab Charter, given the legal obligations they create for States in the regions where they apply?
As a global company, we have binding legal obligations in the countries where we’re incorporated. That’s part of the fundamental reality of the internet – it’s a reality that, by and large, has served human rights well – although it’s a reality that’s also under great stress.
For example, one particularly difficult example is our obligation, as a U.S. company, to comply with U.S. sanctions regulations (which vary by circumstance) in all the jurisdictions where we operate.
Sanctions regulations applicable to Facebook include those administered by the U.S. Dept. of State (e.g. the Foreign Terrorist Organization list) and the U.S. Dept. of Treasury’s Office of Foreign Assets Control. Violations of sanctions regulations can result in significant civil and criminal penalties.
We believe in supporting voice as people share and debate, and so have teams with language, regional, and legal expertise reviewing accounts and content against our Community Standards (including our dangerous organizations policies) and applicable U.S. sanctions laws. We seek to maximize freedom of expression while also complying with the law.
As a global company, and as a set of services linking users around the globe, we also need global guidance. That’s why our focus was to anchor Facebook in the key global principles – the UN Guiding Principles on Business and Human Rights – while ensuring that anchor is also moulded by a wide range of respected human rights standards, including but not limited to the American Convention.
Why does Facebook invoke the Charter of Fundamental Rights of the European Union, which is not really a treaty, but omit the European Convention on Human Rights, which has the most evolved (and binding) jurisprudence of any human rights treaty?
A typo! We need to do some rapid iteration here (we always knew we’d need to iterate, that’s what distinguishes policy making in a tech environment from more static environments.)
Will Facebook’s Human Rights Policy be guided by important non-treaty instruments that could have been included, such as the Tshwane Principles – which are PACE-endorsed, contain standards on freedom of information in the national security context, and would be important guidance for ensuring that Facebook resists State pressure and does not suppress information on spurious national security grounds?
We’ll be looking at all relevant guidance. There’s no shortage.
It’s important to note we’re not working in an academic context. We work with standards every day to inform actions, mitigations, and decisions. In general, there’s plenty of guidance to resist state pressure, but far fewer practical footholds for doing so... that’s why implementing our GNI commitments and human rights policy is so important.
Facebook commits to carrying out due diligence in terms of human rights and refers to several Human Rights Impact Assessments it has carried out (mostly in Asia). Have these studies been updated to examine whether Facebook’s impact has improved? Are there newer studies being carried out?
Yes, we’ve expanded our due diligence work, and are updating others. It’s very obvious that a one-and-done model of due diligence – a project that can take more than a year to complete – can be useful in some instances, but many of us (including tech companies) need to develop other and faster tools for human rights risk management in real time, and in complex systems.
Right now we’re experimenting with product due diligence and decision due diligence, as well as prioritization models and other kinds of frameworks. We’re trying to use the great parts of agile software development processes – user research, data-driven decision-making, swift escalation procedures – to help make our due diligence count. That’s in addition to the legal and operational procedures we have around our commitments as a Global Network Initiative member, and that you’ll see in our transparency reporting.
Facebook has committed itself to protecting vulnerable or marginalized groups, and to embracing the concept of non-discrimination. Can Facebook clarify the ‘prohibited grounds’ it will consider as subject to discrimination, or particularly prone to marginalization?
This is especially important in respect of contested grounds: for instance, caste/descent, LGBTI+ identities, citizenship, etc.
So here the most relevant existing body of work, at least in the content realm, are Facebook’s Community Standards.
They include an extensive list of protected characteristics, including race, ethnicity, national origin, disability, religious affiliation, caste, sex, sexual orientation, gender identity, and serious disease. We consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants, and asylum seekers from the most severe attacks.
We have similar rules in our advertising policies, as well as additional guidelines on discriminatory targeting criteria. Our Responsible AI team is also deeply invested in work on fairness and non-discrimination within our products.
Under our human rights policy and our new civil rights team, established as a result of the civil rights audit that was finalized in 2020, our work in support of – and in collaboration with – vulnerable or marginalized groups will expand, and I’m very excited to see that team forming in our legal org under the leadership of Vice President Roy Austin. The challenge is not only to prevent hate speech, but to ensure we do the due diligence to consider the impact of our policies and products on vulnerable and marginalized groups. We’re working at a scale where almost any decision has multiple competing rights aspects, so working to define and prioritize the most salient risks is also really tough but hugely important.
Are complaints about the application (or nonapplication) of the human rights policy subject to review by the Oversight Board, or any other appeal mechanism?
So first, it’s important to note the policy is linked to our Facebook code of conduct, and also to our various whistleblower and complaints mechanisms. And second, there are several areas covered by the policy that may have their own grievance procedures.
But at the broadest level: no. I’d be very interested in learning of any business and human rights policy that has such a mechanism in addition to an operational grievance mechanism. A great and perhaps influential research topic!
The Oversight Board is fundamentally about content, and our human rights policy is far broader. The Board issues content-related decisions, which are binding – but that said, it can also choose to make other policy recommendations, and we can see they’re vigorously exercising that choice. In that case we’d have 30 days to respond indicating whether or not we accept the recommendation, and, if not, why.
The UNGPs are premised on the protect/respect/remedy framework, with states and businesses both playing a role in ensuring company respect for its human rights responsibilities.
An important step in the policy is periodic reporting to the Facebook board of directors and oversight by Nick Clegg (VP of Global Policy and Communications) and Jennifer Newstead (VP Legal and General Counsel), two of our most senior leaders. And of course, the annual human rights report, where we will share insights and actions from due diligence – as well as plans going forward.
We also expect our engagement with external stakeholders to help hold us accountable, which is why we are investing so much time and effort in engaging with NGOs, civil society groups, UN actors, advertisers, investors, and so many others. Mind you, whatever we do, there’s a lot more we should do – I want to be clear and humble, and say we are really at the beginning of our work.
Will Facebook support the development of international law more directly applicable to its conduct, such as a potential international treaty to regulate hate speech, or more broadly, the treaty on Business and Human Rights?
Our priority now is to seek to make sure that we live up to our responsibilities to the UNGPs and to human rights law standards – as well as soft law such as the Rabat Principles – as set out in our policy. That’s a huge challenge. But this is a dynamic space for human rights. We recognize that international and regional bodies may propose new treaties on issues of relevance to social media platforms and their impact on human rights. If so, we’ll seek to engage constructively to respect and support the rights of users.
If those were the only international legal developments applicable to tech and freedom of expression, it’d be a walk in the park. We have hundreds of governments creating and implementing regulatory frameworks, many of them pernicious and incredibly complex. It’s a moment where it would be incredibly useful to focus on protecting and reinforcing principles we already have, while also building towards a different future.
We know that Facebook functionalities can be used to attack and harass human rights activists, and we know Facebook is also working hard to address many cases. How can activists who’ve been trolled or defamed on Facebook seek and gain redress?
As part of our commitment to human rights defenders defined in the policy, we are seeking – through our policies and their enforcement – to ensure that defenders who are harassed or threatened have recourse to self-remedy, operational support, and also administrative remedy. We also want to support defender communities through digital security training and other efforts, such as our new pilot defenders fund.
A good example of a tool for users to prevent or mitigate bullying and harassment is the comment control feature we launched on March 31. You can now control which audiences can comment on one of your posts, or on any post from your public profile. That’s an important innovation – a great deal of bullying and harassment happens through the comments function. We offer block and reporting functions in our private messaging systems, reporting functions in Facebook and IG, and have detailed and rapidly evolving bullying and harassment policies, including gender-specific policies. We enforce against users who engage in such tactics, with consequences depending on severity, up to and including account disablement. But it’s clearly a very pronounced part of online (and perhaps universal) human behavior.
As to judicial redress for defamation or harassment – that is of course part of the respect/protect/remedy framework, where it’d be ideal for defenders to be able to seek judicial remedy against the perpetrators in addition to whatever administrative measures Facebook can provide, which are basically limited to account-related actions. In general, the research our team has done indicates this is rarely the case – although I’m looking forward to reading these recent proposals from the Ontario Law Reform Commission.
Does Facebook have a clear policy in cooperating with international investigative mechanisms increasingly called upon to preserve and share evidence for possible prosecution by international, regional, and national tribunals (e.g., UN-sponsored mechanisms documenting violations in Syria, Myanmar, Venezuela, North Korea, Yemen, South Sudan)? Will Facebook preserve data for accountability purposes in situations where it is or should be aware of serious human rights violations?
This is important. Actions speak louder than words.
Our policies are routinely scoped to keep carve-outs for human rights documentation, and we’ve begun to lawfully and voluntarily disclose data to the Investigative Mechanism for Myanmar, mindful of its mandate to support other justice processes. We routinely preserve data and assess disclosure requests from national law enforcement bodies in a manner consistent with our commitments as a GNI member (guidelines here), and have tried to experiment with supporting real-time monitoring and documentation needs with CrowdTangle.
So there’s movement. And there’s the basis for some excellent work if one could involve rights-respecting national law enforcement bodies. But is that where folks want us to be? No. We’ve been exploring these issues with groups and universities who are active in this space. There are many different approaches to relevant use cases – some folks are interested in a broad documentation approach; others on the impact of automated enforcement on graphic violence material; and still others are more focused on criminal evidence, for example. This is definitely a good example of where we should explore developing policy (and practice) that’s also respectful of national and EU privacy laws.
How are you applying your Human Rights Policy on other Facebook-owned platforms, such as Instagram and WhatsApp?
Big question. The Facebook Corporate Human Rights Policy covers Facebook, WhatsApp, Instagram, Oculus and all other corporate entities that are part of Facebook Inc. So the first answer is – we’re applying it across the entire company.
The second answer: we’re going to have to build our implementation strategy working towards an ideal goal, but using a capability maturity model or similar tool to guide us there. Otherwise the challenge might be too overwhelming.
Third, we’re going to have to build on the very strong interest of the policy’s leaders to ensure we offer them, and the board, the best possible reporting and oversight mechanisms. Identifying salient risks is perhaps the easiest and most obvious step: working out how to actually prevent and mitigate them is far harder! Finally, we’re hiring. The one thing that holds true across my career in human rights, social impact, and tech is that the most powerful way to build something great is to recruit and motivate a great team. So if you care about implementation, please consider working with us to implement! The challenge is immense – but so are the opportunities.