Do Not Trust Facebook to Enforce Human Rights

[Neema Hakim is a third-year law student at the University of Chicago Law School, Editor-in-Chief of the Chicago Journal of International Law, and a 2021 Salzburg Cutler Fellow.]

Last week, Facebook released a “corporate human rights policy,” eight years after learning that its platform was being used in Myanmar to spread hatred that ultimately culminated in the violent displacement of 750,000 Rohingya Muslims. In its announcement, the company recycled a common talking point on its past involvement in human rights abuses. Facebook’s Human Rights Director wrote,

“There is no question that we, and other social media and tech companies, have been slow to recognize and address their adverse human rights impacts.”

That is a gross understatement.

Facebook is not a passive actor. It executed a years-long effort to promote the global adoption of its technology, devoted few resources to monitoring its social effects, and ignored numerous warnings from human rights activists. In this post, I focus specifically on Facebook’s role in the persecution of the Rohingya in Myanmar, an especially egregious—but nowhere near exhaustive—example of how the company contributed to serious human rights abuses and potential international crimes. Facebook’s human rights failures are attributable to economic incentives that will not change as a result of one policy statement. The only way to hold the company accountable is through regulatory schemes that increase the cost of contributing to human rights violations.  

Facebook’s Role in the Persecution of the Rohingya

On August 21, 2013, Facebook CEO Mark Zuckerberg announced the launch of “Internet.org,” an initiative to bring its platform to developing nations. Mr. Zuckerberg made repeated statements, including before the United Nations, that Internet access—to be provided in large part by Facebook—was a human right.

Facebook gained prominence in Myanmar sometime around 2013, launching a country-specific version of its app in 2015. The company then partnered with telecom providers to offer inexpensive access to the Internet, crafting the technology to steer consumers to its social media network. This arrangement drove tens of millions of people to become active users of the platform. And because Facebook benefited from network effects, entry barriers for other platforms were high, all but ensuring the company’s continued dominance in the region. In a 2018 report, the U.N. Independent International Fact-Finding Mission on Myanmar (“IIFFMM”) noted that Facebook became the dominant

“mode of communication among the public and a regularly used tool for the Myanmar authorities to reach the public.” The IIFFMM further concluded, “For many people, Facebook is the main, if not only, platform for online news and for using the Internet more broadly.”

In addition to being the main way many people accessed the Internet, Facebook offered the Myanmar military and influential Buddhist nationalists a sophisticated tool to wage a systematic campaign of hate speech and disinformation. Over a period of years, the company grew its user base while bad actors used the platform to gradually justify violence against the Rohingya.

Meanwhile, in early 2015, there were only two Burmese speakers at Facebook reviewing potentially harmful content for a country of some 50 million people. Facebook’s automated systems failed to fill the gap. In a 2018 special report, Reuters provides a striking example. One post stated in Burmese,

“Kill all the [anti-Muslim slur] that you see in Myanmar; none of them should be left alive.” Facebook translated this post into English as, “I shouldn’t have a rainbow in Myanmar.”

Facebook cannot claim ignorance as to the situation in Myanmar. The company received numerous warnings from human rights activists and researchers as early as 2013. Those warnings were ignored. One individual even gave a presentation at Facebook’s headquarters in Menlo Park. Recounting the interaction, he said,

“It couldn’t have been presented to them more clearly, and they didn’t take the necessary steps.”

As Facebook continued its operations, the nationalist campaign reached its logical conclusion. In August 2017, the Myanmar military announced “clearance operations,” launching an all-out assault on the ethnic minority—razing entire villages, perpetrating mass rape, killing thousands, and displacing some 750,000 people.

In evaluating Facebook’s responsibility, Marzuki Darusman, chairman of the IIFFMM, said the company

“substantively contributed to the level of acrimony and dissension and conflict . . . within the public.” And while Facebook praised itself last week for “voluntarily” disclosing information to the U.N. on Myanmar, in 2018 the IIFFMM reported, “The Mission regrets that Facebook has been unable to provide country-specific data about the spread of hate speech on its platform.”

To date, it is unclear whether any national government has been able to access complete information from Facebook regarding its involvement in Myanmar.

Facebook once said that it was a “myth” that Internet.org was about company growth or new revenue opportunities. But in 2018, the same year the IIFFMM noted Facebook’s connection to ethnic violence in Myanmar, Mr. Zuckerberg pitched Internet.org as a success story on an earnings call, presumably to communicate the company’s strong financial trajectory. And if there remains any doubt as to business motive, Facebook’s own Chief Financial Officer has characterized Internet.org as a “long-term growth area” to investors.

Facebook’s role in Myanmar is not anomalous. It is just one example of how the company invests first and asks questions later, apologizing as needed to manage the latest crisis as a public relations problem. As recently as last September, former Facebook employee Sophie Zhang, who worked on countering disinformation, told The New York Times that executives either ignored or were slow to react to her warnings. According to Ms. Zhang, the company was

“largely motivated by PR,” and “the civic aspect was discounted because of its small volume, its disproportionate impact ignored.”

Facebook’s Economic Calculation on Human Rights

Ms. Zhang’s remarks reflect a fundamental problem with Facebook’s new “corporate human rights policy.” The policy masks the true priorities at Menlo Park. To honestly consider whether Facebook can effectively enforce human rights, we have to address the deeper question of what ultimately drives the conduct of American public companies.

About 50 years ago, Milton Friedman proclaimed,

“The social responsibility of business is to increase its profits.”

In the following decades, this principle became the cornerstone of American corporate governance. While some commentators suggest that public companies should prioritize constituencies other than their shareholders, it is far from clear that those companies have departed from profit-maximization in any meaningful way. A complex combination of activist hedge funds, executive compensation, quarterly earnings reporting, and mythology around fiduciary duties drives public companies to prioritize profits over human rights. To grossly simplify the issue, shareholder primacy enables businesses to acquire more capital, sustain operations, and expand to new frontiers, endeavors which often result in real financial, but not social, benefits.

Facebook’s record in Myanmar is illustrative. It is not difficult to understand how Internet.org was squarely within the company’s self-interest. Under Facebook’s business model, the company provides “free” access to its platform to drive up consumption. As its users become active, Facebook obtains more data to leverage into targeted ads. These ads are tailored to audiences with a degree of specificity that cannot be achieved through conventional methods like billboards or television. The more users, the more ad revenue. Thus, Facebook has an economic incentive to dominate the social media markets in developing nations.

Against the foregoing economic benefits, what pressure has Facebook faced on the human rights side of the equation? To the extent the company had a coordinated response to the circumstances in Myanmar, it picked up around 2018, when the U.N.’s investigation began to generate bad press for the company. This was long after researchers and activists first alerted the company to the corrosive effects of its platform. Even with that reputational risk in play, the U.N. had limited success in persuading Facebook to divulge complete information. Without a serious threat of liability, adverse human rights impacts are reduced to this reputational cost—easily managed by a savvy communications staff, often hired from respected political circles.

Ultimately, Facebook is balancing a familiar tradeoff between more capital, on the one hand, and social costs, on the other. The problem is that the company is not forced to internalize those social costs. Nothing about Facebook’s corporate human rights policy alters this fundamental dynamic. And while it may be possible to incrementally change corporate behavior through public pressure, the extent of the change will never be sufficient to address the gravity of the harm.  

Human Rights Enforcement at a Crossroads

Individuals concerned with the success or failure of human rights stand at a crossroads. We can continue to try to persuade companies to voluntarily change their behavior, using whatever channels are available to us to marginally drive up the cost of noncompliance. Or, we can call on nations to construct novel regimes that radically increase the cost of committing human rights violations.

The Myanmar episode suggests that Facebook evaded accountability in large part because there were no entities with robust legal authority to set enforceable human rights standards and to investigate the company’s conduct according to those standards. To that end, transnational programs to stamp out corruption, child labor, and human trafficking may offer a basic blueprint for success. Some states have already begun taking steps in the realm of human rights due diligence. At the same time, because we are dealing with speech environments, governments must exercise extreme caution—building policies that balance the freedom of expression with the protection of the most marginalized members of society. National efforts to use Facebook’s record as a pretext to commit further human rights violations ought to be summarily condemned.

At the international level, Facebook is correct that human rights treaties primarily impose obligations on states, but international criminal law is not so restrained. While corporations themselves cannot be subjected to criminal liability under the Rome Statute, business executives have been punished for their complicity in international crimes. The International Criminal Court could further investigate the relevant legal issues and fashion industry standards that advise social media companies on the best practices to mitigate risk.

Pursuing true reform will be one of the greatest challenges of our generation, but it holds more promise for the future of human rights than a celebration of Facebook’s latest policy statement. When the drafters of the Universal Declaration of Human Rights outlined a “common standard of achievement for all peoples and all nations,” I suspect they omitted corporations for a reason. It is laudable that some officials at Facebook are trying to steer the company in a better direction. But the stakes are too high, and a corporate policy does not undo the past. We cannot trust Facebook to enforce human rights.
