The Best Way to Regulate Disinformation

[David L. Sloss is the John A. & Elizabeth H. Sutro Professor of Law at Santa Clara University. He is currently writing a book about information warfare and social media, to be published by Stanford University Press.]

Disinformation on social media poses a threat to liberal democracies around the world. Recent decisions by Twitter and Facebook to limit distribution of a New York Post article about Hunter Biden have sparked renewed controversy about how best to regulate disinformation on social media. Americans are rightly wary of letting the federal government act as the arbiter of what is true and false. However, there are equally good reasons to question the institutional competence of social media platforms to be the ultimate judges of truth and falsehood.

We need a third option. The best approach is for Congress to enact a law creating a separation of powers system that divides power among the government, social media companies, and a publicly funded, non-partisan, nonprofit organization. (The Open Technology Fund is one example of such a nonprofit organization.) The law should be designed to target bad actors, in addition to false content, because a small number of bad actors are responsible for a large proportion of the false and misleading content that pollutes the information environment on social media. Alex Jones is one of the most notorious examples of prolific information polluters.

The basic division of power should be as follows. The nonprofit organization should be entrusted with discriminating between truth and falsehood. Social media companies should be responsible for issuing warnings to users who disseminate information that the nonprofit group has labeled as false. The companies should also be responsible for identifying “persistent offenders”—those who continue to disseminate disinformation after they have been warned. The government should be empowered to order the temporary suspension of persistent offenders from social media platforms, but only after they have repeatedly flouted warnings from social media companies.

Consider each piece of the triad separately. The nonprofit organization should be staffed by professional journalists and should operate in compliance with the principles established by the International Fact-Checking Network. The Swedish organization Faktiskt—which is operated by independent media organizations but funded in part by the Swedish government—provides one helpful model of such an organization. Guaranteed government funding is important to ensure that the nonprofit group does not have to spend time on fundraising and that it is not beholden to any private donors.

At least initially, the nonprofit group should have a mandate to conduct fact-checking in only two substantive areas: scientific misinformation and election-related misinformation. In the long run, the group’s mandate could potentially expand to other areas, but these are the two areas that have generated the greatest controversies and they are arguably the two areas where disinformation poses the greatest threat to liberal democracies. The definition of “electioneering communication” in 52 U.S.C. § 30104(f) provides a helpful starting point to determine what “election-related” information is covered, but the definition should be modified to address social media. Both social media companies and relevant government officials could refer suspected cases of disinformation to the nonprofit group for fact-checking.

The nonprofit organization should alert relevant social media companies and the Federal Communications Commission (FCC) whenever fact-checkers determine that a particular story circulating on social media is false or misleading. The companies should be legally obligated to issue warnings to users who are spreading that story on their platforms. That obligation should be subject to a de minimis threshold so that users who share the story with fewer than three hundred people, for example, would not be subject to mandatory warnings. Companies should also be legally obligated to maintain a record of all the warnings they issue so that they know how many warnings they have issued to a particular user.

The law should establish a definition of “persistent offender.” The simplest approach would be to define the term with respect to the number of warnings a person receives from a particular company in a specific time period. Under this approach, for example, any person who receives more than three warnings in a month from Twitter would be labeled a persistent offender. The law should obligate social media companies to suspend persistent offenders from their platforms. A first-time persistent offender might be subject to a one-year suspension; a second-time offender could be subject to a three-year suspension; and a third-time offender could be subject to a five-year suspension. The specific numbers are less important than the general idea: repeat persistent offenders would be subject to progressively longer suspensions.
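
To make the mechanics concrete, here is a minimal sketch, in Python, of how a platform might count warnings and apply the escalating suspension schedule. The class, method, and field names are hypothetical, and the numbers simply mirror the illustrative figures above (more than three warnings in a month; one-, three-, and five-year suspensions); the proposal does not prescribe any particular implementation.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sketch of the warning/suspension logic described above.
# Thresholds mirror the illustrative numbers in the proposal.
WARNINGS_PER_MONTH_LIMIT = 3   # more than this in one month -> persistent offender
SUSPENSION_YEARS = [1, 3, 5]   # first, second, and third-and-later offenses

class WarningLedger:
    def __init__(self):
        self.warnings = defaultdict(list)      # user_id -> timestamps of warnings issued
        self.offense_count = defaultdict(int)  # user_id -> prior persistent-offender findings

    def record_warning(self, user_id: str, when: datetime) -> None:
        """Log a warning issued after fact-checkers flag a story as false."""
        self.warnings[user_id].append(when)

    def is_persistent_offender(self, user_id: str, now: datetime) -> bool:
        """More than three warnings within the past month triggers the label."""
        month_ago = now - timedelta(days=30)
        recent = [t for t in self.warnings[user_id] if t >= month_ago]
        return len(recent) > WARNINGS_PER_MONTH_LIMIT

    def suspension_length(self, user_id: str) -> timedelta:
        """Escalating suspensions for repeat persistent offenders."""
        self.offense_count[user_id] += 1
        idx = min(self.offense_count[user_id], len(SUSPENSION_YEARS)) - 1
        return timedelta(days=365 * SUSPENSION_YEARS[idx])

The point of the sketch is simply that the rule is mechanical: a running count of warnings within a rolling window, plus a lookup table of suspension lengths, is all the logic the statute would require platforms to apply.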

The law should empower the FCC to monitor social media companies to ensure that they implement their obligations to issue warnings and suspend persistent offenders. To enable the FCC to perform this function, the law should require social media companies to file monthly reports with the FCC containing information about the identities of people who have received warnings, the number of warnings for each person, and the identities of any people who have been suspended. The law should also empower the FCC to impose fines on companies that violate their legal obligations to issue warnings and suspend persistent offenders.
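
For illustration only, the monthly report could be as simple as the following record structure; the schema and field names are hypothetical and are not drawn from any existing FCC filing format.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical schema for the monthly compliance report described above.
@dataclass
class MonthlyComplianceReport:
    platform: str                      # the reporting social media company
    reporting_month: str               # e.g., "2021-03"
    warnings_per_user: Dict[str, int] = field(default_factory=dict)  # user identity -> warnings issued
    suspended_users: List[str] = field(default_factory=list)         # identities of suspended persistent offenders

The FCC's oversight role would then consist largely of checking these filings against the companies' enforcement actions and fining companies whose reports reveal unissued warnings or unserved suspensions.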

One potential loophole in the proposed regulatory system is that persistent offenders could evade suspension by creating fictitious identities to open new accounts. To block this circumvention strategy, the law should obligate social media companies to require customers to provide their real names and certain other identifying information when they open new accounts. (I refer to this information as “account registration information.”) Social media users should have a legal right to use pseudonyms for communication on social media, but they should still be required to provide their real names at the account creation stage. These two requirements—the obligation to provide a real name to create an account and the right to use a pseudonym when operating that account—are entirely compatible with each other, and with First Amendment protections for anonymous speech.

The law should require companies to share account registration information with the FBI so that the FBI can confirm that “John Doe,” for example, is a real person, and not a fictitious identity created to circumvent the system. The requirement to share information with the FBI is important because operational collaboration between the government and social media companies is the best way to distinguish between real persons and fictitious users. That collaboration is already happening on an informal basis.

Privacy advocates may object that a legal requirement for social media companies to share account registration information with the FBI would lead to a massive invasion of user privacy. That objection overlooks the extent to which such information sharing is already happening. The statute and implementing regulations should include rigorous safeguards for data protection and informational privacy to prevent the government from exploiting the system to spy on innocent social media users. Indeed, codifying the existing informal collaboration between the government and social media companies, together with those safeguards, could actually reduce the risk that information sharing between companies and government officials will infringe individual privacy rights.

Civil libertarians may also object that the system I am proposing penalizes people for exercising their constitutionally protected First Amendment rights. The Supreme Court decision in United States v. Alvarez provides some support for this position. Alvarez affirms that content-based restrictions on false speech are subject to strict scrutiny. However, my proposal is distinguishable from the law at issue in Alvarez because that law involved criminal penalties, whereas the proposed rules merely involve temporary suspension of the privilege to operate a social media account.

The Supreme Court decision in Holder v. Humanitarian Law Project provides a road map for how Congress could design a law to survive strict scrutiny. When applying strict scrutiny in Holder, the Court showed substantial deference to factual findings by the political branches on empirical questions. Thus, to help ensure the law’s validity, Congress should make explicit findings that: 1) dissemination of false and misleading information on social media poses a significant threat to the integrity of our democracy; and 2) temporary suspension of social media privileges is the least restrictive means to counter that threat. If Congress supports these findings with materials in the legislative record, the Supreme Court would probably conclude that the legislation satisfies the stringent requirements of strict scrutiny.

The problem of disinformation on social media highlights one of the great ironies of the current information age. “Eternal vigilance is the price of liberty.” The protection for individual freedom that is a core liberal norm cannot exist without properly functioning democratic systems. Unrestricted free speech on social media threatens to undermine the integrity of our democratic system, which is the foundation for robust protection of individual freedom. Therefore, ironically, we must adopt seemingly illiberal policies—namely, restrictions on free speech—to protect and promote the liberal commitment to individual freedom.
