
Exit, Voice, and the First Amendment Treatment of Social Media


Robert Post is a Sterling Professor of Law at Yale Law School. Disclosure: Post is on the Board of Trustees tasked with ensuring that Facebook's Oversight Board remains independent and effective at achieving its stated purpose.

This post is a response to a previous post by Genevieve Lakier and Nelson Tebbe.

I.

The rich and useful essay by Genevieve Lakier and Nelson Tebbe is certainly correct to conclude that the First Amendment, as presently interpreted, is “poorly equipped” to respond to “the threats to freedom of speech that result from private control of the mass public sphere.”

I confess, however, that I am confused about whether the public function doctrine of Marsh v. Alabama is a helpful lens through which to conceptualize “the great deplatforming.”  Marsh holds that in certain circumstances the Constitution will prevent restrictions on First Amendment freedoms, even if these restrictions are imposed by private parties.

As Lakier and Tebbe are well aware, First Amendment doctrine is now interpreted by courts in a ferociously libertarian and deregulatory way. Were Marsh to apply to entities like Facebook or Twitter, First Amendment doctrine would almost certainly invalidate even the minimal content moderation policies that these social media platforms currently deploy. As a consequence, the platforms would most likely be overrun with all forms of deplorable speech; they would be inundated with endlessly abusive communications and with misinformation of all kinds.

Lakier and Tebbe know this, and they therefore pivot sharply to explain that their ultimate end is actually “to imagine a world in which legislatures might be constitutionally authorized, even constitutionally required, to ensure that users of the platforms enjoy minimal due process protections against removal, or to require some degree of transparency by the social media companies” or “to take other steps to ensure the vitality of speech in the digital public sphere.” It is not clear to me, however, how this turn toward legislative power can solve the problem of atrocious speech on large social media platforms.

In the world imagined by Lakier and Tebbe, platforms like Facebook or Twitter are not themselves constitutionally protected speakers. The only constitutionally relevant speakers are the persons who use these platforms. Essentially, therefore, Lakier and Tebbe invite us to characterize platforms as common carriers, which, like the telephone company, are merely vehicles through which persons speak.

This is an important conceptual shift, because in such circumstances Lakier and Tebbe are correct to conclude that legislative imposition of “minimal due process protections” would likely be constitutional. These requirements would guard against arbitrary deplatforming. It is extremely unlikely that they would be invalidated as unconstitutional “takings” of the private property of public utilities.

Once we make this shift, however, legislative efforts to impose content moderation policies would almost certainly be unconstitutional under current doctrine. Such policies would be condemned as impermissible content or viewpoint discrimination. Just as Congress cannot now impose content moderation policies on common carriers like telephone companies, Congress could not under Lakier and Tebbe’s proposal impose such policies on platforms like Facebook and Twitter. We would thus be left with social media platforms that are constitutionally compelled to broadcast intolerable and oppressive forms of speech, because the First Amendment would protect such speech from legislative regulation.

Is that the vital digital public sphere envisioned by Lakier and Tebbe? If not, I remain confused about how their proposal advances the ball. If Lakier and Tebbe are concerned primarily to justify minimal due process protections against arbitrary deplatforming, they have offered a useful solution. But if the real problem they wish to solve is the potential for atrocious communications in the digital public sphere, their proposal may exacerbate the problem.

II.

I do agree, however, that it is important to understand when law ought to regard social media platforms as speakers and when law ought to regard them instead as vehicles for the transmission of third-party speech. Because Marsh is a blunt and clumsy tool for this purpose, I suggest that antitrust law may be better suited to the task. Indeed, properly considered, antitrust policy may have surprisingly significant constitutional implications.

The question is how we ought constitutionally to characterize publishers who distribute third-party speech. Examples would be newspapers, magazines, music companies, blogs, social media platforms, etc. Call these publishers “speech aggregators.” In any given case, we can ask whether a speech aggregator is itself a speaker, or whether it is simply a passive vehicle for distributing the speech of third parties.

In many cases, speech aggregators define their own speech by suppressing speech they do not wish to distribute. Thus magazines create their own editorial identity by deciding which articles to publish and which to reject. Newspapers do the same. Record labels create their niche by determining which musicians to push and which to repudiate. The LPE Project expresses its unique identity by determining which posts to carry and which to bury. Social media platforms create their own distinctive communities by determining (through content regulation) which expressions to distribute and which to suppress.

This implies that in many contexts, and for many speech aggregators, speaking and censoring are simply two sides of the same coin. To prevent some speech aggregators from engaging in content moderation is to prevent them from speaking at all. This point is typically captured in First Amendment doctrine under the heading of “editorial autonomy.”

Lakier and Tebbe invite us to ask when speech aggregators should constitutionally be regarded as common carriers. No one, for example, imagines that The Atlantic is simply a common carrier passively distributing the authors whom it publishes. So what exactly is the difference between The Atlantic and giant social media platforms? Marsh does not help us ascertain the difference. A better tool might lie in Albert Hirschman’s framework of exit and voice.

In a well-functioning market, the audience for a speech aggregator retains the possibility of exit. If a reader does not like The Atlantic’s choice of articles, she can cancel her subscription and sign up instead to receive The Boston Review. This is another way of saying that The Atlantic’s content discrimination is disciplined by the market.

From a political economy point of view, we might hypothesize that this kind of market discipline virtually defines a speech aggregator as private, meaning that the aggregator ought not to be subject to First Amendment restraints like the prohibition against content discrimination. This would be true whether or not The Atlantic is an influential or powerful speaker. So long as the audience for The Atlantic is free to exit and to follow other speech aggregators in the relevant market, the magazine ought to be characterized as a speaker and hence as exempt from the constitutional prohibition against content discrimination.

How should we characterize large social media platforms under this test? Do their audiences retain the possibility of exit? If users of Twitter do not like Twitter’s content moderation rules, can they leave and join a different social media platform with different rules? If there is not a well-functioning market, so that exit is not a real possibility, we may conclude that Twitter should constitutionally be categorized as the kind of common carrier suggested by Lakier and Tebbe. It would follow that Twitter would be subject to the procedural due process restrictions discussed by Lakier and Tebbe.

But were Twitter to be categorized as such a common carrier, it would also follow that Twitter would be prohibited from imposing content moderation policies. For reasons I have already suggested, this option would likely lead to the degeneration of content on Twitter. Twitter would become a Hyde Park, rapidly spiraling down into a condition of communicative rot.  The framework of antitrust law suggests that we can avoid this spiral by using legislative authority to ensure the possibility of meaningful exit.

Given the immense network effects that characterize digital platforms, and given the ways that the large social media platforms are virtually markets unto themselves, it is not clear to me how antitrust law can theoretically create well-functioning markets that would guarantee the possibility of meaningful exit. But assuming for the moment that this goal is achievable, antitrust law could be used to mark the constitutional boundaries between a digital world in which the application of content moderation policies to media platforms is unconstitutional, and a digital world in which competing platforms remain free to impose their own distinct content moderation policies, the distribution of which would be determined by market forces.