Elizabeth Warren and Josh Hawley might seem like unlikely allies. On the DW-Nominate ideological scale for economic issues, they occupy opposite poles, with Warren further left than even Bernie Sanders and Hawley firmly on the far right. But on one economic issue they share a common stance: both are strong proponents of breaking up big tech.
Their reasons overlap too. Both are wary of giants like Amazon that operate marketplaces for third-party sellers while also peddling their own products. And while Hawley voices standard right-wing complaints about “censorship” by “woke” firms, Warren, too, has clashed with big tech over content moderation.
The Warren-Hawley stance (!!) carries more than a whiff of the Neo-Brandeisian approach to antitrust law, which emphasizes the protection of democracy, rather than mere consumer welfare, as the goal of the competition regime. On this view, companies like Meta and Alphabet control so many of the channels for political speech that any partisan bias or self-interest in their content moderation policies damages the autonomy of the democratic public.
This narrative points to an important underlying problem. Yet I’m not entirely convinced by the proposed remedy. Antitrust breakups work best when there’s a clear conflict between public and company interests. Consider privacy: we have an interest in not having our personal data abused to predict our behavior and sell us things. The companies have an interest in hoovering up more data. Antitrust breakups (combined with other regulations, like restricting data transfer between companies) could help shift the balance of power in favor of users and away from the companies.
However, company and public interests often converge. Consider the malicious political disinformation propagated by the Russian “Internet Research Agency,” or the hate-filled Burmese military propaganda on Facebook that fueled the ethnic cleansing of the Rohingya. Companies have an interest in keeping such content off their platforms: no company wants to be known as the place people go to be radicalized into murderous misogyny or persuaded that horse dewormer cures Covid.
The blunt instrument of “break up big tech” could actually exacerbate those problems. Bigger, more established companies are better equipped to moderate content effectively. They have sophisticated and experienced policy teams, large datasets with which to train machine learning models, and the resources to moderate content globally, in numerous languages. Moreover, big companies that serve mass markets have incentives to adopt content moderation policies that are broadly acceptable to the public. Not only do big platforms need to maintain products palatable to mass-market consumers, but the big advertisers who provide the revenue necessary to run a big social media company tend not to appreciate their logos being plastered next to white nationalist videos. YouTube, for example, suffered advertiser boycotts twice: in 2017, when companies like Pepsi and Starbucks discovered their advertisements running alongside hate speech, and again in 2019, when Nestlé and Disney found their ads placed on videos being exploited by pedophiles.
By contrast, small companies serving niche audiences lack such incentives. Parler and Truth Social don’t serve mass-market audiences and presumably don’t need to please Pepsi to keep the lights on. Indeed, in the weeks before January 6, Amazon, Parler’s web host, begged the company to do something about the violent threats appearing on its platform, and Parler took no action.
Existing proposals to break up platforms, from both the left and the right, do not adequately account for the ways we can recruit company interests to serve the public interest.
In Favor of Platform Democracy
In my new book, The Networked Leviathan: For Democratic Platforms, I argue for keeping tech companies intact but aligning them better with the global public interest by integrating the public directly into their governance systems. Essentially, I’m proposing a kind of social democracy for platforms. By creating institutions that allow the public to exercise direct authority over the rules and enforcement processes of major technology companies, we can better ensure that those rules and processes serve our interests and are carried out effectively.
This case rests on two big ideas.
The first is a knowledge gap. American companies often lack good ways to learn about the activity they’re trying to govern, especially when operating globally. They’re culturally clueless, or they simply underinvest in learning what is happening in countries that lack political or economic leverage over their leadership.
For example, in Myanmar, Facebook lacked any effective way for local people to alert the company about the genocidal incitement being spread on the platform.
Democratic shared governance could give ordinary people both the power and the incentive to share their local knowledge with company personnel. Imagine how things might have gone differently in Myanmar if a group of ordinary Burmese people, including Rohingya, had had the capacity to hit an emergency brake on content distribution in their country, and hence to force Facebook personnel to attend to the fact that a critical mass of people perceived a crisis and to respond to their concerns.
The second central idea concerns short-term versus long-term interests. Company leaders often face a conflict between the long-term objective of maintaining a healthy platform ecosystem and the lure of short-term profits. The public has an interest in helping companies follow their long-term interests.
For example, both Warren and Hawley criticize Amazon’s practice of selling products that compete with its “marketplace” sellers. But Amazon actually had policies to mitigate the risk of abuse, like prohibiting its private-label employees from using marketplace data to compete with third-party sellers. Its employees simply broke those policies. The tension here is easy to identify. On the one hand, the company has a long-term incentive to give sellers protections that make the marketplace worth using: no one wants to sell their products on a platform that will use their sales data against them. On the other hand, private-label employees have a short-term incentive to juice their own sales figures to look successful to the boss. That conflict is why Amazon’s internal rules failed.
More troublingly, pressure from powerful political figures can create dangerous short-term incentives in social media. During the Trump years, Facebook executives protected Breitbart News from the company’s misinformation policies; as one put it, “do you want to start a fight with Steve Bannon?” The pressures in other countries, where officials can lock up company personnel or ban company products, can be even more severe.
I argue that ordinary people, who don’t have an economic stake in the company’s bottom line, are better situated to help companies resist these temptations. Imagine how much more resilient Facebook might have been against Steve Bannon’s bullying if the “whitelist” of accounts exempt from misinformation policies had been subject to the scrutiny not just of corporate lobbyists but also of citizen juries.
Platform Socialism?
That being said, there is still a place for government in reining in big tech. Ultimately, while I think that empowering ordinary people would be in the best interests of technology companies and their stockholders, corporate leaders might not see it that way. After all, one of the most prominent big tech platforms is currently owned by an erratic billionaire with an emotional regulation problem who doesn’t seem to respond to economic incentives at all. One of the reasons “every billionaire is a policy failure” is that at a certain level of wealth, people stop responding even to the good kinds of economic incentives, and it becomes possible to blow $44 billion on a kind of weird alt-right free-speech-absolutist performance art.
Accordingly, the governments of the world should use their regulatory leverage to nudge technology companies toward platform democracy. For example, governments could radically expand and diversify the groups of people who participate in company decisions by forcing companies to treat their legions of offshored content moderators as real employees with a say in what their workplaces produce. They could borrow a page from Sarbanes–Oxley and require companies to build internal firewalls between the employees who do lobbying and the employees who set content policy, en route to injecting the rest of us into the latter function. Antitrust law could potentially do some work here too, albeit in a supporting rather than a primary role. No less a traditional antitrust scholar than Herb Hovenkamp has suggested that antitrust could be deployed not so much to break up big tech companies as to reorganize their management structures to enhance their accountability.
At the end of the day, genuinely empowering ordinary people could do more than make companies better at work, like content moderation, where their interests and ours are aligned. It could also force companies to revise their self-understanding of where their interests lie. Empowered ordinary people could, for example, pressure companies from the inside to find business models that don’t rely on surveillance advertising. For this reason, the terminus of platform democracy might just be a kind of platform socialism. That sounds pretty good to me.