
Antimonopoly and Artificial Intelligence


Ganesh Sitaraman (@GaneshSitaraman) is the New York Alumni Chancellor's Chair in Law at Vanderbilt Law School and the director of the Vanderbilt Policy Accelerator for Political Economy and Regulation.

In the last few months, two major events in the world of artificial intelligence have raised the temperature in the AI arms race. The first was technical: the launch of the Chinese firm DeepSeek’s AI model. American AI giants like OpenAI, Microsoft, Meta, and Google were caught flat-footed by a small Chinese firm that met or exceeded their AI performance with a fraction of the resources. The launch was even called a “Sputnik moment” for American AI. The second event was political: Vice President J.D. Vance gave a speech at the AI Action Summit in Paris where he criticized “excessive regulation,” arguing that it prevents new entrants and thus stifles innovation.

Taken together, these events might lead policymakers to conclude that the best approach to governing AI is simply not to govern it at all. On this view, helping the biggest American AI companies compete internationally means not slowing them down. But the simple story of unregulated national champions fails to account for one of the critical drivers of long-term innovation: competition.

Talking about competition in AI is difficult because people have different definitions of AI. Rather than talk about “AI” in some general sense, we should focus on the supply chain that powers AI: the AI tech stack. To simplify slightly, the AI tech stack can be broken down into four layers. The first layer is hardware: the semiconductor chips needed to power the entire ecosystem. The second is cloud infrastructure, which consists of the computers, servers, warehouses, and cables needed to train and run AI models. The third is the model layer, which includes data, model hubs, and the models themselves. The final layer is applications: the interfaces, such as ChatGPT and Claude, through which most consumers interact with AI.
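To make the layering concrete, here is a minimal expository sketch in Python. This is my illustration only, not a formal taxonomy: the layer names and example firms are drawn from the discussion in this piece, and the lists are not exhaustive.

```python
# Illustrative sketch of the four-layer AI tech stack described above.
# Layer names and example firms follow this article's discussion; the
# mapping is expository, not exhaustive or authoritative.
AI_TECH_STACK = [
    {
        "layer": "hardware",
        "role": "semiconductor chips powering the ecosystem",
        "examples": ["ASML (lithography equipment)",
                     "TSMC (fabrication)",
                     "NVIDIA (chip design)"],
    },
    {
        "layer": "cloud infrastructure",
        "role": "computers, servers, warehouses, and cables for "
                "training and running models",
        "examples": ["AWS", "Microsoft Azure", "Google Cloud Platform"],
    },
    {
        "layer": "models",
        "role": "data, model hubs, and the models themselves",
        "examples": ["OpenAI", "Meta", "Google", "Anthropic", "Mistral"],
    },
    {
        "layer": "applications",
        "role": "interfaces through which consumers interact with AI",
        "examples": ["ChatGPT", "Claude"],
    },
]

# Walk the stack from the bottom (hardware) to the top (applications).
for entry in AI_TECH_STACK:
    print(f"{entry['layer']}: {entry['role']} "
          f"(e.g., {', '.join(entry['examples'])})")
```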

Perhaps most strikingly, as Tejas Narechania and I argue in a new article published in the Yale Law & Policy Review, some layers in the tech stack are already monopolies or oligopolies. At the hardware layer, there is only one firm in the world—the Dutch firm ASML—that manufactures the advanced photolithography equipment necessary for making cutting-edge semiconductors. When it comes to the chips themselves, Taiwan Semiconductor Manufacturing Company (TSMC) is far and away the dominant manufacturer, and NVIDIA is the dominant designer of AI chips. At the cloud layer, there is an oligopoly of three firms—AWS, Microsoft Azure, and Google Cloud Platform. The model layer is more competitive, with OpenAI, Meta, Google, and Anthropic leading the way, plus international entrants like France’s Mistral. But even the newer firms are well-resourced juggernauts. The application layer is, at least for the moment, competitive, with some of the same companies vying for customers alongside others that rely on their technology.

Monopolies and oligopolies come with significant downsides. Monopolies tend to charge higher prices and offer lower-quality products and services. They are often less innovative because they face no competitive threat. Monopolies can also use their power to thwart future competition by purchasing potential rivals, engaging in predatory pricing, or designing products that lock in consumers. Perhaps most importantly, when monopolies provide critical infrastructure, they can leverage that power into downstream markets in ways that prevent competition, innovation, and new business investment. For example, a cloud infrastructure firm that also offers its own models and applications could preference its vertically integrated services over those of competitors. Oligopolistic markets can operate in similar ways.

Beyond these economic consequences, monopolies and oligopolies also raise political and national security concerns. A firm like TSMC, with its dominance of advanced chip manufacturing, is a single point of failure for AI: a war, weather event, or other crisis could constrict the chip supply, raising significant resilience concerns. Powerful firms also regularly attempt to influence and wield power over policymakers, skewing both markets and government in their own favor. They may also seek to block regulation and redistribution, thereby preventing the public’s preferences from taking effect.

Addressing the challenge of monopolies and oligopolies at critical layers of the AI tech stack is thus important for competition, innovation, resilience, security, and representative democracy. Though this approach may not map neatly onto the rhetoric of regulation versus deregulation, policymakers should apply antimonopoly tools to the AI sector. First, they should ensure that industrial policy efforts—like public investment in semiconductors—do not entrench only the existing players in the hardware layer.

Second, policymakers could apply tools from the American tradition of networks, platforms, and utilities (NPU) law. These tools include adopting structural separations between layers in the AI tech stack, so firms are not vertically integrated; applying nondiscrimination and equal access rules to infrastructural resources so that small businesses aren’t charged higher prices or excluded from service; and establishing interoperability rules that make it easier for new entrants to use key resources and switch between firms.  

Third, policymakers should pursue public options for cloud infrastructure, so that the public sector isn’t dependent on private contracts with a small number of oligopoly providers. As has long been true in the defense context, a lack of competition over contracts can lead to higher prices, worse service, and delays in procurement. Indeed, countries around the world are already exploring different models for creating “public AI.” 

In Europe and the United States, the debate over AI regulation has largely been about safety, but the future of AI innovation and progress will also be determined by market structure. If policymakers let what Eric Posner has called an “AI octopus” emerge, it will likely strangle innovation while also threatening economic resilience and representative democracy.