
Seven Reactions to Biden’s Executive Order on Artificial Intelligence


John Mark Newman (@johnmarknewman) is Professor of Law at the University of Miami School of Law.

Veena Dubal (@veenadubal) is Professor of Law at the University of California, Irvine.

Salomé Viljoen (@salome_viljoen_) is Assistant Professor of Law at the University of Michigan Law School.

Ifeoma Ajunwa (@iajunwa) is the Asa Griggs Candler Professor of Law at Emory University School of Law.

Nikolas Guggenberger (@nikenberger) is Assistant Professor of Law at the University of Houston Law Center.

Elettra Bietti (@Elibietti) is Assistant Professor of Law and Computer Science at Northeastern University School of Law.

Jason Jackson (@JasonBJackson) is Associate Professor of Political Economy in the Department of Urban Studies and Planning at MIT.

JS Tan (@organizejs) is a PhD student in the Department of Urban Studies and Planning at MIT.

This past month, President Biden issued an executive order on artificial intelligence that addresses a wide array of concerns about the nascent technology: risks to national security, the use of deceptive AI-generated content, market concentration, and much else. What should we make of this hodgepodge of directives? Does it represent a meaningful step in the right direction, or is it merely whistling past the AI graveyard? To help us decide, and to begin to sort through the thicket of detail contained in the order, we asked seven experts to share their initial reactions.

John Mark Newman

Creative, smart regulatory proposals for how to “deal with” generative AI are suddenly everywhere. President Biden’s recent executive order on AI is among the better ones, and, given the pride of place it affords labor, the previous administration would never have issued anything like it. Biden’s order calls out labor unions as important stakeholders, mentions workers well over a dozen times, and even identifies collective bargaining as essential for developing prosocial AI applications.

But a fundamental, first-order question remains unanswered by the White House, regulators abroad, think tanks, and academics alike. Is it fair—or unfair—to copy massive amounts of creative outputs, use the labor of others without compensation or attribution to develop a generative-AI application, and then deploy that app to take attention and business away from the copied workers?

Widespread, indiscriminate taking for commercial use, without permission or payment, smacks of exploitation. An orthodox economist might call it “free riding” and a source of “market failure.” Historically, the legal system would have called it “unfair competition.” That’s exactly what the Supreme Court called it in a 1918 ruling that involved the International News Service’s widespread scraping and republishing of AP wire stories. INS was not the first case involving such conduct, nor was it the last. And although Erie subsequently abrogated INS (a federal common-law case), rich bodies of state caselaw continued to develop this as a sub-type of tort: “misappropriation.”

Misappropriation doctrine rests on a universally recognized moral proposition: it is wrong for the powerful to take and profit off others’ work without permission, payment, or attribution. Judicial opinions often use a biblical metaphor: misappropriators “reap where they have not sown.” And with new-wave federal and state regulators expressing an interest in fulfilling their statutory mandates to go after unfair methods of competition, such conduct may be a ripe target for law enforcement. While any such enforcement action would candidly entail a degree of litigation risk, this fundamental question—the fairness of the underlying business model—is well worth answering while we still have a chance.

Veena Dubal

President Biden promised to be the most pro-worker President in U.S. history. However, as his recent Executive Order on AI illustrates yet again, his administration has done little to grow direct workplace protections or to improve enforcement of existing employment and labor laws.

Since Biden took office in 2021, digitalized worker surveillance has skyrocketed. Prior to the pandemic, roughly 30% of large U.S. companies used some form of digital monitoring technology to track their employees. Two years later, fueled by companies’ growing desire to control remote workers and a booming market for surveillance tech, that figure had climbed to roughly 60%. The most common technologies track when workers log on and off, who they are communicating with, where they are going, what they are saying, and what content they are accessing and engaging with on their computers. This data, extracted from workers’ labor, can then be fed into algorithms to determine their pay, to evaluate their work, to set quotas, to determine unionization risks, and to terminate them. The data can also be leveraged not only by a worker’s present employer, but also by future ones, impacting economic mobility over a lifetime.

These dangers are grave, and they are not hypothetical. Deeply engaged scholarship, journalism, and the accounts of brave workers have made clear that the unmitigated use of digital technologies in the workplace—including AI—is already violating workers’ autonomy and dignity, as well as impeding their ability to organize and grow worker power across the economy.

And yet Biden’s AI Executive Order (which follows more than a year after a Blueprint for an AI Bill of Rights) does little more than acknowledge that bad things are happening and direct the federal agencies to develop guidelines, principles, and best practices to diminish these problems. Unlike in other areas, like national security and public health, the AI EO neither proposes nor calls for specific workplace regulations.

This omission is a sorely missed opportunity to take decisive action before such surveillance techniques become culturally normalized across sectors. In many instances, with regard to digitalized surveillance at work, there is no “right balance” to be struck. The harms are so unequivocal that much of workplace AI does not need legal safeguards or restrictions but—as the EU AI Act draft acknowledges with regard to biometric surveillance and emotion recognition software—affirmative prohibitions.

Salomé Viljoen

In terms of funding independent, non-industry-led AI research, there’s a lot to like about the Executive Order’s overall approach, which directs the NSF to implement a pilot program of the National AI Research Resource (NAIRR). NAIRR is meant to be a federal system that supports AI research outside the confines of private industry by equalizing access to computational resources, data, tools, and user support—and thus liberates such research from private imperatives of commercial value.

As the recent snafu over OpenAI shows, ensuring the existence of independent resources for AI research and independent AI researcher pipelines is a worthy public aim. The former Board of OpenAI is the latest in a long lineage of Silicon Valley technologists who have learned the hard way that aspiring to do good and turning a profit don’t fit very well together. To paraphrase Evgeny Morozov, the private AI development model has it all backwards: investors fund technologies that are (bad) answers in search of social questions. While these digital technologies were not developed for and thus do not solve real social ills, they exacerbate existing problems and create plenty of new ones. To create technologies that are not imposed on society to enrich the few, but that instead are designed to address pressing social needs or to advance public economic goals, requires a rather different way of organizing how AI is developed and applied. And developing a public research alternative for AI is a first step in the right direction.

Tellingly, however, the downfall of OpenAI’s (admittedly flawed) public vision may also be NAIRR’s: access to the compute resources essential to training AI models is expensive. The NAIRR task force report notes that the federal system will not be centrally operated and/or publicly owned—instead, services will be provided by “partner resource providers.” The report allows for—even at times seems to endorse—non-commercial resource providers where possible. But given the high costs involved with building and operating computing resources, it is almost certain that NAIRR’s operators will need to contract with one or more of Microsoft (Azure), Amazon (Amazon Web Services), or Google (Google Cloud), which together make up 65% of the cloud computing market, to obtain the computing capacity needed to satisfy NAIRR’s goals.

This may seem a small detail, but access to compute is the fateful step that led OpenAI to partner with Microsoft, much of whose $10 billion investment will be in the form of computing time on Azure. Without the funding and the commitment to develop the actual material infrastructure needed to support a truly public pipeline for AI research and development, NAIRR risks becoming dependent on the same concentrated forms of private power that underpin private AI development, and continuing an industrial policy designed around public-private partnership that does too little to build true public AI capacity and capability.

Ifeoma Ajunwa

President Biden’s Executive Order on AI does not adequately safeguard workers’ rights and fails to highlight the importance of AI-related education in the legal profession.

Regarding workers’ rights, the Order sets forth a promising mission: “In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.” However, in pursuit of that mission, the Order does not specify roles for the Equal Employment Opportunity Commission (EEOC) or for the National Labor Relations Board (NLRB). The Order does not empower those agencies with the legal teeth necessary for effective AI governance.

What should the Order have directed with respect to worker rights? The EEOC should be directed to create and disseminate a worker’s bill of rights that delineates the limits of AI-enabled worker surveillance. Furthermore, given that AI is increasingly becoming a standard part of hiring practice, the EEOC should be directed to set up an audit system for automated hiring systems. Additionally, the NLRB should be empowered to enforce collective bargaining by worker unions to set parameters for how AI technologies will be used in all parts of the employment experience, from hiring to evaluation to firing. The Administration should also develop a framework for compensating soon-to-be-displaced workers whose “captured capital” is being used to automate their workplaces.

Finally, looking beyond the American workplace, the administration must recognize that the lack of extraterritoriality of labor laws means that AI front-line workers in places like Kenya do not enjoy the same labor protections for a fair wage or mental health assistance as their American counterparts. The Biden administration should obtain global fair wage and workplace safety agreements from tech companies to ensure that the pursuit of AI innovation does not become an opportunity to exploit workers in still-developing countries.

Turning to AI-related education, the Order recognizes its importance but does not go far enough in instituting measures to ensure adequate AI expertise as an indispensable part of AI governance. Given the importance of this nascent technology to a wide variety of industries and, increasingly, everyday life, the federal government should direct the NSF to provide funds for implementing AI education as an integral part of all major professional education, including, and especially, law. The lack of lawmaker expertise on AI issues during several congressional hearings speaks to the urgency of this issue. It is also important that underrepresented populations are enabled to pursue AI-related education. This will help to address the disproportionate adverse impacts on minorities seen in AI for hiring, AI in medicine, AI in the financial system, AI in education, and beyond, which are the proximate result of AI developer blind spots.

Nikolas Guggenberger

Regulators and legal scholars have recently expressed concerns that the technological characteristics of artificial intelligence (AI), as well as proposed safety regulations, may undermine fair competition and result in a highly concentrated AI market. By contrast, with few notable exceptions, little attention has been paid to the inverse relationship—that the form of industrial organization we pursue will define the risks of AI and the possibilities for mitigation.

Some risks are especially prone to materialize in concentrated markets: regulatory capture and revolving doors between regulators and big technology companies, systemic fragility, exclusion and discrimination, stagnation, loss of artistic and cultural diversity, undue political influence, and rent extraction, to name just a few. Pluralistic and open arrangements, on the other hand, may exacerbate other types of risks. The proliferation of dangerous technologies, quality-deteriorating competitive races to the bottom, and, arguably, the dissemination of harmful speech come to mind. No doubt, in both concentrated structures and pluralistic markets, regulation and supervision can mitigate these risks significantly. But here again, the effectiveness of regulation and supervision will depend in part on the industry’s structural organization.

The form of industrial organization we pursue therefore depends on which kind of AI risks we think are most salient. If AI development posed existential risk, like nuclear or gain-of-function research, for example, we should aim to limit access to the technology, prevent experimentation with powerful models, and keep dangerous know-how locked up. To achieve that, the government would need to centralize capacities and bring them under strict regulatory or direct government control. Strict liability rules, oversight, and licensing regimes alone could not provide sufficient safeguards. Pluralism would, by definition, be antithetical to safety.

If, however, AI tools lacked catastrophic potential (for the foreseeable future, at least), but they were likely to exacerbate abuses of power or were prone to systemic fragility, we should insist on open and pluralistic arrangements. The best regulation and oversight could not, by itself, provide sufficient safeguards where power remains centralized and monoculture invites systemic failures. Rather, power would need to be distributed and design choices would need to be decentralized. Under these circumstances, concentration would, by definition, be antithetical to safety.

Overall, the Administration’s prioritization of present and foreseeable risks and its emphasis on open and pluralistic arrangements are commendable. However, maintaining the dynamism of this approach is imperative. As AI technology evolves and new risks emerge, government may need to reassess the principles for the AI sector’s industrial organization. Recognizing and continuously adapting to the intricate interplay between technological risks and industrial organization is a crucial aspect of ensuring the safe, secure, and trustworthy development and use of artificial intelligence.

Elettra Bietti

President Biden’s new Executive Order on AI, with its patchwork of proposals, might be criticized as lacking a coherent direction or vision for AI policy. More charitably, however, the Order represents a surprising instance of experimentalism in a technological context that is quickly changing and whose social implications we have yet to grasp. It is a pragmatic attempt at laying out many fronts of action, goals, and potential paths forward all at once, in the hope that some of these proposals might avert large-scale catastrophe and that some coherence will emerge over time.

The Order’s framing around “competition” and “innovation” illustrates both the advantages and drawbacks of this experimentalist approach.

On the one hand, the Order flirts with a libertarian, bottom-up vision of AI governance. To date, companies and other stakeholders have steadily pushed the notion that, to avoid stifling innovation, AI should be governed by private companies alone, through self-regulatory efforts such as embedding ethicists in firms or encoding technical “fairness” standards in ML systems. The Biden Order does not depart from the belief that self-regulation and technical tools can promote innovation while averting threats. The emphasis on cryptographic tools and privacy-enhancing technologies (PETs), for example, shows that the government hopes to facilitate market-based and technical efforts even in a legally impoverished landscape devoid of privacy and data governance reforms.

At the same time, the Order’s industrial policy bent unambiguously advocates more active government involvement to ensure the US economy’s competitiveness at a national and global level. The Order’s concern with “dominant firms” and with “concentrated control of key inputs,” such as semiconductors, data, and cloud services, denotes an appetite for bold and pre-emptive antitrust enforcement in this nascent, yet already densely controlled, industry. For example, the Federal Trade Commission is specifically encouraged to use its rulemaking authority to promote competition in the AI space. The Order also supports government-led efforts to fund startups and small business entrants, welcomes qualified immigrant labor, and advocates for the US’s position as a global leader in a variety of AI business segments including semiconductors.

At this early stage in the development of US AI policy, the Order’s lack of a coherent vision around “competition” and “innovation”—its simultaneous support for permissionless innovation and for strong government intervention—is both a blessing and a curse. It presents policymakers and regulators with difficult interpretive puzzles, but also creates opportunities to promote a variety of approaches and efforts side-by-side, even when they appear to conflict. In a space that remains in flux, and where incumbents are gaining ground, one hope is that, thanks to this Order, the deregulatory vision long advocated by industry stakeholders will start giving way to more robust conceptions of the government’s role in ensuring a just, dynamic and plural information economy.

Jason Jackson and JS Tan

Much of the Biden Administration’s recent AI executive order is framed as regulatory protections for consumers and workers: ensuring user safety, bolstering data privacy, minimizing bias, and mitigating harm. But the order could also be read as industrial policy, aiming to catalyze AI research in “vital areas like healthcare and climate change,” promote a competitive AI market, and expand the supply of skilled workers. Indeed, in addition to regulating AI, the EO also has developmental functions not unlike the more explicitly industrial-policy-oriented IRA and CHIPS Act.

AI regulation is not just an American issue. To assess the likely effectiveness of the EO, both in advancing industry and in ensuring safety and equity, it is useful to compare it to ongoing efforts in other country contexts. Governments recognize the imperative of regulating AI given the range of known and unknown harms. Yet they are also in technological competition with one another to be leaders in AI development. As such, these two goals are likely to come into conflict, as promoting AI can come at the expense of effectively regulating it.

China, for instance, is often cited as an example of proactive state governance, and the government has indeed enacted regulations around recommendation algorithms and synthetically generated media. However, the 2017 New Generation AI Development Plan reveals the core motivation: promoting Chinese firms to global AI leadership and boosting China’s economy. Yet since then, algorithmically generated social media feeds have been perceived as a threat to the CCP’s ability to control the public discourse, leading to policies to control information flow and ensure that AI-generated content reflects “core socialist values.” China’s AI policy is thus precariously balanced between ambitions of technological leadership and maintaining social and political control. 

Perhaps even more so than China, the Indian state has taken the lead in the development and deployment of AI at scale through a form of “goventrepreneurism” with the private sector in tow. The Modi-led BJP aims to create a “digitally empowered society and knowledge economy,” promising “minimum government, maximum governance” through “Digital India” programs like Aadhaar, the biometric database that enables state-based social service provision, and the India Stack, an infrastructure of APIs built on top of Aadhaar that enables corporations to sell digital consumer services to the now digitally-captured Indian market. The BJP now seeks to harness the power of AI to fuel economic growth. However, it is unclear if this will succeed in transforming Indian industry. Yet as a form of new public management, Digital India is manifesting as a powerful program of centralization and a tool of surveillance and discipline, recalling China’s social credit scoring system or indeed, post-9/11 surveillance programs under the US Department of Homeland Security.

By contrast, the European Union epitomizes progress in digital governance. GDPR created world-leading consumer protection standards for millions inside and outside of Europe (despite ambiguities that enable favorable interpretation by tech firms and have threatened effective enforcement). EU policy, including the recently proposed Artificial Intelligence Act (2023), may ensure continued leadership in tech regulation but does less to ensure Europe is not a laggard in tech innovation. At best, it concerns itself with the rules of economic competition; indeed, French startups and officials have articulated precisely the concern we raise here: that EU regulation may not only do little to substantively promote innovation, it may actually impede it. As President Macron recently stated: “Regulation must be controlled, not punitive, to preserve innovation.”

This brief comparative cross-national perspective highlights how developmental interests and political regimes shape AI governance. With the new EO, the US seems to fall somewhere in between China, India, and Europe, with a stated commitment to mitigate AI harms but also a clear imperative to stay ahead of China in AI development. There is good reason to worry, however, that American tech executives will play up this geopolitical rivalry to weaken regulatory efforts, as they have in other domains in the past. Jeff Bezos, for instance, has suggested that the US might face consequences if the tech industry disengaged from Pentagon projects. Mark Zuckerberg, meanwhile, claimed that regulating its digital currency “could jeopardize American democratic values and global leadership,” which Meta presumably embodies. Private corporate interests’ invocation of national competitiveness could thus curb the US’s commitment and capability to effectively regulate AI.


Bonus Reaction: ChatGPT

To conclude our rapid roundtable, we asked ChatGPT to tell us which aspect of the executive order was the most interesting. However, the answer it generated was quite boring, so we asked it to re-write the response in the style of Tony Soprano:

“Hey, listen up. You see this executive order thing from President Biden on AI? It’s like he’s telling everyone, ‘We’re not just gonna play around with fancy tech; we’re gonna do it the right way, capisce?’ What stands out the most is this strong push for responsible and ethical AI. It’s like he’s saying, ‘We’re not gonna let these machines run wild, screwing with people’s privacy and messing things up.’

It’s kinda like the boss telling his crew, ‘We operate within the rules. We don’t want any funny business, no biases, and everyone’s gotta be held accountable for their actions.’ This ain’t just about tech; it’s about principles, values, and making sure the people come first.

Biden’s saying, ‘We’re gonna be leaders in AI, but we’re gonna do it the right way. No shortcuts, no shady stuff.’ It’s like running a family business – you wanna be on top, but you gotta have respect, you gotta play fair. This focus on ethical AI, it’s like the boss setting the tone for the whole operation. It’s not just about the technology; it’s about how you use it, and making sure it’s in line with what we stand for.”

As the actual Tony Soprano once said, “All this from a slice of gabagool?”