Automation of Labor, Labor of Automation

Frank Pasquale (@FrankPasquale) is Professor of Law at Brooklyn Law School, and author of New Laws of Robotics (2020) and The Black Box Society (2015).

Sandeep Vaheesan (@sandeepvaheesan) is the legal director at the Open Markets Institute, and author of Democracy in Power: A History of Electrification in the United States.

In this post, Sandeep Vaheesan interviews Frank Pasquale about his forthcoming book, New Laws of Robotics: Defending Human Expertise in the Age of AI.

Sandeep Vaheesan: Your book is powerful, deep, and a pleasure to read. It will be a major contribution to raging debates over democracy, the power of the tech sector, and what constitutes utopia (or dystopia). The book strikes at trans-ideological myths that have driven the discourse and shaped policy for the past few decades. What is the genesis of this work?

Frank Pasquale: Thanks very much for those kind words, Sandeep. I appreciate your focus on myths, because one of the main purposes of a book like this is to vindicate some common sense about the value of labor, in the face of elaborately mathematicized economics that has lost sight of the purpose of economic growth.

My central argument is that AI and robotics most often complement, rather than replace, human labor, and that in many areas, we should maintain this status quo. Our technology policy should ensure that AI increases wages for most workers, rather than supplanting them.

Those goals are not terribly controversial—governments and political parties around the world claim to advance them. What politician has ever said they want to cut jobs? But governments do tend to favor capital over labor, and stock markets love layoffs. Rentiers can invest in politicians just as they do in stocks and bonds. And when they do so, a key concern is short-term profits, rather than longer-term investment in a more sustainable and inclusive economy.

However, this isn’t just a problem of corruption and improper influence. The economics of automation needs an overhaul. The leading figures in the field rarely make normative judgments about what the economy ought to accomplish or provide. They are so caught up in abstractions that they can’t grapple with the climate, COVID, and inequality catastrophes in front of their faces.

I recall hearing one economist talk about how people who make under $24 an hour are very likely to be replaced by robots, while those making above $48 an hour are very unlikely to be automated out of their jobs. The lesson, according to President Obama’s Chair of the Council of Economic Advisers, was that education and job training are key. I don’t disagree on that—New Laws of Robotics promotes both universal college education and targeted training opportunities. But I go much further. I don’t presume that all the well-paid jobs are all that socially useful. Rather, I propose ways to channel investment in AI and robotics toward human services that actually meet important needs, and away from a broad class of tech devoted to ranking, rating, sorting, and punishing people. I think that much of that AI—ranging from police robots to credit scores based on social media profiles to affective computing—is vastly overrated and creepy. (To paraphrase Ryan Calo: there are problems both if it fails to work, and if it works too well!) So New Laws of Robotics enthusiastically embraces the industrial policy that mainstream economics primly sidesteps to maintain a façade of neutrality.

Sandeep Vaheesan: An important theme of your book is that technology is not an autonomous force. Too often, technology is presented as existing apart from society and periodically shaking up, or “disrupting,” supposedly problematic arrangements—think of the earlier narrative on Uber and the traditional cab industry. You do an excellent job discrediting this view and explaining that the direction and the pace of technological change are a product of human decision-making. How do you see technological change happening at present and what would a more democratic model be?

Frank Pasquale: At present, the money goes where the perceived market is. So at the margin, if a VC has to decide on whether to invest in AI that, say, improves radiological exams (possibly catching cancer or other abnormalities earlier), or AI that boosts “employee engagement” by tracking eye movements—well, it depends on what can bring in massive profits faster. And there are some structural features that make health care the worse bet. Privacy laws are stricter in health care, so it may be harder to get the data. For a long time, health care providers have been pretty fragmented, so the radiology AI salesperson might need to visit hundreds of different offices to get buy-in. The technology needs to be validated, often with rigorous testing. And most importantly, insurers (both public and private) may not want to pay for the new technology, and thus may make it difficult for conscientious, forward-thinking radiologists to implement it well. In much of the private sector, the exact opposite is the case: there is no insurer judging the transaction, concentration is rampant, validation is an afterthought, and privacy laws are both weaker than HIPAA and less well-enforced.

Now, one vision for accelerating AI in health care is to get rid of features I just mentioned: suspend HIPAA and human subjects research protections, while vertically integrating care teams into massive corporations (perhaps run by billionaires). Maybe an Amazon hospital network can hire Amazon doctors, while an Amazon-Uber partnership takes over ambulance service and an Amazon pharmacy negotiates to reduce drug prices. Advances in AI could be tucked into that behemoth. Top managers in Seattle could gather data across hundreds of institutions, ranking and rating physicians based on patient outcomes. To the extent the Trump Administration has an AI policy, it’s that: shovel more money and power to the biggest American tech firms so they can keep up surveillance capitalism and merger sprees.

My book goes in a very different direction. I’d give professionals like doctors and nurses more autonomy to build a better health care system, while crafting incentives for them to take on more governance and research roles. I’d also encourage the professionalization of work that is now undervalued, like home health care. Would you rather see your parents or grandparents “cared for” by a robot (even cute ones, like the Paro robotic seal, or RoBear), or would you rather such technology be introduced to them by caring and well-trained professionals, who can spark conversations about the robot?

Sandeep Vaheesan: Your case for professionalism departs from the bipartisan consensus of our times. Centrists and libertarians alike have condemned professions as “anticompetitive guilds” and called for weakening their power. While this may have a kernel of truth for law, medicine, and other fields in which professionals can earn substantial incomes, the critique has been leveled against florists, hair stylists, and others who typically do not make six figures. What are the economic and political arguments for not only maintaining but expanding professionalism in American society? What are some advantages of professional market governance relative to corporate market governance?

Frank Pasquale: One reason I wrote the book was a sense that professionals were getting a bad rap from all sides: left, right, and center. The right abhors worker interference in “labor markets,” while promoting capital’s autonomy to maximize its own coordination power. The more anarchist strands of the left challenge epistemic authority as a mere assertion of power. And attacks on occupational licensing are catnip for centrists, who can convene left and right to erode the hard-won rights of professionals in order to “level the playing field for labor”—downward, of course.

For this book, I kept asking: what would it look like to level the labor playing field up? How could a nurse enjoy something like the security of position that professors have? How could professors take a larger role in governing their universities, in the way nurses’ unions have successfully done in many areas? What would it look like for workers in logistics, cleaning, agriculture, and mining to unionize and professionalize, demanding a greater say in how technology was adopted in their fields?

If you take any of these questions individually, they may smack of unjustifiable privilege. And if a teacher were to simply say “please don’t develop educational AI so I can keep earning a salary,” that’s not a good argument for stopping it (just as horse-and-buggy drivers should not have been allowed to stop the development of cars).

However, as I explain in detail in my chapter on education, there is much more that teachers can say to justify their role. No robot or AI is anywhere near the competence of the median teacher. Even if they do advance in important ways, I still foresee teachers standing between students and educational AI the way a doctor stands between patients and prescription drugs. Teachers both help produce, and gather, data on which educational AI (such as chatbot teaching assistants) depends. Teachers actually know students personally, while a distant “edtech vendor” may see them simply as revenue streams. If a test is badly graded by an AI, a teacher can stand as an immediately accessible “person-in-the-loop” to review it. That is distributed, in-person, democratized expertise—something that dominant, autocratic corporate forms cannot replicate.

Sandeep Vaheesan: You call for a much thicker democracy than what we have today. What are some features of the democratic society you envision that we lack at present and arguably have never had?

Frank Pasquale: For me, the key issue is the democratization of work. Elizabeth Anderson is the go-to theorist here—her brilliant book Private Government unveils the extreme hierarchy of most workplaces. The big question for AI and robotics in the workplace now is whether management recruits these technologies to engage in limitless workplace surveillance (and control), or whether workers get some say in how automation proceeds, how its outcomes are judged, and how workplace data is governed.