After a long week of driving, Domingo was one trip away from reaching his 96th ride, at which point he’d receive a $100 bonus from Uber. Like other drivers, he received untenably low pay per ride, so he relied upon bonuses, surges, and similar incentives to make ends meet. But none of this was predictable or consistent; access to these various inducements changed every week. Some weeks, Domingo wouldn’t be allocated a bonus deal at all, even though his friends would. But that particular week, he had received one, and he hustled to complete the number of rides necessary, budgeting with the $100 in mind. It was going to be his grocery money.
It was ten o’clock, and he was in a popular area of Los Angeles. He texted his partner to tell her he would probably be home soon. But instead, he drove around for 45 minutes, waiting for the algorithm to give him another ride. The Uber app, he was sure, was passing him over, doling out rides to people who weren’t as close to reaching their bonus, so that he’d be kept in the pool of available drivers for longer. Should he stick it out, or give up and call it a night? The situation was maddening.
Domingo’s experience that night was not simply a matter of bad luck or random chance. Rather, it was an engineered outcome built on technological developments that, over the past two decades, have ushered in extreme levels of workplace monitoring and surveillance across sectors. These developments have given rise to a range of now well-known concerns: limitations on worker privacy and autonomy, the potential for society-level discrimination to seep into machine-learning systems, and a general lack of transparency and consent regarding workplace data collection and retention.
But for a growing number of low-income and racial minority workers in the United States, on-the-job data collection and algorithmic decision-making systems are having another profound yet overlooked impact: they are undermining the possibility of economic stability and mobility through work by transforming the basic terms of how workers are paid.
Rather than receiving a predictable hourly wage—or a salary—workers like Domingo and others laboring in the logistics sector have been earning under a new system in which their constantly fluctuating wages are closely tied to algorithmic labor management. Under these new remuneration schemes, workers are paid different wages—calculated using opaque and ever-changing formulas reflecting individual driver location, behavior, demand, supply, and other factors—for broadly similar work. While companies like Uber use dynamic pricing and incentive structures, companies like Amazon pay workers through algorithmically determined “bonuses” and scorecards that influence driver behavior through digitalized surveillance and adjudication.
In a new article, I draw on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers to examine these dramatic changes in wage calculation, coordination, and distribution: the use of granular data to produce unpredictable, variable, and personalized pay. Rooted in workers’ on-the-job experiences, I construct a novel framework to understand the ascent of digitalized variable pay practices—the transferal of price discrimination from the consumer context to the labor context—which I identify as algorithmic wage discrimination. As a wage-setting technique, algorithmic wage discrimination encompasses not only digitalized payment for work completed but, critically, digitalized decisions to allocate work and judge worker behavior, which are significant determinants of firm control.
Though firms have relied upon performance-based variable pay for some time, my research in the on-demand ride-hail industry suggests that algorithmic wage discrimination raises a new and distinctive set of concerns. In contrast to more traditional forms of variable pay like commissions, algorithmic wage discrimination arises from (and functions akin to) the practice of consumer price discrimination, in which individual consumers are charged as much as a firm determines they are willing to pay.
As a labor management practice, algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that they may be willing to accept. Given the information asymmetry between workers and the firm, companies can calculate the exact wage rates necessary to incentivize desired behaviors, while workers can only guess as to why they make what they do.
In addition to being rife with mistakes that are difficult or impossible for workers to ascertain and correct, algorithmic wage discrimination creates a labor market in which people who are doing the same work, with the same skill, for the same company, at the same time, may receive different hourly pay. Moreover, this personalized wage is determined through an obscure, complex system that makes it nearly impossible for workers to predict or understand their frequently declining compensation.
Across firms, both in the on-demand economy and in some cases, beyond, the opaque practices that constitute algorithmic wage discrimination raise central questions about the changing nature of work and its regulation under informational capitalism. Most centrally, what makes payment for labor today fair? How does algorithmic wage discrimination change and affect the everyday experience of work? And, considering these questions, how should the law intervene in this moment of rupture?
A Moral Economy of Work: Wage Fairness
Although the U.S.-based system of work is largely regulated through contracts with a strong deference to the “managerial prerogative,” two general restrictions with respect to wages have emerged from social and labor movements to address moral concerns of distributional injustices: minimum wage and overtime laws, which set a price floor for the purchase of labor (in relation to time), and prohibitions on discrimination in the terms, conditions, and privileges of employment, which require firms to provide “equal pay for equal work.” Both sets of wage laws can be understood as forming a core moral foundation for the regulation of most work in the U.S.
Even during the Lochner era, when the U.S. Supreme Court was hostile to wage price laws—interpreting them as exceeding the state’s police power and intruding on workers’ freedom to contract—the Court frequently upheld other wage-related legislation through the logic of calculative fairness. This logic, best embodied in McLean v. Arkansas and Knoxville Iron Co. v. Harbison, underscored the importance of the wage-setting process: that industrial workers should be paid in ways that were fair in form and method. If a miner was to be paid by the quantity of coal that he mined, the mining company could not weigh the coal after running it through a screen. That is, a firm could not ascribe value to a worker’s labor by introducing a new, obscuring instrument to calculate wages.
Algorithmic wage discrimination represents a dramatic departure from these norms of fairness. Even setting aside concerns that algorithmically determined wages will fail to meet a minimum wage, this method of remuneration is unfair because it determines one’s wages through an entirely unpredictable and opaque means: the worker cannot know what the firm has algorithmically decided their labor is worth, and the technological form of calculation makes each person’s wage different, even if their work, in all other ways, is the same.
This opaque process of payment also contributes to a more familiar kind of wage discrimination. According to Uber’s own data, interpreted by its own researchers, women working for the company make roughly seven percent less than men. The company’s economists attribute the wage difference to, among other things, “the logic of compensating differentials”—that is, the mechanisms of surge pricing and variation in driver idle time. The authors of the study analogize the gender pay gap found among ride-hail drivers to that found among JD and MBA graduates, which studies have attributed largely to individual preferences that correlate with gender, such as a preference to work fewer hours or to work at lower-paying jobs. However, unlike in the case of lawyers or MBAs, the pay differential between Uber drivers cannot be explained by women workers choosing to work fewer hours or even certain hours. Rather, the determinants that result in lower pay for women drivers are driven in large part by the structure of wage payment—by algorithmic wage discrimination—which compensates workers differently for driving in particular areas and at different speeds. This, by the company’s own account, results in gender pay discrimination.
Beyond undermining long-established norms of pay fairness, algorithmic wage discrimination also significantly changes the everyday experience of work. Not only do workers bemoan the lack of predictability and low pay, but they also object to feeling constantly tricked by the automated technologies, especially when they have come to rely upon a technique to earn and the system suddenly changes.
Over the course of my research, with wages for ride-hail drivers continuing to decline, I frequently heard drivers complain about the “casino culture” generated by on-demand work. Rather than describe the various ways in which they are algorithmically managed through the lens of “games,” as is often discussed in the literature, workers in my research talked about their work experience through the lens of “gambling.” For instance, here’s how Ben, a driver and organizer with Rideshare Drivers United, described his experience:
It’s like gambling! The house always wins…This is why they give tools and remove tools – so you accept every ride, even if it is costing you money. You always think you are going to hit the jackpot. If you get 2-3 of these good rides, those are the screenshots that people share in the months ahead. Those are the receipts they will show.
Ben was not alone in using this language. Many drivers described the obscure terms under which they earned through the lens of uncertainty and data manipulation. It’s “casino mechanics,” Nicole often said. Domingo, the longtime driver whose experience began this post, felt that, over time, he was being tricked into working longer and longer for less and less. As he saw it, Uber was not keeping its side of the bargain. He had worked hard to reach his quest and attain his $100 bonus, but he found that the algorithm was using that fact against him.
In dynamic interactions between a worker and the app, the machine—like a supervisor—is a powerful, personalized conduit of firm interest and control. But unlike a human boss, the machine’s one-sided opacity, inconsistencies, and cryptic designs create shared worker experiences of risk and limited agency.
Perhaps most insidiously, however, the manufactured uncertainties of algorithmic wage discrimination also generate hope—that a fare will offer a big payout or that next week’s “quest” guarantee will be higher than this week’s—that temporarily defers or suspends the recognition that the “house always wins.” The cruelty of those fleeting moments of optimism becomes clear once again when workers get their payout and subtract their costs.
Some of the harms mentioned above might be addressed through familiar if elusive solutions—making sure that drivers are recognized as employees, rather than contractors, or setting appropriate wage floors. Yet certain harms that arise from digitalized variable pay—the constant uncertainty and sense of manipulation—call for additional regulation.
Some organized groups of workers and labor advocates have turned their advocacy toward the data and algorithmic control that are invisible to them. Using data privacy laws, workers in both Europe and the U.S. are suing to make transparent the data and algorithms that determine their pay. Others are engaging in data transparency attempts through the counter-collection of data, accomplished through data cooperatives.
But, for reasons I discuss in the longer article, addressing the extraordinary problems posed by algorithmic wage discrimination must go beyond longstanding transparency, consent, and ownership models. Instead, I invite scholars of work and data governance to think more expansively not just about the legal parameters of whether data collection is consensual, what happens to data after it is collected, and who owns the data, but also about the legal abolition of digital data extraction at work, or what I have called “data abolition.”
Digitalized data extraction at work is neither inevitable nor, especially when analyzed through the lens of moral economy, a necessary instrument of labor management. Adopting this approach, I propose a ban on algorithmic wage discrimination practices, which, in turn, may disincentivize or even eliminate certain forms of data collection and digital surveillance at work that have long troubled privacy, work, and equality law scholars.