The United States is home to overlapping crises of social, economic, and political inequality. As debates about how to promote a more egalitarian society have become increasingly salient, one approach that has gained traction is to inform socially consequential policy decisions using algorithms—in particular, machine learning systems that infer patterns from historical data to make predictions about future outcomes. To scholars across a range of fields, algorithms represent a novel approach to overcoming the cognitive limits and social biases of human decision-makers. Proponents describe how algorithms could “disparate[ly] benefit” historically disadvantaged groups, which are typically judged unfavorably by stereotype-prone human decision-makers. Policymakers and advocates praise algorithms as being able to enhance equality by replacing biased human decisions with “objective” data-driven ones.
Arguments in favor of algorithms run deeper than claims about their particular abilities, however. Proponents increasingly advocate adopting algorithmic logic as a particularly useful mode of reasoning for making sense of complex policy decisions. In this view, the mathematical formalism of algorithms provides a reality check by offering “clarity” on the difficult tradeoffs between goals that might otherwise remain murky and unarticulated. Proponents of this view argue that algorithms can “be a positive force for social justice” because they “let us precisely quantify tradeoffs among society’s different goals” and “force us to make more explicit judgments about underlying principles.”
This general optimism among prominent proponents has led to the application of algorithms in numerous high-stakes decision-making processes, such as pretrial adjudication, child welfare, and unemployment. As algorithms become increasingly central in efforts to reform law and policy, algorithmic reasoning has become one hallmark of what it means to conceive of such problems rigorously.
Thus, as law and political economy scholars take aim at the deficiencies of dominant modes of legal thought and chart a path for law to promote a more just and egalitarian society, they must also attend to the role of algorithmic systems and algorithmic thought in shaping political imaginations. By the same token, computer and information scientists interested in computation’s role in social reforms would do well to learn from the critiques and proposals of the LPE community.
Algorithmic reasoning has become an increasingly prominent mode of theorizing about questions of policy and equality, typically under the guise of technical discussions about computational systems. Because, as John Dewey noted, “[t]he way in which [a] problem is conceived decides what specific suggestions are entertained and which are dismissed,” how we reason about social challenges shapes how we respond to them. It is therefore essential to consider whether algorithmic reasoning—especially particular manifestations such as “algorithmic fairness”—provides the appropriate conceptual and practical tools for reforming public policy and enhancing equality. Without the capacity to represent “normatively relevant political facts,” algorithmic methods will fail to comprehend injustices and to guide effective paths for remediating those injustices. We must therefore ask: What types of insights does algorithmic reasoning highlight and obscure?
Algorithmic reasoning suffers from many of the same deficiencies as legal thought in the era of the “Twentieth-Century Synthesis,” which rendered questions of political economy, power, and structural inequality invisible or irrelevant. The dominant mode of algorithmic reasoning adheres to a logic of “algorithmic formalism,” which involves three key orientations: objectivity/neutrality, internalism, and universalism. Although often reasonable for traditional algorithmic work, these orientations produce a chronic tunnel vision that can lead algorithmic policy interventions to entrench unjust social conditions, narrow the range of possible reforms, and impose algorithmic logics at the expense of others. In turn, efforts to achieve algorithmic fairness rely on a restrictive frame of analysis and consider only the explicit functioning of an algorithm when rendering decisions, treating the social context as static or irrelevant. As a result, even when tools such as pretrial risk assessments satisfy central notions of algorithmic fairness, they can suffer from many of the same issues that critical legal scholars noted in their critique of rights: in particular, risk assessments have indeterminate impacts, rely on individualistic notions of reform, and ultimately serve to legitimize oppressive policies and institutions.
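To make concrete what a “central notion of algorithmic fairness” looks like in practice, consider a minimal sketch of two standard group-fairness criteria, demographic parity and equal opportunity, evaluated at a single decision point. The data, group labels, and function names below are hypothetical and purely illustrative; the point is that such metrics audit only the decision rule itself, leaving the surrounding social context entirely outside the frame of analysis.

```python
# Illustrative sketch (hypothetical toy data): two common group-fairness
# criteria computed at a single decision point, e.g., a pretrial release
# decision. Nothing here captures the broader institutional context.

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = sum(decisions_a) / len(decisions_a)
    rate_b = sum(decisions_b) / len(decisions_b)
    return abs(rate_a - rate_b)

def true_positive_rate(decisions, outcomes):
    """Share of actual positives (outcome == 1) given a positive decision."""
    decided_on_positives = [d for d, y in zip(decisions, outcomes) if y == 1]
    return sum(decided_on_positives) / len(decided_on_positives)

# Hypothetical decisions (1 = released) and outcomes (1 = no rearrest)
group_a_decisions = [1, 1, 0, 1, 0, 1]
group_b_decisions = [1, 0, 1, 1, 0, 1]
group_a_outcomes = [1, 1, 0, 1, 1, 1]
group_b_outcomes = [1, 0, 1, 1, 0, 1]

dp_gap = demographic_parity_gap(group_a_decisions, group_b_decisions)
tpr_gap = abs(true_positive_rate(group_a_decisions, group_a_outcomes)
              - true_positive_rate(group_b_decisions, group_b_outcomes))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {tpr_gap:.2f}")
```

A tool can score well on one such metric (here, the release rates of the two groups are identical) while the metric says nothing about how the decision point itself is situated within broader policies and institutions, which is precisely the narrowness the critique above identifies.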
These deficits in algorithmic reasoning produce efforts at algorithmic reform that are simultaneously too ambitious and not ambitious enough. On the one hand, algorithmic interventions are remarkably bold: algorithms are expected to solve social problems that couldn’t possibly be solved by algorithms. Naïve (and corporate-driven) utopianism is a hallmark of technological and algorithmic thinking. On the other hand, algorithmic interventions are remarkably timid and display a notable lack of social or political imagination: such efforts rarely take aim at broad policies or structural inequalities, instead opting merely to alter the precise mechanisms by which certain existing decisions are made. These two seemingly contradictory flaws have a common source: the narrow algorithmic frame of analysis that fails to see beyond the context of isolated decision-making procedures and the elements of those decisions that are rendered legible in formal, mathematical terms. This constrictive frame prevents rigorous thought regarding the possibilities and limits of socio-technical reform.
What would be required to shift the terms of algorithmic reasoning and interventions so that they play a more productive role in creating a democratic and egalitarian future? To riff on the recent LPE conference plenary on Radical Legal Imaginaries: What might be the tenets of a radical algorithmic imaginary, and how might we bring about such a praxis?
First and foremost, we need a theory of social change that can guide the work of algorithmic policy interventions. Such a theory must be capable of connecting visions for radical reforms with the types of incremental, often piecemeal, interventions provided by algorithms. Particularly within the computer science community, the incremental nature of most algorithmic reforms is typically seen as conflicting with goals for radical reorientations of society—essentially, “don’t let the perfect be the enemy of the good.” Yet, as many social thinkers have articulated, this is a false dichotomy. As Erik Olin Wright describes in his treatise on “real utopias,” we need utopian visions and pragmatic reforms to mutually inform one another in order to produce “utopian destinations that have accessible waystations.” André Gorz’s notion of “reformist” and “non-reformist” reforms provides another useful frame for distinguishing between types (or strategies) of incremental reform, as Amna Akbar has recently explicated further in discussing a democratic political economy. Similarly, police and prison abolition efforts connect calls for abolition with “a framework of gradual decarceration.” These theories of reform can help computer scientists and others better reason about the relationship between algorithmic reforms and long-term visions of radical change.
We also need new approaches to algorithmic reasoning and design that can productively interface with such theories of social change. In recent work, drawing on the lessons of legal realism, Salomé Viljoen and I have proposed an approach of “algorithmic realism” that aims to help algorithm developers account for the realities of social life and the impacts of algorithms. The shift from algorithmic formalism to algorithmic realism embodies three overlapping reorientations: from objectivity/neutrality to a reflexive political consciousness, from internalist logic to a porous approach that recognizes the complexity and fluidity of the social world, and from universalism to an embrace of contextualism. A central component of algorithmic realism is “agnosticism,” which entails approaching algorithms instrumentally, considering their role in larger reform efforts without prioritizing any necessary role for algorithms (in contrast to the “solutionism” that so often pervades efforts to address social challenges with technology). Similarly, we must pursue substantive forms of algorithmic fairness that provide a more expansive analysis of social hierarchies. Such an approach can shift reforms beyond efforts to algorithmically instantiate fairness at a single decision point and toward broader agendas to reform social relationships and decision-making structures, both with and without algorithms.
Numerous other reforms will be necessary to make new approaches to algorithms and social change possible, of which I will name just a few. First is an adapted training process within computer science and related fields that places greater emphasis on issues of power, democracy, and equality. As in law, algorithmic pedagogy typically revolves around a narrow set of concerns (associated with efficiency and formalized reasoning) and involves a pipeline into careers working for major tech companies. Some progress has been made in adapting computer science curricula in recent years, but such efforts still often sideline many of the central political questions raised by algorithms.
Second, greater civic power over the uses and governance of algorithms is necessary to ensure democratic development and oversight of technology. Recent bans on facial recognition algorithms represent one manifestation of the role that communities can play in curbing some of the worst harms of algorithmic reformism. Activist groups such as Data for Black Lives, the Stop LAPD Spying Coalition, and Our Data Bodies are involved in struggles against oppressive algorithms (particularly those used by law enforcement agencies) in various cities across the U.S.
Third, there is a deep need for expanded digital capacity and infrastructure for developing and overseeing algorithms in government. Despite the many risks and harms of algorithms, a high-capacity state could be bolstered by algorithms in important ways, particularly if combined with alternative modes of digital statecraft. As with many other forms of infrastructure, however, digital infrastructure has long been neglected, leaving governments reliant on broken-down systems and partnerships with the private sector, issues that have emerged as significant roadblocks to grappling with the coronavirus pandemic.
As the LPE community sets its gaze on the interactions between legal and economic reasoning, it must also recognize that algorithmic reasoning is becoming deeply influential in shaping visions for understanding and reforming society. Efforts to achieve greater equality through algorithms raise central LPE questions about the relationship between epistemic and normative commitments and the harms of pursuing reform through deficient methods. Both historical and contemporary critical scholarship about legal reasoning thus have much to offer work on algorithms. At the same time, debates about algorithmic decision-making and algorithmic reforms can shed new light on tensions, incompatibilities, and contradictions that exist within legal decision-making. Developing radical imaginaries—legal, algorithmic, and otherwise—requires applying the lessons from these related systems of social ordering to inform our visions for reform within the increasingly complex socio-legal-technical environments at hand.