The Second Wave of Algorithmic Accountability

Frank Pasquale (@FrankPasquale) is Professor of Law at Brooklyn Law School, and author of New Laws of Robotics (2020) and The Black Box Society (2015).

Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles.

This policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities. And just as this “first wave” of algorithmic accountability research and activism has targeted existing systems, an emerging “second wave” of algorithmic accountability has begun to address more structural concerns. Both waves will be essential to ensure a fairer, and more genuinely emancipatory, political economy of technology.
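
As a rough illustration of what such auditing can involve, here is a minimal sketch of a disparate-impact check modeled loosely on the EEOC’s “four-fifths” rule of thumb. The groups, counts, and threshold below are hypothetical, chosen only to show the arithmetic; real audits turn on context, statute, and far richer data.

```python
# Minimal sketch of a disparate-impact check, modeled loosely on the
# "four-fifths" rule of thumb. All groups, counts, and thresholds here
# are hypothetical and purely illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants from a group who received a favorable outcome."""
    return selected / applicants

# Hypothetical audit data: approvals by group for an automated underwriting model.
groups = {
    "reference_group":    {"selected": 180, "applicants": 400},
    "marginalized_group": {"selected": 120, "applicants": 400},
}

rate_ref = selection_rate(**groups["reference_group"])      # 0.45
rate_marg = selection_rate(**groups["marginalized_group"])  # 0.30
ratio = rate_marg / rate_ref                                # ~0.67

print(f"Selection rates: {rate_ref:.2f} vs {rate_marg:.2f}; ratio = {ratio:.2f}")
if ratio < 0.8:
    # The four-fifths rule flags ratios below 0.8 for closer scrutiny;
    # it is a screening heuristic, not a legal conclusion.
    print("Ratio falls below the four-fifths threshold; the model warrants review.")
```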

Though at first besotted with computational evaluations of persons, even many members of the corporate and governmental establishment now acknowledge that data can be biased, inaccurate, or inappropriate. Academics have established conferences like “Fairness, Accountability, and Transparency in Machine Learning” to create institutional forums where coders, lawyers, and social scientists regularly interact to address social justice concerns. When businesses and governments announce plans to use AI, there are routine challenges and demands for audits. Some result in real policy change. For example, Australia’s Liberal government recently reversed some “robodebt” policies, finally caving to justified outrage at algorithmic dunning.

All these positive developments result from a “first wave” of algorithmic accountability advocacy and research (to borrow a periodization familiar from the history of feminism). These are vital actions, and they need to continue indefinitely—there must be constant vigilance over AI in sociotechnical systems, which are all too often the unacknowledged legislators of our daily access to information, capital, and even dating. However, as Julia Powles and Helen Nissenbaum have warned, we cannot stop at this first wave. They pose the following questions:

Which systems really deserve to be built? Which problems most need to be tackled? Who is best placed to build them? And who decides? We need genuine accountability mechanisms, external to companies and accessible to populations. Any A.I. system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest.

While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.

For example, when it comes to facial recognition, first-wave researchers have demonstrated that all too many of these systems cannot identify minorities’ faces well. These researchers have tended to focus on making facial recognition more inclusive, ensuring that its success rate for minorities is as high as for the majority population. However, several second-wave researchers and advocates have asked: if these systems are often used for oppression or social stratification, should inclusion really be the goal? Isn’t it better to ban them, or at least ensure they are only licensed for socially productive uses?
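
To make the first-wave framing concrete, here is a minimal sketch of the kind of per-group error comparison such researchers run. The counts are hypothetical; real audits, such as the Gender Shades study, rest on carefully constructed benchmark datasets.

```python
# Illustrative sketch of a first-wave audit: comparing a face-recognition
# system's error rates across demographic groups. The counts below are
# hypothetical, not drawn from any published benchmark.

results = {
    # group: (correct identifications, total attempts)
    "majority_group": (970, 1000),
    "minority_group": (860, 1000),
}

error_rates = {}
for group, (correct, total) in results.items():
    error_rates[group] = 1 - correct / total
    print(f"{group}: error rate {error_rates[group]:.1%}")

gap = error_rates["minority_group"] - error_rates["majority_group"]
print(f"Error-rate gap: {gap:.1%}")
# First-wave work targets closing this gap; second-wave critics ask whether
# a more accurate system should be deployed at all.
```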

The second-wave concerns become even more pronounced in the case of face-classifying AI, which has been touted as now (or soon) able to infer sexual orientation, criminal tendencies, and mortality risk from images of faces. It is not enough for the research community to compile reasons why “facial inference of tendencies toward criminality,” when extrapolated from a small or partial dataset, is unlikely to provide durable clues about who is likely to commit crimes. We should also ask whether such research should be done at all.

We should expect this division between first- and second-wave concerns to inform discussions of AI and robotics in medicine as well. For some researchers who are developing mental health apps, the first-wave algorithmic accountability concerns will focus on whether a linguistic corpus of stimuli and responses adequately covers diverse communities with distinct accents and modes of self-presentation.

Second-wave critics of these apps may take a more law-and-political-economy approach, questioning whether the apps are prematurely disrupting markets for (and the profession of) mental health care in order to accelerate the substitution of cheap (if limited) software for more expensive, expert, and empathetic professionals. These labor questions are already a staple of platform regulation. I predict they will spread to many areas of algorithmic accountability research, as critics explore who is benefiting from (and burdened by) data collection, analysis, and use.

And lastly (for this post), we may see a divide in finance regulation. Establishment voices have hailed fintech as a revolutionary way to include more individuals in the financial system. Given biases in credit scores based on “fringe” or “alternative” data (such as social media use), this establishment is relatively comfortable with some basic anti-bias interventions.

But we should also ask larger questions about when “financial inclusion” can be predatory, creepy (as in 24/7 surveillance), or subordinating (as in at least one Indian fintech app, which reduces the scores of those who are engaged in political activity). What happens when fintech enables a form of “perpetual debt”? Kevin P. Donovan and Emma Park have observed exactly this problem in Kenya:

Despite their small size, the loans come with a big cost—sometimes as much as 100 percent annualized. As one Nairobian told us, these apps “give you money gently, and then they come for your neck.” He is not alone in his assessment of “fintech,” the ballooning financial technology industry that provides loans through mobile apps. During our research, we heard these emergent regimes of indebtedness called “catastrophic,” a “crisis,” and a major “social problem.” Newspapers report that mobile lending underlays a wave of domestic disarray, violence, and even suicide.
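
The “100 percent annualized” figure is easy to reproduce with back-of-the-envelope arithmetic. The principal, fee, and term below are hypothetical, chosen only to show how a modest flat fee on a thirty-day loan compounds into a triple-digit annual rate.

```python
# Hypothetical numbers illustrating how a small flat fee on a short-term
# mobile loan annualizes to roughly 100 percent.

principal = 2000     # hypothetical loan amount
flat_fee = 150       # one-time fee on a 30-day loan
term_days = 30

periodic_rate = flat_fee / principal                          # 7.5% per 30 days
simple_apr = periodic_rate * (365 / term_days)                # ~91% simple annualized
compound_apr = (1 + periodic_rate) ** (365 / term_days) - 1   # ~141% compounded

print(f"Simple annualized rate:     {simple_apr:.0%}")
print(f"Compounded annualized rate: {compound_apr:.0%}")
```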

As Abbye Atkinson has argued, we must reconsider the proper scope of “credit as social provision.” Sometimes it merely offers the cruel optimism of a Horatio Alger mirage (or, worse, the optimistic cruelty that is a psycho-political hallmark of late capitalism). Just as economic rationales can quickly become rationalizations, enthusiasm about “AI-driven” underwriting is prone to obscure troubling dynamics in finance. Indeed, mainstream economics and AI could confer a patina of legitimacy on broken social systems, if they are left unchallenged.

At present, the first and second waves of algorithmic accountability are largely complementary. First wavers have identified and corrected clear problems in AI, and have raised public awareness of its biases and limits. Second wavers have helped slow down AI and robotics deployment enough so that first wavers have more time and space to deploy constructive reforms. There may well be future clashes between those who want to mend the computational evaluation of persons and those at least open to ending or limiting it. For example: those committed to reducing the error rates of facial recognition systems for minorities may want to add more minorities’ faces to such databases, while those who find facial recognition oppressive will resist that “reform” as yet another form of predatory inclusion. But for now, both waves of algorithmic accountability appear to me to share a common aim: making sociotechnical systems more responsive to marginalized communities.