
Data Nationalization in the Shadow of Social Credit Systems


Frank Pasquale (@FrankPasquale) is Professor of Law at Brooklyn Law School, and author of New Laws of Robotics (2020) and The Black Box Society (2015).

This post is part of a symposium on the Political Economy of Technology. Read the rest of the posts here.

The political economy of digitization is a fraught topic. Scholars and policymakers have disputed the relative merits of centralization and decentralization. Do we want to encourage massive firms to become even bigger, so they can accelerate AI via increasingly comprehensive data collection, analysis, and use? Or do we want to trust-bust the digital economy, encouraging competitors to develop algorithms that can “learn” more from less data? I recently wrote on this tension, exploring the pros and cons of each approach.

However, there are some ways out of the dilemma. Imagine if we could require large firms to license data to potential competitors in both the public and private sectors. That may sound like a privacy nightmare. But anonymization could allay some of these concerns, as it has in the health care context. Moreover, the first areas opened up to such mandated sharing may not even be personal data. Sharing the world’s best mapping data beyond the Googleplex could unleash innovation in logistics, real estate, and transport. Some activists have pushed to characterize Google’s trove of digitized books as an essential facility, which Google would be required to license at fair, reasonable, and non-discriminatory (FRAND) rates to other firms aspiring to categorize, sell, and learn from books. Fair use doctrine could provide another approach here, as Amanda Levendowski argues.

In a recent issue of Logic, Ben Tarnoff has gone beyond the essential facilities argument to make a case for nationalization. Tarnoff believes that nationalized data banks would allow companies (and nonprofits) to “continue to extract and refine data—under democratically determined rules—but with the crucial distinction that they are doing so on our behalf, and for our benefit.” He analogizes such data to natural resources, like minerals and oil. Just as the Norwegian sovereign wealth fund and Alaska Permanent Fund socialize the benefits of oil and gas, public ownership and provision of data could promote more equitable sharing of the plenitude that digitization ought to enable.

Many scholars have interrogated the data/oil comparison. They usually focus on the externalities of oil use, such as air and water pollution and climate change. There are also downsides to data’s concentration and subsequent dissemination. Democratic control will not guarantee privacy protections. Even when directly personally identifiable information is removed from databases, anonymization can sometimes be reversed. Both governments and corporations will be tempted to engage in “modulation”—what Julie Cohen describes as a pervasive form of influence on the beliefs and behaviors of citizens. Such modulation is designed to “produce a particular kind of subject[:] tractable, predictable citizen-consumers whose preferred modes of self-determination play out along predictable and profit-generating trajectories.” Tarnoff acknowledges this dark possibility, and I’d like to dig a bit deeper to explore how it could be mitigated.
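To make the re-identification point concrete, here is a minimal Python sketch of a linkage attack, the standard way “anonymized” releases are reversed. Every record, name, and field below is invented for illustration; the only claim is the general mechanism: if quasi-identifiers like ZIP code, birth year, and sex survive in a de-identified table, joining it against a public roster that shares those fields can restore names.

```python
# A toy linkage attack: all data here is fabricated for illustration.
# "Anonymized" records with names stripped but quasi-identifiers intact.
anonymized_records = [
    {"zip": "11201", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "11215", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

# A hypothetical public roster (say, a voter file) sharing those fields.
public_roster = [
    {"name": "Alice Example", "zip": "11201", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Example",   "zip": "11215", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, roster):
    """Join the two tables on quasi-identifiers; a unique match re-identifies."""
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in roster
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # uniqueness defeats the "anonymization"
            print(f"{matches[0]['name']} -> {record['diagnosis']}")

reidentify(anonymized_records, public_roster)
# Alice Example -> diabetes
# Bob Example -> asthma
```

The defense is not simply stripping more columns after the fact, but ensuring that each quasi-identifier combination matches many people (the intuition behind k-anonymity) or adding calibrated noise (differential privacy); neither comes for free in a centralized data bank.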

Reputational Economies of Social Credit and Debt

Modulation can play out in authoritarian, market, and paternalistic modes. In its mildest form, such modulation relies on nudges plausibly based on the nudged person’s own goals and aspirations—a “libertarian paternalism” aimed at making good choices easier. In market mode, the highest bidder for some set of persons’ attention enjoys the chance to influence them. Each of these is problematic, as I have noted in articles and a book. However, I think that authoritarian modulation is the biggest worry we face as we contemplate the centralization of data in repositories owned by (or accessible to) governments. China appears to be experimenting with such a system, and provides some excellent examples of what data centralizers should constitutionally prohibit as they develop the data gathering power of the state.

The Chinese social credit system (SCS) is one of the most ambitious systems of social control ever proposed. Jay Stanley, a senior policy analyst at the ACLU’s Speech, Privacy & Technology Project, has summarized a series of disturbing news stories on China’s “Planning Outline for the Construction of a Social Credit System.” As Stanley observes, “Among the things that will hurt a citizen’s score are posting political opinions without prior permission, or posting information that the regime does not like.” At least one potential version of the system would also be based on peer scoring. That is, if an activist criticized the government or otherwise deviated from prescribed behavior, not only would her score go down, but her family and friends’ scores would also decline. This algorithmic contagion bears an uncomfortable resemblance to theories of collective punishment.
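To see why peer scoring functions as collective punishment, consider this deliberately simplified sketch. It does not describe any actual SCS implementation; the scores, the social graph, and the PENALTY and SPREAD_FACTOR parameters are all hypothetical, chosen only to show how one person’s penalty propagates outward through a social graph.

```python
# A hypothetical peer-scoring contagion model (not any real SCS's algorithm).
scores = {"activist": 700, "sibling": 720, "friend": 710, "stranger": 705}

# Toy social graph: who is linked to whom.
links = {"activist": ["sibling", "friend"]}

PENALTY = 100        # invented deduction for a "prohibited" post
SPREAD_FACTOR = 0.3  # invented fraction passed to each linked person

def penalize(person):
    """Deduct a penalty, then propagate a fraction of it one hop outward."""
    scores[person] -= PENALTY
    for neighbor in links.get(person, []):
        scores[neighbor] -= int(PENALTY * SPREAD_FACTOR)

penalize("activist")
print(scores)
# {'activist': 600, 'sibling': 690, 'friend': 680, 'stranger': 705}
```

Even in this toy model, the incentive it creates is plain: the sibling and friend can protect their own scores only by shunning the activist, which is exactly the chilling dynamic that makes collective punishment effective.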

Admittedly, at least one scholar has characterized the SCS as less fearsome: more “an ecosystem of initiatives broadly sharing a similar underlying logic, than a fully unified and integrated machine for social control.” However, heavy-handed application of no-travel and no-hotel lists in China does not inspire much confidence. There is no appeal mechanism—a basic aspect of due process in any scored society.

The SCS’s stated aim is to enable the “trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” But the system is not even succeeding on its own terms in many contexts. Message boards indicate that some citizens are gaming the SCS’s data feeds. For example, a bank may send in false information to blackball its best customer, in order to keep that customer from seeking better terms at competing banks. To the extent the system is a black box, there is no way for the victim to find out about the defamation.

This basic concern about data quality and integrity gives the lie to arguments that “Chinese AI companies, almost wholly unfettered by privacy concerns, will have a raw competitive edge when it comes to exploiting data.” If guarantees of due process are limited or non-existent, how strong can promises of data quality and integrity be? Moreover, the system cannot be legitimate if it imposes “discrediting” punishments grossly disproportionate to the “crimes” (or, more evocatively, “sins”) the system identifies. “How the person is restricted in terms of public services or business opportunities should be in accordance with how and to what extent he or she lost his credibility,” stated Zhi Zhenfeng, a legal expert at the Chinese Academy of Social Sciences in Beijing. This is a maxim that not only the Chinese, but also the US, government needs to consider as each develops opaque systems for stigmatizing individuals.

Raising the Costs of Algorithmic Governance of Persons

I am opposed to SCSs in general. Following Deleuze’s critique of “control societies,” I believe that algorithmic governance is prone to a tyrannical granularity, a spectrum of control more exacting and intrusive than older forms of social order. (For example, one “intelligent classroom behavior management system” is set to scan classrooms every 30 seconds and record “students’ facial expressions, categorizing them into happy, angry, fearful, confused, or upset…[as well as recording] student actions such as writing, reading, raising a hand, and sleeping at a desk.”) I also fear that some contemporary movements for “algorithmic accountability” are prone to being coopted by the very corporate and governmental entities they are ostensibly constraining.

However, I think that initiatives for data protection and for more equitable data availability, including nationalization, can be mutually reinforcing. Data protection rules like the GDPR effectively raise the cost of surveillance and algorithmic processing of people. They help re-channel technologies of algorithmic governance toward managing the natural world, rather than managing people. Better management and innovation in sectors like clean energy, transport, and agriculture are more likely to promote prosperity than the zero-sum person-ranking games of platform capitalism and SCSs.

We should also be very cautious about acceding to corporate and governmental demands for more access to data. The relaxation of privacy laws, if it is to be done at all, should provide a point of leverage for civil society to make certain basic demands on the future direction of SCSs, AI development, and other initiatives. The process of data-gathering itself should be respected as skilled and professional labor, and compensated accordingly. “Fairness by design,” “nondiscrimination by design,” “transparency by design,” “due process by design,” and more, must become guiding principles for the collection, analysis, and use of data.

Finally, we must also acknowledge that, sometimes, it may be impossible to “enlist technology in the service of values at all.” A continuous child-face-scanning system in schools, no matter how humanely administered, is oppressive. Nor is the effort to recognize the quintessential facial structure of criminals a project that can be humanized with proper legal values and human rights. Sometimes the best move in a game is not to play.