The Algorithmic Capture of Employment and the Tertius Bifrons


Ifeoma Ajunwa (@iajunwa) is an Associate Professor of Labor and Employment Law in the Law, Labor Relations, and History Department of Cornell University's Industrial and Labor Relations School, an Associated Faculty Member at Cornell Law School, and a Faculty Associate at the Berkman Klein Center at Harvard Law School.

We live in an age where computerized algorithms have become the gatekeepers to our life experience. Much of our daily existence is now curated by algorithmic systems: from what news we see on our social media feeds to what jobs are offered to us. Even as the idée reçue that algorithms are fair because they “rate all individuals in the same way, thus averting discrimination” becomes increasingly challenged, the role of algorithms in employment decisions remains underexplored.

In my recently published article, The Paradox of Automation as Anti-Bias Intervention, I describe what I call the “algorithmic capture” of employment. “Algorithmic capture” refers to the phenomenon in which “the belief that algorithms are more efficient and fairer” is coupled with the “abdication of human accountability for undesirable outcomes” resulting from the use of automated systems.

Automated hiring systems are paradoxical. Although the stated reason for adopting them is often to curtail human bias, they frequently end up exacerbating the very biases they are meant to correct. Human managers certainly have biases, but the impact of one biased human manager is quite limited compared to the wide-ranging impact of algorithms used as a culling mechanism for thousands of resumes. Thus, the new socio-technical phenomenon of concern here is that, given the “volume, velocity, and variety” of data used in automated hiring, any bias introduced into the system will be magnified and stands to be endlessly replicated in the absence of audits. Furthermore, the use of automated decision-making without audits can serve to obfuscate bias; this, combined with an ideology of algorithmic impartiality, makes algorithmic bias all the more insidious. As I have written elsewhere, employment discrimination is a many-headed hydra. The law must remain vigilant and should evolve to fight discrimination each time it rears its head, even when it comes cloaked in software code.

In the past, much hiring was done by word of mouth (which is not without its own discriminatory potential), but an unknown job seeker could still make a good impression by meeting the manager in person and pleading their case. Now, job seekers must run the gauntlet of automated hiring algorithms, and most are dismissed before they can have any audience with human managers. Nearly all Global 500 companies employ e-recruitment and algorithmic screening tools for their hiring needs. An informal survey of the top twenty private employers found that most had their job applications culled by automated hiring platforms. This algorithmic capture of employment merits legal attention.

To explore some of the problems that algorithmic bias presents, I will start with an old problem that is now amplified by automated hiring. Then I will discuss how the new relationship created by automated hiring systems acting as brokers necessitates novel legal doctrine.

In the American legal tradition, employers hold wide discretion in selecting employees and often rely on their discernment of the “cultural fit” of job candidates. Several legal scholars have already noted the discriminatory potential of using cultural fit as a hiring criterion. As Professor Charles Lawrence has noted, where an employer perceives the white candidate as “more articulate,” “more collegial,” “more thoughtful,” or “more charismatic,” said employer is “unaware of the learned stereotype that influenced his decision.” Furthermore, cultural fit is an amorphous concept that could include whatever criteria the employer chooses. The human brain can juggle different definitions of cultural fit and is adaptable when confronted with new pieces of information or with circumstances where a particular criterion is irrelevant. Unfortunately, the same is not true for automated hiring systems. The use of automated hiring systems exacerbates the discriminatory potential of “cultural fit,” as nebulous concepts of cultural fit are then treated as hard-and-fast rules to be applied in all circumstances. Cultural fit becomes operationalized as the bullet points on the resumes of top performers. Such blind application of the rules is what results in a scenario where an audit of an automated hiring system reveals that the name “Jared” and whether the candidate played high school lacrosse are considered the two most important factors for job performance. Given that the name Jared is predominantly associated with white males, this meant that the automated hiring system was practicing racial and gender discrimination through proxy. Thus, I argue for a legal re-evaluation of the use of cultural fit, given that it allows for the introduction of bias, and particularly given that automated hiring systems may make this problem worse.
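The mechanics of proxy discrimination described above can be illustrated with a small, entirely hypothetical simulation. In the sketch below, the only feature that truly drives performance is skill, but the historical “top performer” labels encode a past preference for one demographic group, and an incidental trait (a synthetic `played_lacrosse` flag, echoing the audit described above) happens to correlate with that group. A naive screening rule trained to imitate past top performers ends up rewarding the incidental trait. All data, names, and numbers are invented for illustration.

```python
import random

random.seed(0)

# Synthetic pool of past candidates. Skill is the only factor that
# actually drives performance; "played_lacrosse" is an incidental trait
# correlated with a demographic group, not with performance.
def make_candidate():
    group_a = random.random() < 0.5  # protected attribute, hidden from the model
    skill = random.gauss(0, 1)
    # The incidental trait is far more common in group A.
    lacrosse = random.random() < (0.7 if group_a else 0.1)
    # Historical managers favored group A, so the "top performer" label
    # reflects past bias as well as skill.
    top = skill + (1.0 if group_a else 0.0) + random.gauss(0, 0.5) > 1.0
    return {"skill": skill, "lacrosse": lacrosse, "top": top}

data = [make_candidate() for _ in range(10_000)]

# Naive screening rule: score a feature by how much more common it is
# among labeled "top performers" than among everyone else.
def feature_lift(feature):
    tops = [c for c in data if c["top"]]
    rest = [c for c in data if not c["top"]]
    rate = lambda group: sum(c[feature] for c in group) / len(group)
    return rate(tops) - rate(rest)

# The trait gets a substantial positive lift even though it has no
# causal link to performance: the model has learned a demographic proxy.
print("lift of 'played_lacrosse':", round(feature_lift("lacrosse"), 3))
```

Because the protected attribute never appears as an input, the resulting rule would look facially neutral, which is precisely why audits, rather than inspection of the feature list alone, are needed to surface this kind of bias.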

The new relationship between automated hiring systems and job applicants opens up the possibility that employment law, which has long focused on the liability of the employer for employment discrimination, should begin to hold the makers of automated hiring systems legally accountable. To illustrate the nature of this three-sided relationship and the problems it presents, I argue that automated hiring systems function as tertius bifrons, extending the work of sociologist Georg Simmel. Simmel conceptualized brokers as mediating the relationship, including the flow of information, between two groups. In doing so, they could act as either tertius iungens or tertius gaudens. The tertius iungens (“the third who joins”) works to foster collaboration between two parties, while the tertius gaudens (“the third who enjoys”) encourages the strategic separation of parties so as to reap benefits from its role as broker.

Here, I offer the theoretical insight that automated hiring platforms, which are customizable at the behest of the employer, but never that of the applicant, belong to a new category of brokers that I term the tertius bifrons (that is, “the two-faced third”). The automated hiring system presents itself as a convenience for the job seeker, and this suggested ease of application lulls the job seeker into a false security that the platform is working on her behalf. The truth, however, is that automated hiring systems primarily serve as culling systems, and thus work for the benefit of the employer alone. In fact, the advertisements for the first automated hiring system exhorted the employer to use it to “clone your best, most reliable people.” Furthermore, the automated hiring system is not merely an information conduit; rather, automated hiring systems enact platform authoritarianism through the design of their user interfaces and the information they demand from job seekers. In introducing the tertius bifrons concept, I seek to highlight that automated hiring platforms represent a type of broker that works both in its own interest (to maintain its coordinative role) and in the interest of one of the parties to the triad (the employer), while maintaining the appearance of working for both parties (employer and job applicant).

What, then, is the role of the law in ensuring that the algorithmic capture of hiring does not further widen the chasm of racial inequality? First, employment law should reconsider the allowed use of cultural fit by examining how such nebulous concepts may be concretized by automated hiring systems in ways that serve to further exclude racial minorities. Second, as I detail in An Auditing Imperative for Automated Hiring, the law should consider direct regulation of automated hiring systems through mandated audits, with accompanying record-keeping and data-retention measures. It is worth noting that, thus far, liability for employment discrimination has rested solely on the employer; given the tertius bifrons concept, I argue that the law should take up an examination of how the makers of automated hiring systems could also share some liability. Fair competition in the labor market is one of the central tenets of capitalism, yet racial capitalism would see racial minorities hamstrung in that competition. Even as employment discrimination continues to morph with the introduction of new technologies, so, too, should the law change to meet it head on.
