Are AI recruitment tools ready for the HR department?

Half of job seekers have had experiences of recruitment bias. AI has its shortfalls, but could advances, tests and regulation help solve the problem?
30 October 2019

Can we reduce human bias in recruitment with AI? Source: Shutterstock.

Artificial intelligence (AI) is the simulation of human intelligence processes by machines.

As a result, AI will pick up both the good and the bad of human behavior. Despite efforts to prevent biased AI systems by training algorithms on “good” data sets, chances are the unconscious biases of people and society will still creep into applications.

Human bias is normal and generally unintentional; it affects the way people respond and react to a situation. The danger lies in that bias being intensified through AI systems.

Today, AI is increasingly being used by HR departments to reduce the workload associated with the recruitment process, particularly in the early stages of filtering through candidates.

In general, AI systems are implemented to shrink a pipeline of candidates, so that only a small pool of applications is selected for employers to view. However, some organizations are investing more heavily.

FMCG giant Unilever claims it saved 100,000 hours of human recruitment time within the last year by deploying software to analyze video interviews.

The system scans graduate candidates’ facial expressions, body language, and word choice, and checks them against traits that are considered to be predictive of job success. The software creator says it has resulted in a more ethnically and gender-diverse workforce.

Similar systems have been used by Vodafone, Singapore Airlines and Intel.

‘Garbage in, garbage out’

Such use of AI draws concern from experts, who say that AI may “learn” to seek candidates from a specific demographic because of characteristics shared by the employees currently in the role.

Most research concludes that unconscious bias in hiring disproportionately affects women, individuals from minority groups, and older workers. It is particularly pervasive in conventionally male-dominated sectors such as science and technology.

Less than a decade ago, Amazon’s machine-learning recruiting tool exhibited bias against female candidates for technical job roles, which led the company to abandon its fully automated hiring engine. Even so, Amazon’s attempt to design an effective and unbiased recruitment tool yielded valuable insights.

Despite these early challenges, however, many believe AI still promises to become a more impartial way of filtering candidates than humans. 

Statistics from Indeed revealed that almost half of job seekers said they or someone they knew had been subject to recruitment bias. Almost half (48 percent) of those under 35 believed that automation can make recruitment less biased, and around a third (36 percent) believed removing humans from the process altogether was the solution.

Eliminating human bias with AI

If AI is to become an essential and, most importantly, ethical recruitment tool, we must first recognize its limitations in making sound judgments without oversight. Practitioners can bring forward a set of design principles to keep AI on track.

That means first understanding how human bias manifests itself in the first place. That question is central to investigations such as the one by MeVitae and the Oxford University Physics Department, in which employers’ reactions to different applications are measured with eye-tracking devices, neuroimaging, and Emotiv electroencephalogram (EEG) headsets.

Developers of AI programs can carry out “safety tests” similar to those vehicles must pass before reaching the market. AI recruitment tools can be tested before release to ensure defects are found and fixed.
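One way such a pre-release “safety test” might look in practice is an adverse-impact check, such as the four-fifths rule used in US hiring guidelines: the selection rate for any demographic group should be at least 80 percent of the rate for the most-selected group. The sketch below is illustrative only and is not from any tool mentioned in this article; the group names and screening decisions are hypothetical.

```python
# Minimal sketch of an adverse-impact "safety test" for a screening model,
# based on the four-fifths rule: every group's selection rate should be at
# least 80% of the highest group's selection rate.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 screening decisions."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """Return True if no group's rate falls below threshold * the top rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return all(rate >= threshold * top for rate in rates.values())

# Hypothetical screening decisions (1 = advanced to interview) per group:
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% selected
}
print(passes_four_fifths(outcomes))  # False: 0.30 < 0.8 * 0.80
```

A real test suite would go further, checking rates across many groups and intersections of attributes, but even a simple check like this can flag a defective model before release.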

With a focus on combating societal bias in AI programs generally, researchers at IBM are developing bias-detection technologies. These systems are trained to replicate the debiasing thought processes people use when making decisions, with the aim of helping AI “understand” the concept of human bias and the mechanics of combating it.

As susceptibility to bias is increasingly removed from AI recruitment programs, such tools could begin to unravel the implicit judgments employers tend to make when reviewing resumes, leading to more diverse and skilled workforces.

Fundamentally, however, the findings also challenge people to be more aware of preconceived notions and how to best control their influence when making important decisions.