Biases in recruitment and hiring are a real problem.
In addition to being just plain unfair, hiring biases can ultimately impact a business’s bottom line. Research from McKinsey shows that companies with diverse workforces are more likely to outperform those with homogeneous ones.
Although it’s not yet a perfect solution, artificial intelligence (AI) is proving to be a valuable tool for reducing bias in hiring. Here, we’ll take a look at how unconscious biases affect the hiring process, and the efficacy of AI in reducing hiring bias.
Is bias really a problem in hiring?
Yup. And you don’t have to look far for examples.
In 2017, Palantir paid $1.7 million to settle a lawsuit from the U.S. Department of Labor, which accused Palantir of disproportionately turning down qualified Asian candidates who applied for certain engineering positions.
Although Palantir disagreed with the allegations and denied knowingly discriminating, the numbers showed otherwise: For one software engineer job, Palantir hired 14 non-Asian engineers and 11 Asian engineers, even though 85% of the 1,160 applicants were Asian.
The Palantir case is a prime example of unconscious bias in hiring, and it’s not an isolated incident. Research from the University of Toronto shows that candidates with Asian names are 28% less likely to get an interview than equally qualified candidates with Anglo-Canadian names.
In addition, a 2016 study found that people with resumes containing minority racial cues — such as a distinctively African-American or Asian name — received 30 to 50% fewer callbacks from employers than those who had equivalent resumes without racial cues. When these candidates “whitened” their resumes — concealing or downplaying racial cues — they were significantly more likely to receive a callback, even though their qualifications were unchanged.
Can’t you just train hiring managers to be less biased?
Maybe, but hiring bias isn’t only a people problem. The structure of the hiring process itself can screen out qualified candidates.
For example, candidates who are referred by current employees are more likely to be hired than non-referred candidates. But referrals often result in candidates who are very similar to those who referred them, effectively boxing out candidates who don’t already have an in at the company.
The college-to-job pipeline has inherent bias as well: Overburdened hiring managers who don’t have time to sort through the pile of job applications often sort based on the college a candidate attended.
This results in preference for candidates who graduated from traditionally “elite” or “prestigious” colleges…which, of course, can have their own biases in admissions. Even worse, some companies recruit directly from elite universities, actively homogenizing their workforces.
Can AI reduce hiring bias?
AI can help reduce hiring bias, pushing the hiring and recruitment process toward a fairer, more diverse future.
AI can be integrated into the hiring process in many ways. AI platforms can help sort resumes by desired qualifications while ignoring demographic data. Conversational AI can be used to collect additional information from candidates. AI tools can also streamline the day-to-day work of a hiring department, freeing up more time for fair consideration of each candidate.
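To make the "ignoring demographic data" idea concrete, here is a minimal sketch of blind resume screening. It is not any vendor's actual implementation; the field names, the `redact` step, and the simple skills-overlap score are all assumptions for illustration:

```python
# Hypothetical blind-screening sketch: demographic fields are removed
# from a resume record before a simple skills-overlap score is computed.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "photo_url"}  # assumed field names

def redact(resume: dict) -> dict:
    """Return a copy of the resume with demographic fields removed."""
    return {k: v for k, v in resume.items() if k not in DEMOGRAPHIC_FIELDS}

def skill_score(resume: dict, required_skills: set) -> float:
    """Fraction of required skills present on the (redacted) resume."""
    skills = set(resume.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

resume = {
    "name": "Jane Doe",
    "age": 34,
    "skills": ["python", "sql", "etl"],
}
blind = redact(resume)
print(skill_score(blind, {"python", "sql", "spark"}))  # matches 2 of 3 required skills
```

The key design point is ordering: redaction happens before scoring, so the scoring function never has access to the demographic fields at all.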
Most promisingly, AI can be used to examine big data sets and identify common traits of successful candidates, giving hiring managers a more reliable heuristic than a candidate’s name, education, or even work history.
But an AI tool is only as effective as the data that goes into it. If the creators of AI solutions aren’t careful, bias can sneak its way into AI-supported decisions.
In other words, AI is not yet a silver bullet for eliminating bias in hiring, but we’re getting there.
Will hiring be more fair in the future? Can AI help? We believe the answer is yes — if companies know what’s good for them.