Tackling bias (AI and human) in the recruitment process

Artificial intelligence is a godsend for HR, coming as it does with the power to massively reduce the time spent on repetitive administrative tasks. Its potential in the hiring process is also enormous, but that potential comes with a note of caution.

The fast-increasing application of AI in the recruitment process has stirred a debate about bias and fairness. Part of the problem is that human decision-making in this domain can also be flawed, shaped by individual (and societal) biases that are often unconscious.

Which raises the question: will hiring decisions made by AI be less biased than human ones, or will AI exacerbate the problem?

According to a report from McKinsey & Company: “AI can help reduce bias, but it can also bake in and scale bias.”

Bias in the human decision-making process is well documented. As a case in point, employers may weigh candidates’ credit histories in ways that disadvantage minority groups, even though no link between credit history and on-the-job behaviour has been established.

Human decisions are also difficult to analyse: people may lie about the factors they considered, or may not understand the factors that influenced their thinking, leaving room for unconscious bias.

“In many cases, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used,” the report stated.

Conversely, there is ample evidence that AI models can embed human biases and deploy them at scale. It stands to reason that the (human-created) underlying data, rather than the algorithm itself, is often the main source of the issue.
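
To see how that plays out, consider this small, purely illustrative Python sketch (the candidate features, figures, and use of scikit-learn are our own assumptions, not taken from the report). A model trained on past hiring decisions that favoured one group will happily learn a proxy variable for that group, simply because doing so improves its predictive accuracy on the biased data:

```python
# Purely illustrative: all features and figures below are invented.
# A model trained on biased historical hiring decisions learns to lean
# on any variable that improves predictive accuracy -- including a
# proxy variable that encodes the original bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1_000

# Hypothetical candidate features: a test score, years of experience,
# and a postcode group that could act as a proxy for a protected attribute.
test_score = rng.normal(60, 10, n)
experience = rng.normal(5, 2, n)
postcode_group = rng.integers(0, 2, n)

# Suppose past (human) hiring decisions favoured one postcode group:
# the labels now carry that bias.
hired = (0.05 * test_score + 0.3 * experience
         + 1.5 * postcode_group + rng.normal(0, 1, n) > 6).astype(int)

X = np.column_stack([test_score, experience, postcode_group])
model = LogisticRegression(max_iter=1_000).fit(X, hired)

# The model dutifully assigns weight to the proxy variable, because it
# improves accuracy on the biased training data.
for name, coef in zip(["test_score", "experience", "postcode_group"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```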

Data generated by humans can also create a feedback loop that leads to bias. For example, research by Latanya Sweeney (a professor of Government and Technology at Harvard) on the racial differences in online ad targeting found that searches for African-American-identifying names tended to result in more ads featuring the word “arrest” than searches for white-identifying names.

All of which means human judgment is still essential to ensure that AI-supported decision-making in recruitment is fair.

“While definitions and statistical measures of fairness are certainly helpful, they cannot consider the nuances of the social contexts into which an AI system is deployed, nor the potential issues surrounding how the data was collected,” the report stated.

“Organisations will need to stay up to date to see how and where AI can improve fairness – and where AI systems have struggled.”
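
To make “statistical measures of fairness” concrete: one widely used measure in recruitment is the disparate impact ratio, which compares selection rates across applicant groups and is often checked against the “four-fifths” rule of thumb. Here is a minimal, hypothetical sketch (all figures invented):

```python
# Hypothetical sketch of one statistical fairness measure: the
# "disparate impact" ratio of selection rates between applicant groups.
# All figures are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screening stage."""
    return selected / applicants

def disparate_impact(rate_a: float, rate_b: float) -> float:
    """Lower selection rate divided by the higher one.
    Results below 0.8 would fail the common 'four-fifths' rule of thumb."""
    low, high = sorted([rate_a, rate_b])
    return low / high

# Invented screening outcomes for two applicant groups.
rate_a = selection_rate(selected=120, applicants=200)  # 0.60
rate_b = selection_rate(selected=80, applicants=200)   # 0.40

print(f"Disparate impact ratio: {disparate_impact(rate_a, rate_b):.2f}")
# -> 0.67, below the 0.8 threshold, so the screen warrants scrutiny
```

A ratio like this can flag a skewed outcome, but it cannot explain the context behind it, which is precisely the report’s point.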

With this in mind, here’s a graphic showing six ways that bias in AI can be reduced:

[Graphic: six ways to reduce bias in AI]
