Tackling bias (AI and human) in the recruitment process



Artificial intelligence is a godsend for HR, coming as it does with the power to massively reduce the time spent on repetitive, administrative tasks. Its potential in the hiring process is also enormous – but that brings with it a note of caution too.

The fast-increasing application of AI in the recruitment process has stirred a debate about bias and fairness. Part of the problem is that human decision-making in this domain can also be flawed, shaped by individual (and societal) biases that are often unconscious.

Which raises the question: will decisions made by AI when hiring be less biased than human ones, or will AI exacerbate the problem?

According to a report from McKinsey & Company: “AI can help reduce bias, but it can also bake in and scale bias.”

Bias in the human decision-making process is well documented. As a case in point, employers may consider candidates’ credit histories in ways that could be detrimental to minority groups, despite the fact that a link between credit history and on-the-job behaviour has not been established.

Human decisions are also difficult to analyse – people may lie about the factors they considered, or may not understand the factors that influenced their thinking, introducing the possibility of unconscious bias.

“In many cases, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used,” the report stated.

Conversely, there is ample evidence to suggest that AI models can embed human biases and deploy them at scale. It stands to reason that the (human-created) underlying data, rather than the algorithm itself, is often the main source of the issue.
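To make this concrete, here is a minimal sketch – using entirely synthetic, hypothetical data and a deliberately naive "model" – of how bias baked into historical hiring records is simply reproduced, and thus scaled, by a system that learns from them:

```python
# A sketch of "baking in" bias: a naive model that memorises each
# group's historical hire rate will reproduce the disparity at scale.
# The data here is synthetic and purely illustrative.
import random

random.seed(0)

# Hypothetical historical records: group "A" was hired ~60% of the
# time, group "B" only ~30% -- a human bias encoded in the labels.
history = [("A", random.random() < 0.6) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def train(records):
    """'Train' by memorising each group's past hire rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The model scores new candidates purely from past outcomes, so the
# original human bias is reproduced rather than corrected.
print(f"Predicted hire rate for A: {model['A']:.2f}")
print(f"Predicted hire rate for B: {model['B']:.2f}")
```

A real recruitment model is far more sophisticated than this, but the failure mode is the same: if the training labels reflect biased past decisions, optimising predictive accuracy against those labels preserves the bias.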

Data generated by humans can also create a feedback loop that leads to bias. For example, research by Latanya Sweeney (a professor of Government and Technology at Harvard) on the racial differences in online ad targeting found that searches for African-American-identifying names tended to result in more ads featuring the word “arrest” than searches for white-identifying names.

This is why it is essential to include human judgment, to make sure AI-supported decision-making in recruitment is fair.

“While definitions and statistical measures of fairness are certainly helpful, they cannot consider the nuances of the social contexts into which an AI system is deployed, nor the potential issues surrounding how the data was collected,” the report stated.

“Organisations will need to stay up to date to see how and where AI can improve fairness – and where AI systems have struggled.”

With this in mind, here’s a graphic that shows six ways that bias in AI can be reduced:
