The promise of A.I. in recruitment is significant. The technology could reduce biases, provide objective assessments, and speed up many parts of the recruitment process. But does it live up to that promise? Recent research from Monash Business School paints a more complex picture. In the study, over 700 applicants for a web designer position were told whether their application would be assessed by a human or by A.I. The results were striking: women showed a clear preference for A.I. in the recruitment process, while men preferred a human evaluator.
Women are more concerned that a human recruiter might unjustly disadvantage them.
“Women were significantly more likely to complete their applications if they knew that A.I. would be involved, while men were less likely to apply,” said Professor Andreas Leibbrandt. This is striking: women appear to assume that an algorithm is a fairer evaluator than a human. Why? The answer likely lies in experiences with human bias, according to Leibbrandt. Women fear that a human recruiter may unjustly disadvantage them, for example because of gender bias. A.I. then appears as a neutral, or at least a more neutral, alternative.
Unjust Disadvantage
A second experiment focused on the behavior of 500 tech recruiters. They were presented with applications in which the candidate’s gender was sometimes known and sometimes hidden; in some cases they were also given the evaluation from an A.I. system as a reference. What did the study reveal? When the gender was hidden, or when the recruiters only had the A.I. score, gender bias largely disappeared. “When recruiters knew the gender of the applicant, they consistently rated women lower than men. However, this bias completely vanished when the applicant’s gender was concealed,” said Leibbrandt.
‘The gender bias completely disappeared when the applicant’s gender was hidden.’
When recruiters had access to both the A.I. score and the applicant’s gender, there was also no difference in scoring between men and women. “These findings show that recruiters use A.I. as a tool and anchor—it helps remove gender bias in assessments.” From this, one might conclude: A.I. promotes objectivity. However, we must be cautious with this conclusion, Leibbrandt himself warns. The Monash Business School research mainly focused on the interaction between humans and machines, not on the algorithms behind the A.I. itself.
A.I. may appear neutral on paper, but in practice this is not always the case. Take, for example, the finding that algorithms rate CVs lower if they contain a two-year gap for parental leave. More worrying still, this holds even with resume blinding, where such personal information has been removed: the algorithm still picks up on subtle cues that it associates with gender. These ‘hidden’ biases arise because many algorithms are trained on historical data. An A.I. system trained on data from a male-dominated sector implicitly learns to associate male traits with ‘success.’ As a result, subtle forms of discrimination can still seep through, even without explicit terms like ‘man’ or ‘woman’ appearing in the dataset.
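To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not code from the Monash study, and the data is entirely synthetic: past hiring decisions are generated with a built-in penalty for women, gender is then removed before training, but a gender-correlated proxy (a hypothetical “career gap” flag) is left in. The trained model ends up penalizing that proxy, reproducing the old bias.

# Illustrative sketch, not the Monash study's code: a model trained on biased
# historical decisions can penalize a gender-correlated proxy feature even
# when gender itself has been removed from the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: skill is gender-neutral, but career gaps are
# more common among women, and past (human) hiring decisions were biased.
is_woman = rng.random(n) < 0.5
skill = rng.normal(0, 1, n)
career_gap = (rng.random(n) < np.where(is_woman, 0.4, 0.1)).astype(float)

# Historical hiring decision: driven by skill, plus a penalty applied to women.
logit = 1.5 * skill - 1.0 * is_woman
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on "blinded" features: gender is excluded, but the proxy remains.
X = np.column_stack([skill, career_gap])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:      %+.2f" % model.coef_[0][0])
print("coefficient on career gap: %+.2f" % model.coef_[0][1])
# The career-gap coefficient comes out negative: the model has learned to
# penalize a feature that mostly tracks gender, even though the gap itself
# played no role in the original hiring outcomes.

The point of the sketch is not the specific numbers but the pattern: blinding the input does not blind the model if the training history itself was biased and correlated signals remain.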
More Diversity in the Tech Industry
The solution to this problem lies not only with the algorithm itself but also with the people who develop it. The tech industry faces a significant gender gap: only 20% of technical roles at A.I. companies worldwide are held by women. More diversity in A.I. teams improves the representativeness of training data and brings more nuance to the development of algorithms. Having more women in the tech industry therefore helps produce more inclusive and fairer A.I. systems. As long as that is not the case, bias and blind spots remain a significant risk.
More women in the tech industry help develop fairer A.I. systems.
As a recruiter or HR professional, you will inevitably deal with these technologies. So how do you use A.I. safely and responsibly? The first step towards responsible use of A.I. in recruitment is to build a foundational understanding of how the technology works and where its limitations lie. You can then use A.I. as a tool, but always with a critical eye: the same critical eye you are already accustomed to applying when evaluating candidates.