Why AI in recruitment is seen as 'high risk' (and what are the consequences)

The European Union's recent decision to classify all AI used in recruitment as 'high risk' raises important questions. Should professionals welcome this development with optimism, or is it cause for concern? Either way, the classification demands careful consideration of its implications, both positive and negative, for anyone using AI in recruitment processes.

Nonso Onowu, December 15, 2023 · Average reading time: 4 min

The agreement on artificial intelligence rules in Europe, reached by the European Parliament and the Council after 36 hours of continuous negotiations, received a lot of attention. It was a notable event: governments actively chose to regulate the rapid advancement of AI. A critical topic in these discussions was the use of AI in recruitment processes.

There will be strict requirements for the AI used to select people. 

The result of the agreement is that the use of AI in recruitment is now labeled a 'high risk' AI system. What this means in detail will be worked out in the coming months. However, it is already clear that strict rules will apply, including: risk-mitigation systems, high-quality data sets, logging of activities, detailed documentation, clear information for users, human oversight, and a strong focus on robustness, accuracy, and cybersecurity.

Heavy fines
As is customary with EU regulations, non-compliance carries heavy fines: up to €35 million or 7% of global turnover, or €7.5 million or 1.5% of turnover, depending on the severity of the offense and the size of the organization.

For those in recruitment, the rules specifically cover AI systems used for matching candidates, as well as AI for biometric identification and emotion recognition. This means job candidates must be informed when such technologies are used. Additionally, if a candidate is interacting with a chatbot rather than a real person, this must be clearly disclosed.

Recruitment agencies and employers should bear in mind that they are in the crosshairs of these new regulations.

John Nurthen, Executive Director of Global Research at Staffing Industry Analysts, emphasizes the significance of this step. He warns that the new regulations particularly affect recruitment agencies and employers, who should therefore be extra cautious when using AI technology in any part of the recruitment process.

To minimize risk, they should verify that their suppliers are complying with the law. This includes having control measures in place and ensuring that training data sets are relevant, representative, free of errors, and able to surface biases. The software used should be transparent, with event logs and technical documentation readily available. And AI processes must always remain under human oversight.

AI Pact 

The political agreement must now be formally approved by the European Parliament and the Council, and will enter into force 20 days after its publication in the Official Journal. The AI Act will then apply two years after its entry into force, with the exception of some specific provisions: the bans apply after six months, while the rules for general-purpose AI apply after twelve months.

To bridge the transition period before the regulation becomes generally applicable, the European Commission is launching an 'AI Pact', inviting AI developers from Europe and the rest of the world to voluntarily comply with the key obligations of the AI Act ahead of the legal deadlines.

Companies are absolutely not aware of this yet.
 

The legislation will have significant consequences. 'Employers who use AI tools in the employment context must set up a whole process to check whether that technology meets the requirements,' employment law lawyer and Dentons partner Johanne Boelhouwer told MT/Sprout earlier this year. 'Companies are absolutely not aware of this yet.'

Save logs 

For AI system developers, the new rules include requirements for registration, transparency, a conformity assessment, and certification with the familiar CE marking. For employers, the task is to make sure their software suppliers meet these legal standards.

As Boelhouwer stated, employers should not use systems that haven't passed this assessment, which also means keeping an eye on the system in use. Furthermore, employers will be required to keep the logs these systems generate. As a result, they will face additional administrative duties.

Employers will soon even be obliged to keep the logs that are generated.
 

If it is discovered that such a system confirms or reinforces prejudices, it must be adjusted. Companies will have to monitor this continuously, says Boelhouwer. 'AI systems are self-learning. If you have a vacancy for a taxi driver and initially only middle-aged men respond, the algorithm will from then on favor those types of CVs. This must be reported to the system supplier and corrected.'

Game changer
The new regulation is more than just a set of rules, according to Hubert.ai, a Swedish company that provides interview software. They describe it as a game-changer that directs HR professionals towards a future where AI and responsibility go hand in hand.

They also see it as an opportunity to improve selection methods, moving from gut-feeling decisions to data-driven, responsible, unbiased, and transparent matching. In this view, AI has the potential to make the recruitment process better, fairer, and more efficient. And, as they suggest, the AI Act reinforces this potential. 

The recruitment landscape is undergoing a fundamental shift as a result of the upcoming EU AI Act.

The recruitment landscape is undergoing a fundamental shift as a result of the upcoming EU AI Act. The regulation emphasizes transparency, accountability, and fairness in the use of AI, steering HR practices towards unbiased candidate selection and a better experience for all. This is also in line with SHRM research, which previously showed that almost 3 out of 5 organizations using AI in their recruitment report having since hired better people.

$365,000 settlement 

A recent case in the United States highlighted the risks of using AI systems in recruitment. A tutoring company used an algorithm that reportedly automatically rejected female applicants over 55 years old and male applicants over 60 years old. The Equal Employment Opportunity Commission (EEOC) sued on behalf of the nearly 200 applicants who were rejected. The company agreed to settle the lawsuit for $365,000, which amounts to almost $2,000 per applicant. This situation raises questions about how different the selection process might have been if a human recruiter had been used instead of an algorithm. 



If you liked this article and want more insights on attracting and retaining the best talent in Europe, subscribe to ToTalent’s weekly newsletter. You’ll get exclusive content, events, and expert insights.

 

 

 
