
You can use Textio to help hiring managers optimise job descriptions. Once the ad is finished, you can use AI to target likely applicants via Facebook, LinkedIn or Ziprecruiter. When responses to the job posting come in, you can run them through a resume scanner such as CVviz, Skillate or Recruitment Smart. And finally, you can use HireVue or Modern Hire to analyse candidates in the actual job interview.
More AI-based solutions are on the market, and more organisations are using them. As organisations come to rely heavily on artificial intelligence, lawmakers are slowly catching up. General artificial intelligence bills or resolutions were introduced in at least 17 US states in 2021, and enacted in Alabama, Colorado, Illinois and Mississippi. But perhaps none is as extensive, or ‘bold’, as the law recently passed by New York City lawmakers.
The AI crackdown in New York
From January 1, 2023, New York City will regulate the use of so-called automated employment decision tools. The law defines these broadly: “The term means any computational process, derived from machine learning, statistical modelling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
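To make that definition concrete: even a very simple scoring script falls within it, as long as its output substantially assists a hiring decision. The Python sketch below is a purely hypothetical illustration; the feature names and weights are invented, not drawn from any real product.

```python
# Hypothetical example: a simple scoring script like this would count as an
# "automated employment decision tool" under the NYC law, because it issues a
# simplified output (a score) that substantially assists a hiring decision.
# Feature names and weights are illustrative only.

def score_candidate(candidate: dict) -> float:
    """Return a shortlisting score for a candidate profile."""
    weights = {
        "years_experience": 0.5,
        "skills_matched": 1.0,
        "assessment_result": 2.0,
    }
    return sum(weights[k] * candidate.get(k, 0) for k in weights)

applicants = [
    {"name": "A", "years_experience": 4, "skills_matched": 6, "assessment_result": 0.8},
    {"name": "B", "years_experience": 7, "skills_matched": 3, "assessment_result": 0.6},
]

# Ranking candidates by the score and using it to decide who gets an interview
# is exactly the "score, classification, or recommendation" the definition covers.
shortlist = sorted(applicants, key=score_candidate, reverse=True)
print([a["name"] for a in shortlist])
```

In other words, coverage hinges on how the output is used, not on how sophisticated the underlying model is.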
Employers and employment agencies can still use AI-based decision tools, but only if the tool has been the subject of a ‘bias audit’ by an independent auditor no more than one year prior to its use. The auditor tests the tool to assess its ‘disparate impact’. In simpler terms: any discrimination against protected groups on the basis of race, age, religion, sex, or national origin.
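In practice, a disparate-impact check of this kind typically boils down to comparing selection rates across demographic groups. The Python sketch below is a minimal, hypothetical illustration of that idea: the data and group labels are made up, and the 0.8 cut-off comes from the EEOC’s well-known ‘four-fifths’ rule of thumb rather than from the New York City law itself.

```python
from collections import Counter

# Hypothetical audit data: each record is (demographic_group, was_selected).
# In a real bias audit the groups would cover the protected categories and the
# data would come from the tool's historical or test decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(g for g, s in decisions if s)
total = Counter(g for g, s in decisions)

# Selection rate per group: share of that group's candidates who were selected.
selection_rates = {g: selected[g] / total[g] for g in total}

# Impact ratio: each group's selection rate relative to the best-treated group.
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    # The 0.8 cut-off mirrors the EEOC "four-fifths" rule of thumb; the NYC law
    # itself does not fix a numeric pass/fail threshold.
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f} -> {flag}")
```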
Under the new law, companies will also be required to notify employees or candidates when such tools are used in the recruitment process. Companies must tell candidates that a tool is being used and allow them to request an alternative selection process. Moreover, companies have to inform candidates which job qualifications and characteristics the tool will use.
Europe’s risk-based approach
Whether we will see similar bias audits in Europe remains to be seen. Europe’s intention seems to be to cut bias in the use of AI off at the source, rather than regulating AI as a whole. “We think that a risk-based approach is the only way to regulate AI”, said Francesca Rossi, IBM AI Ethics Global Leader, during a CEPC Think Tank session. “But you don’t want to regulate AI, you want to regulate the AI systems and applications.”
In April 2021, the EU proposed what was, in essence, a combination of the first-ever legal framework on AI and a coordinated plan to guarantee the safety and fundamental rights of people and businesses. AI systems used in employment, workers’ management and access to self-employment were put on the EU’s ‘high-risk’ list.
That listing means that any software used in a recruitment procedure will be subject to strict obligations before it can be put on the market. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way”, commented Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age.