Whether it’s the recruitment and selection of candidates or the re-skilling process towards a possible promotion, AI is now present across the entire employee lifecycle. As always, the proof lies in the numbers. According to a survey by PwC, 86% of U.S. corporations say that AI will be a ‘mainstream technology’ at their company in 2021. Moreover, in the wake of the COVID-19 crisis, 52% of PwC’s survey respondents say they have accelerated their AI adoption plans.
71% of the European companies consider AI an ‘important topic on the executive management level’
Meanwhile, similar signals are coming out of Europe. According to a survey by EY and Microsoft, 71% of European companies consider AI an ‘important topic on the executive management level’, while 57% of European companies expect AI to have a high or very high impact on business areas that are “entirely unknown to the company today”.
AI has skyrocketed, but regulations remain grounded
As the implementation of AI has skyrocketed, the number of laws and regulations has remained grounded. In 2020, Illinois became the first-ever state to regulate the increasing use of artificial intelligence in recruitment practices. The Illinois Artificial Intelligence Video Interview Act (AIVI) cracked down on the use of AI that analyses video interviews to determine whether applicants exhibit the characteristics of ‘successful’ candidates and that may provide hiring recommendations.
The AIVI act itself was somewhat vague. It left several terms, including notably ‘artificial intelligence’, undefined.
Under the act, the employer would have to disclose, explain and ask for consent, while the video had to remain confidential and be destroyed within 30 days. While Illinois set the bar in the US, the act itself was somewhat vague. It left several terms, including notably ‘artificial intelligence’, undefined.
Europe puts recruitment AI on the ‘high-risk’ list
For Europe, legislation may soon be on its way. In April 2021, the European Union proposed what was in essence a combination of the first-ever legal framework on AI and a coordinated plan that guarantees the safety and fundamental rights of people and businesses. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way”, said Margrethe Vestager, Executive Vice-President for A Europe fit for the Digital Age.
Any software that is used in a European recruitment procedure will be subject to strict obligations before they can be put on the market.
Vestager described the plan as ‘future proof and innovation-friendly’, but warned that rules will intervene ‘where strictly needed’. AI systems used in employment, worker management and access to self-employment were put on the EU’s high-risk list. That category entails that any software used in a recruitment procedure will be subject to strict obligations before it can be put on the market.
‘It’s happening and needs to be addressed now’
One man keen on changing the AI regulation landscape is Keith Sonderling, commissioner of the U.S. Equal Employment Opportunity Commission (EEOC). “Whether it’s pay equity, age or pregnancy discrimination or the #MeToo movement — the EEOC gets to the core of all of it”, Sonderling told Chad Sowash and Joel Cheesman on the Chad & Cheese Podcast.
“There are so many benefits to using technology in the workplace that I want to see it flourish.”
“It’s not uncommon for commissioners to pick a specific topic and really champion it”, he said. “And for me, that is artificial intelligence in the workplace. There are so many benefits to using technology in the workplace that I want to see it flourish. And not get subject to certain government regulations that are not going to make it work because we’re already too late. It’s happening and it needs to be addressed now.”
‘Technology generally gets very far ahead of the government’
AI has been subject to its fair share of incidents and scrutiny. Amazon’s well-documented mishap in 2018 led to its internally developed hiring tool being scrapped, because it actually discriminated against women. Its developers trained it on the company’s 10-year history of resumes, which were predominantly male. That led to the software effectively teaching itself that those (male) candidates were preferable. The tool was never used, but it illustrated the clear and obvious errors an AI system can be subject to.
“[Part of it is to] not put on burdensome regulations that take it down or subject it to massive federal investigations or class action lawsuits.”
The main pitfall, according to Sonderling, is that AI technologies are being developed so quickly that it may come back to bite them in the butt. “Technology generally gets very far ahead of the government”, Sonderling said. “It’s a time where we can all really work together. Everyone from employee groups, to employers buying and using the software, to developers. To create a standard that actually allows these products and this AI to help diversify the workforce. To help get the best candidates. But also not put on burdensome regulations that take it down. Or subject it to massive federal investigations or class action lawsuits.”
Whose fault is it anyway?
Although Amazon didn’t design its model with misogynistic intent, it was the end result. It is part of a larger issue, according to Chad Sowash: a common misperception of what AI actually is. “Most people misunderstand that the decision AI is making doesn’t stem from AI itself”, he said. “Rather it stems from human decisions. Humans are biased, always have been, always will be.”
“The algorithm spits out: ‘Your ideal applicant is named Jared who played high school lacrosse.’ Whose fault is that?”
That sentiment was echoed by Sonderling, who gave another example of how AI endeavours can go wrong. “One firm said, go find me the ideal applicant. I want to diversify my workforce, here’s my top performer. And the algorithm spits out: ‘Your ideal applicant is named Jared who played high school lacrosse.’ Thank you. I mean what does that say? What does that do? But whose fault is that? The biased inputs give you the biased outputs.”
“Whether they bought the AI to really diversify their workforce and eliminate bias, or help employees up-skill and re-skill, if it has that discriminatory output, the employer’s on the hook.”
Despite its uniform guidelines still stemming from 1960, Sonderling and the EEOC will continue to monitor these cases. But one thing is certain, according to Sonderling: the employers using these tools are liable. “Whether they bought the AI to really diversify their workforce and eliminate bias, or help employees up-skill and re-skill, if it has that discriminatory output, the employer’s on the hook.”