From unlocking your phone to clearing automated passport control, you’ve likely heard of or seen it: facial recognition software. In recruitment, it has been used to analyse candidates during video interviews. The main player in this space? HireVue, a digital recruiting company whose enterprise-level hiring platform includes assessments, automated chatbots, scheduling and video interviews.
The lawsuit comes at a time when the implementation of AI has skyrocketed. According to a survey by PwC, 86% of U.S. corporations say that AI will be a ‘mainstream technology’ at their company in 2021. In the wake of the COVID-19 crisis, 52% of survey respondents say they have accelerated their AI adoption plans.
Illinois versus AI
So, what has happened? On January 27, 2022, Kristen Deyerler, a US resident who applied for a job in 2019, filed a class action complaint in Illinois state court against HireVue, alleging violations of the Biometric Information Privacy Act (BIPA), passed in 2008. In 2019, Illinois also became the first state to regulate the use of algorithms and other forms of AI to analyse applicants during video interviews, with the introduction of the Artificial Intelligence Video Interview Act.
The complaint alleges HireVue failed to provide a publicly available retention schedule and/or guidelines for permanently destroying the biometrics.
Now, years after that first act, Illinois is once more front and centre of the AI crackdown. The complaint is threefold. It claims the company illegally collected facial data for analysis without written permission when the plaintiff interviewed for a job through HireVue’s online platform. It also accuses the company of profiting from her and other potential class members’ facial biometric identifiers. Finally, it alleges HireVue failed to provide a publicly available retention schedule and/or guidelines for permanently destroying the biometrics.
Chad and Cheese co-host Chad Sowash pulled no punches when describing the HireVue conflict. “I find this completely repulsive to think that an algorithm could standardise the entire human race through facial movements”, he said. “This to me is, again, an ultimate bias machine. Federal government really needs to step in. Because if you’re not a resident of Illinois, I don’t believe you’re going to be able to actually partake in this suit.”
“So facial recognition and analysing facial responses in the hiring process, is officially dead to me now.”
Joel Cheesman doubled down on that notion. “So facial recognition and analysing facial responses in the hiring process, is officially dead to me now”, he said. “No matter what happens with this case, which will likely be settled out of court, more states are going to introduce laws like Illinois. If not federal legislation. No vendor is going to want this smoke and no company is going to use such services for risks of lawsuits.”
The criticism, however, may not stop at ‘repulsive’. According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at New York University (NYU) and a senior advisor to the Federal Trade Commission on AI, the technology may even be pseudoscience. Whittaker has long been outspoken about the ethical impact of AI on society. Over the years, she has explained that facial analyses may claim to measure elements like ‘personality’ and ‘worker engagement’, but that they are not backed by robust scientific evidence.
“The technology reflects discredited pseudoscientific practices from the past.”
“The assertion that it’s possible to determine a person’s interior characteristics based on their facial expression through affect recognition is not backed by scientific consensus”, she wrote in 2020. “The technology reflects discredited pseudoscientific practices from the past, including physiognomy, phrenology, and race science, which all interpreted physical differences between people as signs of their inner worth and used this to justify social inequality.”