HireVue Discontinues Facial Analysis Screening
Decision reflects re-examination of AI hiring tools
HireVue, a well-known video interview and assessment vendor, announced in January that it had removed the facial analysis component from its screening assessments amid growing concerns about the transparent and appropriate use of artificial intelligence in employment decisions.
The Salt Lake City-based company said the controversial feature, which used algorithms to infer traits and qualities from job applicants' facial expressions in video interviews, was discontinued in March 2020 after internal research demonstrated that advances in natural language processing had increased the predictive power of language analysis and that "visual analysis no longer significantly added value to the assessments."
HireVue's platform has hosted more than 19 million video interviews for over 700 customers worldwide. The product is most often used as an automated screening tool at the start of the hiring process for high-volume employers. The structured interviews are typically based on a customized job analysis of the role and ask job applicants to respond to a series of questions in a recorded video. HireVue's software assesses the applicant's suitability for a role and is meant to reduce individual human interviewers' intrinsic biases when rating candidates. It analyzes various characteristics of the video interview, including applicants' responses, speech and, until last year, facial expressions.
"Over time the visual components contributed less to the assessment to the point where there was so much public concern to everything related to AI that it wasn't worth the concern it was causing people," said HireVue CEO and Chairman Kevin Parker.
Facial muscle movements, like furrowing one's brow or smiling, were previously evaluated, explained Lindsey Zuloaga, the chief data scientist at HireVue. "Our research shows that a lot of the time what people say and what their face does align closely," she said. "Even though in certain customer-facing roles there was a correlation between facial muscle movements and job performance—where smiling might be important, for example—most of the time that information wasn't necessary."
Zuloaga added that candidates were never evaluated based on "sensitive or personal" visual attributes.
Advocates for more transparency in AI-technology use welcomed the news that HireVue, a leader in the space, had dropped facial analysis.
"Facial analysis has never been an independently and scientifically validated predictor of a person's ability, capacity or success in a role," said Merve Hickok, SHRM-SCP, a lecturer and speaker on AI ethics, bias and governance, former senior HR leader, and founder of Lighthouse Career Consulting in Ann Arbor, Mich.
"Facial expressions are not universal—they can change due to culture, context and disabilities—and they can also be gamed," she said. "So, accuracy in correctly categorizing an expression is problematic to start with, let alone inferring traits from it."
Julia Stoyanovich, an assistant professor of computer science at New York University's Tandon School of Engineering and the founding director of the school's Center for Responsible AI, added that facial recognition technology is being questioned and barred across multiple domains. "I'm glad to see that hiring is one of those because of the potential harms," she said. "We should not be relying on features in screening that have nothing to do with job performance. I'm glad that there's a lot of interest in evaluating these tools and making steps toward oversight of automated decision systems used in hiring. If we can put strong regulation in place, then we will be much better off."
Gaining Attention
Vendors like HireVue have come under scrutiny from an increasingly outspoken group of academics, technology ethicists and regulators raising questions about the growing array of new digital tools employers are using in HR processes.
"Proponents of new technologies assert that digital tools eliminate bias and discrimination by attempting to remove humans from the process, but technology is not developed or used in a vacuum," said Rep. Suzanne Bonamici, D-Ore. "A growing body of evidence suggests that left unchecked, digital tools can absorb and replicate systemic biases that are ingrained in the environment in which they are designed."
Selection assessments that rely on algorithmic decision-making also trouble Jenny Yang, former chair of the U.S. Equal Employment Opportunity Commission. "The complexity and opacity of many algorithmic systems often make it difficult, if not impossible, to understand the reason a selection decision was made," said Yang, who was recently chosen to lead the U.S. Department of Labor's Office of Federal Contract Compliance Programs, which regulates employee selection processes, including assessments, used by federal contractors.
"Many systems operate as a black box, meaning vendors of algorithmic systems do not disclose how inputs lead to a decision," she said.
Parker said that HireVue's algorithms could be made public, but publishing them wouldn't be helpful. "There are thousands of customized models at different companies and they are not interchangeable," he said. "Each algorithm starts with a job analysis for a particular role at a particular company."
He explained that the technology's main value is giving employers the capability to conduct structured interviews quickly on a massive scale. "We have customers interviewing thousands of people a day, across hundreds of retail locations across the U.S. It is important for DE&I [diversity, equity and inclusion] that all of those candidates have the same screening experience and there's a consistent way to evaluate their responses. You can't interview 1,000 people the same way manually. You couldn't evaluate all the interviews consistently, but algorithms can."
Candidates are generally presented to employers in tiered groups based on their likelihood of success in the role, rather than individually scored, he added.
"The idea that we're just watching you talk and the way you talk and that's telling us who you are is wrong," Zuloaga said. "That is the way it is portrayed sometimes and that would be a horrible model if it were true. That's not what we are doing."
HireVue also released the results of an audit conducted by O'Neil Risk Consulting and Algorithmic Auditing, which concluded that HireVue's assessments "work as advertised with regard to fairness and bias issues."
The firm clarified that the audit covered a representative use case, pre-built assessments developed with HireVue's methodology for hiring early-career candidates, and was not a comprehensive audit of the company's algorithms.
"We engaged O'Neil to look at different stakeholder concerns and how to address those concerns," Zuloaga said. "We looked at what we may be doing that is harmful or that people may perceive as harmful. We are continuing to work on improving the speech analysis component and making the whole assessment process clearer to candidates and are currently undergoing additional audits, including one on adverse impact."
Parker said the audit provided feedback that HireVue had areas to improve but that it was also doing many things well. "We are proud of the work we do and want to make sure that nothing we do is creating adverse impacts to any candidate groups," he said.
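For readers unfamiliar with the term, the sketch below illustrates one common way adverse impact is checked, the EEOC's four-fifths rule, which compares selection rates across applicant groups. It is a minimal Python example with hypothetical groups, counts and function names; it does not reflect HireVue's or the auditing firm's actual methodology.

# Illustrative only: a minimal sketch of the "four-fifths rule," a common
# framing for adverse-impact checks. Groups, counts and function names are
# hypothetical and not drawn from HireVue's audit.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screening step."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate.
    A ratio below 0.80 is commonly treated as evidence of adverse impact."""
    return group_rate / reference_rate

if __name__ == "__main__":
    # Hypothetical screening outcomes for two applicant groups.
    rate_a = selection_rate(selected=120, applicants=200)  # 0.60
    rate_b = selection_rate(selected=90, applicants=200)   # 0.45
    ratio = impact_ratio(rate_b, rate_a)                   # 0.75
    print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
    print(f"Impact ratio B/A = {ratio:.2f}; flagged: {ratio < 0.80}")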
The evolving issue has caught the attention of lawmakers at the federal, state and local levels.
Parker said that HireVue is generally very supportive of emerging legislation that encourages transparency and improves outcomes. "We're engaging with legislators in the ongoing process to make sure we are listening to their concerns and making sure they are making decisions based on what is actually happening," he said.
The company has released a set of AI ethical principles and set up an expert advisory board on the issue. HireVue also supports the Illinois Artificial Intelligence Video Interview Act, the first law in the nation to require consent from candidates for analysis of video footage.
Concerns Around Speech Analysis
Experts are still concerned about HireVue's use of speech-based analysis in video interviews.
Zuloaga said that applicants' speech pauses and tonality are small factors in the algorithmic model and are currently being reviewed for bias.
"The assumption that vocal indications, intonations, word choice or word complexity have any credible, causal link with workplace success, to make or inform hiring decisions, is flawed," Hickok said. "Speech recognition software can perform poorly, and natural language processing is not yet capable of understanding nuances in speech or context of a sentence and therefore not able to properly analyze the content and context of an answer."
She added that future developments in natural language processing might allow employers to use AI to better understand the content of text and speech, which could help make connections between experience, transferable skills and job opportunities.
"Speech analysis can be a little misleading," Parker said. "We're primarily assessing the candidate's answer to the questions. We're transcribing the responses to specific questions asked to all candidates at that company into text and understanding the content of the answer. We're not analyzing their accent or their diction."
Positive Outlook
Despite their concerns, experts agree that algorithmic tools have value in HR and should be improved, not eliminated.
"Algorithmic systems could help identify and remove systemic barriers in hiring and employment practices, but to realize this promise we must ensure they are carefully designed to prevent bias and to document and explain decisions necessary to evaluate their reliability and validity," Yang said.
"I think AI technology holds promise for talent acquisition in areas such as workforce analysis, process improvements, candidate communication, and matching skills to jobs or projects," Hickok said. "However, it needs safeguards to be successful and beneficial for candidates and employers alike."
Stoyanovich agreed that going back to screening applications manually "would be ludicrous." She added that "algorithms in talent acquisition can help us articulate and hold processes accountable and hire in accordance with the goals we've formulated. But we shouldn't overestimate or overpromise what they can do."