Use of AI in the Workplace Raises Legal Concerns
Experts recommend validating new technologies to mitigate risk
Artificial intelligence tools flooding the HR technology marketplace can bring great value to organizations but also carry significant legal risks if not managed properly. Any employer that wants to use AI in the workplace must weigh the benefits against the potential legal risks and keep a close eye on emerging laws to stay in compliance, according to two experts speaking at the Society for Human Resource Management Employment Law & Compliance virtual conference.
Eric Dunleavy, director of litigation and employment services at Washington, D.C.-based DCI Consulting Group, and Michelle Duncan, an attorney in the Denver office of Jackson Lewis, said there's been a dramatic increase in clients who want to talk about the use of AI in pre-employment hiring and in HR generally.
"This is not a future-forward issue," Dunleavy said. "AI tools which learn how to perform tasks usually performed by humans are being used today in human resources."
Duncan added that talent acquisition in particular is being driven by AI in the areas of sourcing, recruiting, assessing, hiring and communicating with candidates.
"Employers are looking for increased efficiency in their hiring process, and we're seeing the possibility for quick, measurable results in having AI help screen applicants," she said. "There are AI-related tools which prescreen resumes and applications, evaluate video-recorded interviews, create work simulations and chatbots, and mine social media to determine applicants' digital footprint. The key legal question is whether these tools are being used to make decisions about applicants and employees. If we're using these tools to decide whether or not to move people on through the hiring process or who to promote, we need to pay extra special attention to the impact they have."
Legal Landscape
Duncan said AI technology will be evaluated under the foundational equal employment opportunity laws that HR is already familiar with, like Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Americans with Disabilities Act and equivalent state laws. "There isn't any new federal law to evaluate the use of AI in hiring," she said. "Enforcement agencies and plaintiffs' attorneys will figure out a way to fit these innovative tools under the framework that already exists, and employers will have to make sure they are safeguarded against discrimination claims that flow from the use of AI."
Both experts believe that AI technologies will most likely be challenged under the legal framework of disparate impact.
"AI tools will be evaluated the same way as any other selection procedure, meaning enforcement agencies will look at overall applicant-to-hire for adverse impact, and then drill down into a company's selection procedures," Duncan said. "That means they will ask about various tools you're using and the impact those tools have. If it is determined that the tools do result in adverse impact, the burden is on the employer to properly validate that tool, meaning proving that the tool is job-related and required by business necessity. There's also an obligation to evaluate whether there are alternatives which would lessen the adverse impact."
This is where industrial-organizational psychologists can provide particular value, Dunleavy said, because they are trained in conducting validation research. "It's a best practice to validate all AI technologies, not only to mitigate risk but also to make sure the tool actually does what it's supposed to," he said.
Policymakers and regulators are also concerned about the growing array of new sourcing and recruiting platforms powered by AI and machine learning, as well as algorithm-heavy screening and interview software that analyzes and ranks job applicants.
Illinois is currently the only state with a law that covers this new area of HR technology. The Artificial Intelligence Video Interview Act regulates how employers can use AI to analyze video interviews.
Employers in New York City would be required to inform job applicants if and how they are using AI technology in hiring decisions under a bill being considered by the City Council. In addition, AI technology vendors would have to provide bias audits of their products before selling them and offer to perform ongoing audits after purchase. If passed, the bill would take effect Jan. 1, 2022.
Other cities, states and the federal government have introduced initiatives to study bias in algorithms and the impact of AI on employment decisions, and the Equal Employment Opportunity Commission (EEOC) has announced at least two investigations of cases involving alleged algorithmic bias in recruitment.
"Employers need to be on the forefront of making sure that the tools put into place don't get caught in a new pitfall of regulatory interest in algorithmic fairness," Duncan said.
How HR Can Lessen Risk
Dunleavy said it all comes down to HR conducting its due diligence. "Have some understanding of the basics behind these tools," he said. "What's the purpose of the algorithm? What characteristics are being assessed? How is the algorithm used to make decisions? Who developed the algorithm? Has adverse impact been evaluated? Check into the professional competence of the vendor."
Duncan advised HR to take part in the discussion with vendors and "ask the hard questions about transparency and validity. If they balk at that, that raises a red flag for me."
She added that "unfortunately, some vendors are out there claiming that tools are EEOC- or OFCCP [Office of Federal Contract Compliance Programs]-certified, but that designation does not exist. That's why it's so important to think through a partnership with potential vendors. It will be the employer's responsibility to address a discrimination claim."
"This is an expert-intensive area," she said. "You should expect to find neutral, third-party experts to help you. This is too high-stakes not to get expertise to determine if there's risk in the tool being used."