An expert panel discussed the shifting regulatory landscape around artificial intelligence in employment and some basic ways to mitigate risk during the SHRM Workplace Law Forum 2024 on Nov. 20 in Washington, D.C.
While federal action has mostly been at an awareness-raising and guidance level, state legislatures have begun enacting laws aimed at curbing AI-driven discrimination, noted Rachel See, senior counsel at Seyfarth in Washington, D.C.
See said the U.S. Equal Employment Opportunity Commission (EEOC) has declared AI enforcement a strategic priority, has been receiving discrimination charges related to AI and other workplace technologies, and is interested in investigating and potentially litigating such cases, though it has done little on that front so far.
The EEOC has warned about the implications of AI and algorithmic bias in employment decisions and in 2023 issued technical guidance for employers on measuring adverse impact when employment selection tools use AI. The agency also filed an amicus brief supporting the plaintiff in a 2023 lawsuit alleging that HR software vendor Workday is directly liable for unlawful employment discrimination caused by an employer's use of Workday's AI-powered hiring technology.
Some AI policy experts predicted the incoming Trump administration would rescind President Joe Biden's October 2023 executive order on AI and replace it with a more hands-off approach intended to spur innovation.
“Federal legislation regulating AI is not expected anytime soon, and the Trump administration has signaled more interest in prioritizing AI research and funding, and not as much on regulating AI in employment,” See said. “In the absence of federal movement, that really motivates state legislatures to do something.”
As more cities and states introduce significant compliance burdens and legal risks for employers, a “very complicated compliance regime” will emerge, she predicted.
“We are at one of the scariest times for all of us operating in the AI space, because there is more unknown than there is known,” said Mike Childers, senior corporate counsel at Amazon. “We are all trying to catch up on understanding the tech while also thinking about the new requirements that are coming with these laws. Unless your company is willing to engage in digital isolationism, you will be subject to laws outside of where you physically exist.”
Jone McFadden Papinchock, director of litigation services at DCI Consulting in Washington, D.C., agreed, saying employers using AI for any employment decision must be aware of AI laws in every state, not just the one where they are located, because they are likely attracting applicants from jurisdictions that have AI laws in place.
“We’ve gotten a taste of these AI laws in New York City, Illinois, and most recently in Colorado,” she said. “Colorado has a comprehensive overview of expectations. It’s a consumer protection law applied to employment settings.”
State AI laws related to employment typically address:
- Notifying applicants and employees of AI use.
- Gathering consent before use.
- Being transparent about the technology and disclosing how it works.
- Taking steps to avoid algorithmic discrimination.
- Completing impact assessments and audits of AI systems and outcomes (see the sketch after this list).
- Implementing AI risk management policies.
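As an illustration of what such an audit might start from, here is a minimal sketch of the EEOC's four-fifths (80%) rule of thumb for adverse impact, applied to hypothetical selection rates from a screening tool. The function name and the numbers are illustrative only; the laws above generally require more than a single ratio check.

```python
def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate -- the EEOC's four-fifths rule of thumb
    for adverse impact."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

# Hypothetical selection rates taken from an AI screening tool's output
rates = {"group_a": 0.40, "group_b": 0.28}
print(four_fifths_check(rates))  # {'group_a': False, 'group_b': True}
```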
Notably, Papinchock said, under the Colorado law that takes effect in 2026, an employer will need to notify a candidate passed over for a job or promotion and explain what information was used, how it was used, and why the individual was not selected.
“That’s new,” she said. “Employers haven’t really had to explain why someone didn’t get a job before.”
Childers said another place to look for what may be coming down the pike is the EU AI Act, which took effect in August and will be applied in phases. The law divides AI into categories of risk, and AI systems considered high-risk—such as those used in biometrics, employment, and management of workers—will have to comply with strict requirements.
“If you are not physically located in the EU, the risk of a regulator showing up at your door is probably low, but if you have operations in the EU, you should be complying with this law,” Childers said. “One of the first obligations for employers is supporting AI literacy in your workforce. That means explaining what the technology is, getting your workforce to understand what is acceptable and not acceptable, and how to stop misuse of it.”
Other EU laws also regulate AI, and AI legislation is under consideration in Brazil, Canada, and China.
Steps for HR
Nicholas Truxal, manager of organization and people analytics at Andersen Corp. in Bayport, Minn., said HR “does not have the luxury to say, ‘We will not engage with AI.’ It’s a powerful tool, and one that needs to be explored. Just remember that the laws have not changed regarding discrimination in hiring and employment.”
He said that safe use cases for AI include translating text, writing initial drafts of job descriptions, and scheduling.
“When you start automating decisions about other people, I get more nervous,” Truxal said. “Generally, look at the AI and try to gauge if it is making choices that you support, or that would be considered discriminatory. Try to understand where the data comes from, and make sure you look at the outputs. Be transparent where appropriate, and try to be explainable.”
Childers said there is currently a “mad rush” concerning AI. “Everyone is trying to get in on AI for the sake of not getting left behind, and that’s not necessarily the best way to approach it,” he said. “AI allows you to process so much more data so quickly that any incidental disparities get over two standard deviations very quickly.”
In disparate impact analysis, a difference in outcomes between groups is typically treated as statistically significant, and therefore potentially discriminatory, if it exceeds two standard deviations from what chance alone would produce.
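A minimal sketch of that two-standard-deviation check, using the standard pooled two-proportion Z-statistic on hypothetical selection counts (the function and the numbers are illustrative, not drawn from the panel):

```python
from math import sqrt

def selection_rate_z(sel_a: int, app_a: int, sel_b: int, app_b: int) -> float:
    """Pooled two-proportion Z-statistic comparing the selection rates
    of group A and group B. |Z| > ~2 flags a statistically significant
    disparity (the 'two standard deviations' threshold)."""
    p_a = sel_a / app_a                      # selection rate, group A
    p_b = sel_b / app_b                      # selection rate, group B
    p = (sel_a + sel_b) / (app_a + app_b)    # pooled selection rate
    se = sqrt(p * (1 - p) * (1 / app_a + 1 / app_b))  # standard error
    return (p_a - p_b) / se

# Hypothetical audit: 50 of 200 group-A applicants selected
# vs. 30 of 200 group-B applicants.
z = selection_rate_z(50, 200, 30, 200)
print(f"Z = {z:.2f}")  # Z = 2.50 -> exceeds two standard deviations
```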
“The legal risk becomes not just what if the tool doesn’t function how it is supposed to. It is also what if it functions exactly like it’s supposed to, but we don’t have a validation report in place when trying to defend disparate impact,” Childers explained.
There is growing recognition that AI tools need serious vetting before being purchased for the workplace, Papinchock said. “It’s good to talk to the marketing folks, but you should also talk to a data scientist,” she said. “There was a time that vendors would not open up the back end for your perusal, but now the ones who are ethical will do that.”
To that end, See recommended asking provocative questions when shopping for AI, such as, “How does it work?” “How do you know it works?” and “How do we explain it if it’s wrong?”