California Assemblymember Rebecca Bauer-Kahan, chair of the Assembly Privacy and Consumer Protection Committee, plans in 2025 to reintroduce a bill to prevent AI algorithmic discrimination across all business sectors, according to a Dec. 5 interview with Bloomberg. This will be the third version of the bill, which she previously introduced as California Assembly Bill 2930 in February 2024 and as AB 331 in January 2023.
The AI anti-bias bill targets regulation of automated decision tools (ADTs). Such bills are designed to address rising transparency and fairness concerns around the use of artificial intelligence technology to make consequential decisions that can have a legal, material, or similarly significant effect on a person’s life. Similar measures were enacted in Colorado in 2024 (SB 24-205) and in New York City in 2023 (Local Law 144).
Bauer-Kahan told Bloomberg that, like California’s AB 331, which pertained to multiple business sectors, including housing, employment, education, health care, financial services, and criminal justice, the new bill would apply to all areas of potential discrimination, even in the face of pushback. “Developers best understand the technical capabilities of their tools. And those creating AI tools need to bear some responsibility for their effects,” she said.
Bauer-Kahan, a Democrat, also intends to address a lack of clarity regarding key definitions in AB 2930, which Bradford Newman, chair of the North American Trade Secrets Practice and leader of the AI & Blockchain Practice at Baker McKenzie in Palo Alto, Calif., said were vague in previous bills. “If enacted, AB 2930 would definitely have been challenged in the courts,” he said.
Additionally, Bauer-Kahan plans to align her bill with a federal bill, S. 5152 by Sen. Ed Markey, D-Mass., and said it will mainly rely on government enforcement, another change from AB 2930. The new bill may also address cost and enforcement concerns related to existing federal and state nondiscrimination laws.
Potential HR Implications of Previous California Bills
“Had AB 2930 been enacted, it would have imposed substantial requirements on employers using ADTs to make consequential employment decisions related to pay or promotion, hiring, or termination,” said attorney Danielle Ochs, San Francisco shareholder and co-chair of the Technology Practice Group at Ogletree Deakins.
Bauer-Kahan’s previous bills required developers and deployers of AI tools to:
- Put in place extensive risk mitigation measures focused on auditing requirements, also known as impact assessments, before the tools could be used.
- Establish the right of individuals to know in advance when an ADT is being used, with the option to opt out.
- Provide a pathway for individuals to take legal action if they believed they were discriminated against by an ADT.
“The primary concern with AB 2930 was that it would have required impact assessments be performed for any ADT the deployer used, regardless of whether the use of the tool posed any significant risk,” said attorney Chris Micheli of Snodgrass & Micheli in Sacramento, Calif. “The fundamental concern for the HR professional is how to comply with such a bill.” There was also the question of potential liability for individuals working in HR and using an ADT.
“AB 2930 would have imposed several new and onerous obligations on HR professionals who work for ADT deployers or developers,” Newman said.
SHRM and the California State Council of SHRM (CalSHRM) expressed opposition to AB 2930 in an April 2024 letter to Bauer-Kahan. The letter detailed how the legislation would chill the deployment and development of AI tools in California, potentially stifling innovation in HR, and lacked effective small business protections. SHRM research shows that 64% of employers are currently using AI in HR-related activities to support their recruiting, interviewing, and hiring processes, which saves them time, increases their efficiency, reduces their cost, and improves their ability to reduce bias in hiring.
“The key to policies that support workplace and workforce innovation is to implement a balanced approach that safeguards job candidates and employees’ rights while enabling businesses to use tools that will lead to better workforce decisions,” said Emily M. Dickens, J.D., chief of staff, head of government affairs, and corporate secretary for SHRM, and Michael S. Kalt, J.D., former Government Affairs Director for CalSHRM, in the letter.
“Supporters of AB 2930 argued that it would enact commonsense guardrails to help ensure that developers and deployers of these tools are obligated to test and mitigate for discriminatory outcomes prior to the sale or use of these tools,” said Ochs. Opponents argued that a risk-based approach (used in the Colorado law) is necessary to avoid overbroad regulations and costly new mandates.
Twelve organizations supported AB 2930, including the California Employment Lawyers Association, the Center for Democracy and Technology, Legal Aid at Work, and TechEquity Collaborative. Thirty industry associations, including the American Council of Life Insurers, the California Medical Association, Google, and Verizon Communications, joined SHRM in opposing the bill.
Outlook for Passage of Bills in 2025
Bauer-Kahan pulled AB 2930 before it could reach Gov. Gavin Newsom’s desk. The bill, estimated to cost billions of dollars and facing significant opposition from the tech industry, had been narrowed during debate on the California Senate floor to apply only to employment use cases as a cost-cutting measure. Micheli expects that a similar bill in 2025 would make it at least that far again and perhaps reach the governor’s desk.
Newman agreed that a similar bill is probably more likely to pass this time around, though Ochs said it is not yet clear exactly how the bill will differ and therefore passage remains uncertain.
Newman expects to see a proliferation of state and local regulations being enacted unless the federal government introduces pre-emptive AI policies. Although federal agencies such as the Equal Employment Opportunity Commission (EEOC) have been very busy in this space, “it remains to be seen whether, under the new administration, these agencies will continue to actively pursue regulation and/or guidance of automated employment decision tools,” Ochs noted.
Strategies to Counter Algorithmic Bias
Regardless of any legislation, it’s important to “always have humans overseeing decisions,” said Kelly Dobbs Bunting, an attorney with Greenberg Traurig in Philadelphia, at the SHRM Annual Conference & Expo 2024.
Causes of bias in algorithms can be traced to a few major underlying factors:
- Algorithms rely on their training data, and biases present in that data will be encoded into machine learning.
- The teams that construct algorithms can be homogeneous, leading to blind spots in recognizing biases.
- Coding errors can inadvertently introduce bias, underscoring the importance of thorough testing.
- Even when a given algorithm is free of bias at the outset, it can become biased over time through feedback loops that amplify small disparities and distort decision-making toward discrimination.
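The feedback-loop effect described above can be illustrated with a toy simulation. The sketch below is entirely hypothetical, not drawn from any bill or vendor tool: two equally qualified candidate pools are scored by a tool whose per-group scoring bonus drifts toward each group's past selection rate, so a small initial skew toward one group compounds over successive hiring rounds.

```python
def simulate_feedback(rounds=5, n=500, initial_skew=0.05):
    """Toy model of an amplifying feedback loop: the scoring bonus for
    each group drifts toward that group's past selection rate, so a
    small initial skew toward group A compounds over hiring rounds.
    All numbers here are illustrative, not from any real system."""
    bonus = {"A": initial_skew, "B": 0.0}
    gaps = []
    for _ in range(rounds):
        # equally qualified pools: qualifications 0/n .. (n-1)/n per group
        pool = [(g, i / n + bonus[g]) for g in ("A", "B") for i in range(n)]
        # the tool selects the top 20% by (skewed) score
        top = sorted(pool, key=lambda c: c[1], reverse=True)[: (2 * n) // 5]
        rate = {g: sum(1 for s in top if s[0] == g) / n for g in ("A", "B")}
        # feedback: past selections nudge future scoring
        for g in ("A", "B"):
            bonus[g] += 0.2 * rate[g]
        gaps.append(round(rate["A"] - rate["B"], 3))
    return gaps

print(simulate_feedback())  # the selection-rate gap widens round over round
```

Even though both pools are identically qualified, the gap in selection rates grows each round; this is the dynamic that continuous monitoring (discussed below) is meant to catch.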
Strategies for addressing bias in HR tech include:
- Having transparent and explainable algorithms through technical documentation, visualization dashboards, and other tools such as multidimensional review boards for candidates who have entered the hiring funnel.
- Ensuring diverse and representative datasets by purging sets of information that allow discrimination, such as names and demographics.
- Conducting continuous monitoring and auditing of algorithms through technical metrics, ethical oversight teams, and regular impact assessments.
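One widely used technical metric for such audits is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate is less than 80% of the highest group’s rate, the selection procedure may have adverse impact. A minimal sketch of that check, with hypothetical group names and counts, might look like this:

```python
def adverse_impact_ratios(selected, applicants):
    """Compute each group's selection rate and its ratio to the
    highest-rate group (the 'impact ratio'). Ratios below 0.8 flag
    potential adverse impact under the EEOC four-fifths guideline.
    Group names and counts here are illustrative only."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

# hypothetical audit data: applicants and hires per demographic group
applicants = {"group_1": 400, "group_2": 300}
selected = {"group_1": 120, "group_2": 60}

ratios = adverse_impact_ratios(selected, applicants)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # {'group_1': 1.0, 'group_2': 0.667}
print(flagged)  # ['group_2']
```

A ratio check like this is only a starting point; a flagged result typically calls for statistical significance testing and a review of the underlying selection criteria rather than an automatic conclusion of bias.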
Before introducing algorithmic solutions to job candidates, HR teams should be prepared to ask tough questions and conduct audits. They can also create specific guidelines to ensure a fairness-first approach to HR technologies.