On Nov. 8, 2024, the California Privacy Protection Agency (CPPA) voted to advance proposed regulations concerning automated decision-making technology (ADMT). While the comment period is ongoing and the rules are not yet final, some key provisions can help businesses begin to assess the potential effects of these rules if finalized in their current form. This article looks at how the proposed rules define ADMT.
What Is ADMT?
According to the proposed regulation, ADMT would mean: “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.”
The first thing to note is that, for purposes of these proposed regulations, an ADMT under the California Consumer Privacy Act of 2018 (CCPA) must involve the processing of personal information. Under the CCPA, however, while personal information is defined broadly, there are several exceptions. One is that neither de-identified nor aggregate consumer information constitutes personal information. Another is that protected health information covered by the Health Insurance Portability and Accountability Act (HIPAA) is not considered personal information. And there are other exceptions to consider.
Understanding these exceptions may help businesses narrow the impact of these regulations on their organizations. For example, technology facilitating human decision-making to process claims under a HIPAA-covered group health plan might fall outside of these regulations.
The proposed regulations would also define what it means to “substantially facilitate human decisionmaking.” A similar concept appears in other AI regulations, such as Local Law 144 in New York City and the Colorado Artificial Intelligence Act (CAIA).
Under California’s proposed regulations, if the technology’s output is a key factor in a human’s decision-making, the technology would be considered to substantially facilitate human decision-making. The proposed regulations provide the following example: “using automated decisionmaking technology to generate a score about a consumer that the human reviewer uses as a primary factor to make a significant decision about them.”
Note that the score need not be “the” primary factor, only “a” primary factor. Perhaps this will be clarified in the final rule, but one can read this language as similar to the “substantial factor” standard used to assess “high-risk artificial intelligence systems” under the CAIA. Under the New York City law, by contrast, substantially assisting or replacing discretionary decision-making requires relying solely on the output, weighting the output more than any other factor, or using the output to overrule conclusions derived from other factors (including human decision-making). This is a small but potentially significant distinction in how AI regulation applies across jurisdictions, and one that organizations will have to track.
ADMTs Include Profiling
The proposed regulations would make clear that ADMTs include profiling, defined as: “any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s intelligence, ability, aptitude, performance at work, economic situation; health, including mental health; personal preferences, interests, reliability, predispositions, behavior, location, or movements.”
Over the last few years, many employers have deployed a range of devices and applications that may include “technologies” (defined in the proposed regulations as “software or programs, including those derived from machine learning, statistics, other data-processing techniques, or artificial intelligence”) used in ways that may constitute “profiling.”
These devices and applications help support employers’ efforts to source, recruit, monitor, track, and assess the performance of employees, applicants, and others. Examples include: 1) dashcams deployed throughout company fleets to promote safety, improve performance, and reduce costs, and 2) performance management platforms that, among other things, are used to evaluate employee productivity.
Technologies That Are Not ADMTs
Technologies that do not execute a decision, replace human decision-making, or substantially facilitate human decision-making would not be ADMTs, according to the proposed regulations. Such technologies would include: web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, antivirus, anti-malware, spam- and robocall-filtering, spellchecking, calculators, databases, spreadsheets, and similar technologies.
Businesses would need to be careful in applying these exceptions. For example, using a spreadsheet to run regression analyses on top-performing managers to identify their common characteristics, and then using those characteristics to make promotion decisions about more junior employees, would be a use of an ADMT. That would not be the case if the spreadsheet were merely used to tabulate final scores on performance evaluations.
There will certainly be more to come concerning the regulation of AI, including under the CCPA. Organizations using these technologies will need to monitor these developments.
Joseph J. Lazzarotti is an attorney with Jackson Lewis in Tampa, Fla. © 2025 Jackson Lewis. All rights reserved. Reposted with permission.