The New Jersey Attorney General’s office (NJAG) has joined nationwide efforts to regulate, or at least clarify, how existing law applies to artificial intelligence. Its recent guidance explains how the New Jersey Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD), applies to AI technologies. In short, the guidance states: “[T]he LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.”
In case you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination based on actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD’s protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making technology, or ADMT) in employment, housing, places of public accommodation, credit, and contracting.
Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by New Jersey employers. According to the survey, 63% of New Jersey employers use one or more tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include: “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process… such as generative AI, machine-learning models, traditional statistical tools, and decision trees.”
The NJAG guidance examined some ways that AI tools may contribute to discriminatory outcomes.
- Design. The choices a developer makes in designing an AI tool can, purposefully or inadvertently, result in unlawful discrimination. Choices about the output the tool provides, the model or algorithms it uses, and the inputs it assesses can each introduce bias into the ADMT.
- Training. AI tools must be trained to learn the correlations or rules relevant to their objectives, and the datasets used for that training may reflect biases or institutional and systemic inequities. Those biases can carry through to the tool’s outcomes, so the datasets used in training can drive unlawful discrimination.
- Deployment. The NJAG also observed that AI tools could be deployed to purposely discriminate or to make decisions the tool was not designed to make. These and other deployment issues can lead to bias and unlawful discrimination.
The NJAG noted that its guidance does not impose any new or additional requirements beyond the LAD, nor does it establish any rights or obligations for any person beyond those that already exist under the LAD. However, the guidance clarified that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, as the U.S. Equal Employment Opportunity Commission noted in guidance issued under Title VII of the Civil Rights Act of 1964, even if a third party developed the AI tool. Importantly, under New Jersey law, this includes disparate treatment or disparate impact that may result from the design or use of AI tools.
For many reasons, including avoiding unlawful discrimination, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy. These measures should include working closely with developers to vet the design and testing of ADMTs before they are deployed. In fact, the NJAG specifically cited many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.
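To make the idea of testing concrete, below is a minimal sketch of one common screening heuristic, the “four-fifths rule,” which flags groups whose selection rate falls well below the highest group’s rate. The NJAG guidance does not prescribe this or any particular test; the function names, data format, and 0.8 threshold are assumptions for illustration, and a flag is only a signal to investigate further, not a legal conclusion.

```python
# Illustrative sketch only: a simple adverse-impact screen based on the
# four-fifths rule. Inputs, names, and the threshold are hypothetical.
from collections import defaultdict


def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g. ("A", True)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items() if total}


def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate. A flag means "investigate," nothing more."""
    if not rates:
        return {}
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}


if __name__ == "__main__":
    sample = ([("A", True)] * 60 + [("A", False)] * 40
              + [("B", True)] * 35 + [("B", False)] * 65)
    rates = selection_rates(sample)
    print(rates)                     # {'A': 0.6, 'B': 0.35}
    print(four_fifths_flags(rates))  # {'A': False, 'B': True}
```

In practice, this kind of check would be one small piece of a broader governance program that also covers documentation, vendor vetting, and periodic revalidation of deployed tools.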
Joseph J. Lazzarotti is an attorney with Jackson Lewis in Tampa, Fla. Jason C. Gavejian is an attorney with Jackson Lewis in Berkeley Heights, N.J. © 2025 Jackson Lewis. All rights reserved. Reposted with permission.