February 13, 2025

The New Jersey Attorney General’s Office (NJAG) recently issued guidance on the New Jersey Law Against Discrimination (LAD), specifically addressing how the law applies to artificial intelligence (AI) technologies.

The guidance makes clear that AI technologies cannot be used as a loophole to discriminate. The NJAG stressed that discrimination carried out through AI is still discrimination and, therefore, illegal. The LAD regulates housing providers, places of public accommodation, and employers, among others, and bans those covered by the law from discriminating against people based on the following:

  • “Race or color;
  • Religion or creed;
  • National origin, nationality, or ancestry;
  • Sex, pregnancy, or breastfeeding;
  • Sexual orientation;
  • Gender identity or expression;
  • Disability;
  • Marital status or domestic partnership/civil union status;
  • Liability for military service;
  • In housing: Familial status and source of lawful income used for rental or mortgage payments;
  • In employment: Age, atypical hereditary cellular or blood trait, genetic information, the refusal to submit to a genetic test or make available to an employer the results of a genetic test.”

This protection applies even when companies use automated decision-making tools. The NJAG emphasized the importance of this protection because New Jersey employers regularly use AI technology to recruit job applicants and make hiring decisions. According to the guidance, AI tools include “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.”

The guidance discusses several stages at which automated decision-making tools may cause discrimination: the development, training, and deployment of AI technology. During development, a developer may, regardless of intent, design a tool in a way that produces discriminatory outcomes. During training, the datasets used to train an AI tool may introduce bias that leads to discriminatory results once the tool is in use.

The guidance also notes that discrimination can arise during deployment. A user may deploy a tool to discriminate intentionally, or may use the tool for purposes other than those for which it was intended, producing discriminatory outcomes.

The guidance does not impose additional requirements, obligations, or rights on those covered by the LAD. It does, however, explain that covered entities can violate the LAD unintentionally, including when they use AI tools developed by a third party.

Disclaimer:
Information provided here is for educational and informational purposes only and does not constitute legal advice. We recommend you contact your own legal counsel for any questions regarding your specific practices and compliance with applicable laws.
