July 25, 2024
California recently moved to regulate employers’ use of automated decision tools, or Artificial Intelligence (AI), in two ways. In both cases, the goal is to prevent algorithmic discrimination.
The first is AB 2930, a bill regulating how several sectors could use AI. The second is the California Civil Rights Council’s proposed amendments to the Fair Employment and Housing Act (FEHA), which target employment discrimination.
AB 2930
AB 2930 would regulate AI in various industries to help prevent “algorithmic discrimination.” According to the bill, such discrimination is “the condition in which an automated decision tool contributes to unlawful discrimination, including differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state or federal law.” Furthermore, AB 2930 requires developers and employers to conduct yearly impact assessments that analyze possible negative impacts of their AI tools.
After review, employers must create safeguards to address any discovered risks of “algorithmic discrimination.” AB 2930 also requires employers to notify individuals, at or before the time a consequential decision is made, that an AI tool will be used. This notice should describe the tool in plain language, provide a statement of purpose, and provide contact information.
The bill also requires employers to disclose whether they will base their decision solely on the automated decision tool. In such cases, they must accommodate applicants’ requests for a feasible alternative to AI. AB 2930 would also require employers to establish governance programs that address any risks of algorithmic discrimination. However, the bill exempts employers with fewer than 25 employees and systems impacting fewer than 999 people annually.
Employers who violate AB 2930 could face civil actions brought by individuals or public attorneys, with potential remedies including declaratory relief, compensatory damages, and attorney’s fees. Legislators have until the end of August to pass the bill.
If AB 2930 passes, developers and employers must disclose their policies to the public, including a summary of the automated decision tools they use and how they handle the risks of algorithmic discrimination.
Amendments to the FEHA
The California Civil Rights Council proposed several amendments to the California Fair Employment and Housing Act (FEHA). These amendments would address rising concerns about algorithmic bias during the hiring process.
The new regulations would apply to organizations with at least five employees and to those who provide services on an employer’s behalf, including employment agencies and employers’ agents. Employers who use AI systems must ensure the system does not discriminate against applicants. For example, they cannot treat applicants differently based on characteristics protected under FEHA.
These amendments could hold an employer liable if its system discriminates against applicants. They also allow employers to defend their system’s criteria, for example, by proving the criteria are job-related and consistent with business necessity, and by showing that no less discriminatory alternative would serve the same purpose.
Employers who use automated decision tools must also conduct anti-bias testing and keep the records for at least four years. An AI system that considers criminal history must follow the same regulations as a manual review. The amendments would also require employers to retain relevant employment records.