October 1, 2024

The California state legislature recently passed the AI Safety Act, which would significantly regulate developers of artificial intelligence (AI) systems. The Act focuses on AI developers and the potential harm large-scale AI systems could cause, and it establishes requirements intended to prevent critical harm. Due to opposition, it remains uncertain whether the governor will sign it.

Implementing a Full Shutdown

The AI Safety Act would require developers of covered AI systems to ensure they can shut down the system completely. The Act also recognizes that such a shutdown could cause significant disruptions to critical infrastructure. As such, developers must implement a separate, written safety and security protocol. The protocol would cover when a shutdown would occur, along with other important concerns.

Safety Expectations With AI

Another requirement concerns the release of the covered AI system. According to the AI Safety Act, developers must perform safety testing to “assess whether the covered model is reasonably capable of causing or materially enabling a critical harm.” Developers must also retain records of this testing for a required period and implement reasonable safeguards to keep covered AI systems from “causing or materially enabling a critical harm.”

The AI Safety Act requires developers to report any safety incidents affecting a covered model to the state’s Attorney General. They must do so within 72 hours of gaining sufficient knowledge to believe a safety incident has occurred.

Have an Annual Third-Party Audit Performed

Developers must also have a third party conduct an annual independent audit on any covered AI system. According to the AI Safety Act, the auditors must review whether the system complies with the Act. Furthermore, the Act would hold the developers accountable for any harm or inconsistencies discovered. As such, the California Attorney General would have the power to bring a civil action against a developer for the damage their covered AI system caused or request injunctive relief to prevent possible harm.

Protect Whistleblowers

The AI Safety Act also includes protection for whistleblowers. As such, developers cannot prevent employees from disclosing information about a covered model to the Attorney General or Labor Commissioner. The Act also bans developers from retaliating against employees for disclosing this information.

With government pressure heating up, the importance of compliance is at an all-time high. This makes it all the more important to comply with employment regulations, including those covering background checks. The best way to remain compliant when conducting employment screenings is to partner with a trusted background check company.

Disclaimer:
Information provided here is for educational and informational purposes only and does not constitute legal advice. We recommend you contact your own legal counsel for any questions regarding your specific practices and compliance with applicable laws.
