Artificial Intelligence Compliance Updates for November: California & Beyond

Here are five artificial intelligence compliance updates for November.

California Leads Regulatory Frontier with New Privacy and Artificial Intelligence Laws for 2026

California has enacted a broad package of new privacy and artificial intelligence laws that strengthen data protections and position the state as a national leader in tech regulation. The legislation includes shorter data-breach notification timelines, expanded transparency obligations for data brokers, and new requirements governing automated decision-making and AI development, with most provisions taking effect January 1, 2026.

California AI Employment Regulations Take Effect

California has passed a sweeping package of new privacy and artificial intelligence laws that dramatically expand regulatory requirements for businesses beginning in 2026 and 2027.

The new laws tighten data-breach timelines, increase data-broker transparency, mandate social-media “delete” options and universal opt-out signals, and impose major obligations on AI developers—including limits on algorithmic pricing, liability protections, chatbot disclosures, frontier-model safety reporting, and age-assurance requirements. 

Together, these measures significantly raise compliance expectations for any company processing consumer data or deploying AI tools in California and are expected to influence similar legislation nationwide.

A Guide to Algorithmic (AI) Bias in Hiring: Managing Legal & Reputational Risks 

As artificial intelligence increasingly automates hiring—from resume screening to candidate ranking—it can inadvertently perpetuate historical bias, creating both legal and reputational risks for employers. The article outlines how algorithms trained on legacy data may reflect gender, race, university or ZIP-code bias, exposing firms to claims of disparate impact, deceptive practices and regulatory scrutiny (e.g., from the Equal Employment Opportunity Commission). To mitigate these risks, employers are advised to perform pre-deployment bias testing, maintain human oversight of algorithmic decisions, conduct annual audits, and hold vendors accountable under transparent governance frameworks. 

The Proliferation of State Laws Regulating AI Use in Employment Decisions 

As states increasingly regulate algorithmic decision-making and hiring tools, employers face heightened legal and reputational risks, with growing scrutiny of bias, transparency, and fairness in automated systems. The article recommends that organizations implement bias testing, human oversight of AI systems, vendor accountability, and clear documentation of algorithmic design and deployment to prepare for enforcement in this area.

Trump Calls for Federal Standard to Block State AI Laws

To learn more about AI Compliance, visit JDP.com today.
