ADMT (Art. 11, CCPA regulations)

Businesses that use or rely on ADMT to make “significant decisions” about a consumer (e.g., decisions affecting access to employment, housing, credit, health care, education, insurance, or essential goods) must provide detailed pre-use notice of the ADMT, offer an opt-out mechanism, and furnish additional individualized information about their ADMT use on request.

“ADMT” means technology that processes personal information and replaces or substantially replaces human decision-making. Although the definition omits references to “artificial intelligence” and “behavioral advertising,” it remains broad enough to capture machine learning models, rule-based scoring systems, and facial recognition.
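
As a concrete illustration of how broad that definition is, the short Python sketch below shows a hypothetical rule-based screening score; every name and threshold here is invented, but a system like this that auto-approves or auto-rejects applicants with no meaningful human review would plausibly fall within ADMT.

    # Hypothetical rule-based tenant-screening score. Field names and
    # thresholds are invented for illustration only.
    def screening_score(income: float, rent: float, prior_evictions: int) -> bool:
        """Return True to auto-approve an applicant, False to auto-reject."""
        score = 0
        if income >= 3 * rent:       # income-to-rent rule
            score += 2
        if prior_evictions == 0:     # rental-history rule
            score += 1
        # If this result drives the housing decision without meaningful
        # human review, the system "substantially replaces human
        # decision-making" and the notice and opt-out duties above would
        # likely attach.
        return score >= 3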

Effective and compliance dates: January 1, 2027

Risk assessments (Art. 10, CCPA regulations)

Businesses whose data processing could pose a “significant risk to consumers’ privacy” must conduct written risk assessments before undertaking certain high-risk processing activities (e.g., selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, or training ADMT for identification, trait inference, or emotion or facial analysis). The assessment must identify the purposes, benefits, reasonably foreseeable risks, and proposed safeguards of the processing, as well as operational elements such as the collection process, retention periods, the number of consumers affected, and the disclosures made to consumers.

Businesses must submit all risk assessments to the CPPA by April 1, 2028 (for assessments conducted in 2026 and 2027) or by April 1 of the following year (for assessments conducted from 2028 onward).
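
As a rough illustration of the filing schedule just described, here is a small Python sketch (the function name is ours) that maps the year an assessment is conducted to its submission deadline:

    from datetime import date

    def risk_assessment_due(year_conducted: int) -> date:
        """Deadline for submitting a risk assessment conducted in a given year.

        Assessments from 2026 and 2027 are due together by April 1, 2028;
        later assessments are due April 1 of the following year.
        """
        if year_conducted < 2026:
            raise ValueError("requirement takes effect January 1, 2026")
        if year_conducted in (2026, 2027):
            return date(2028, 4, 1)
        return date(year_conducted + 1, 4, 1)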

Effective and compliance dates: January 1, 2026; first filing due by April 1, 2028

Cyber audits (Art. 9, CCPA regulations)

Businesses whose data processing could pose a “significant risk to consumers’ security” must complete an independent cybersecurity audit annually. Audits must be based on evidence, not attestations, and be conducted by a qualified, objective, and independent professional, who may be external or internal; an internal auditor cannot be responsible for the cybersecurity program. The audit should test controls across areas such as multifactor authentication (MFA), encryption of personal information, retention and disposal of personal information, access management, vulnerability testing, incident response, and vendor oversight.

Businesses can leverage audits prepared for another purpose under existing frameworks (e.g., NIST CSF 2.0, SOC 2 Type II, ISO 27001) if the scope and independence requirements of the final rule are met.

A senior executive must certify the audit’s completion, and the certification must be filed with the CPPA by staggered deadlines based on the company’s annual revenue (sketched in code after the schedule below).

Effective and compliance dates: certifications due to the CPPA by:

- April 1, 2028, if the business’s annual revenue exceeds $100M;
- April 1, 2029, if its annual revenue is between $50M and $100M; or
- April 1, 2030, if its annual revenue is below $50M.
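
Here is a minimal Python sketch of that staggered schedule; the treatment of revenue exactly at the $50M and $100M boundaries is not spelled out above, so the comparisons are assumptions:

    def certification_due(annual_revenue: float) -> str:
        """First certification deadline, keyed to annual revenue in dollars."""
        if annual_revenue > 100_000_000:
            return "April 1, 2028"
        if annual_revenue > 50_000_000:
            return "April 1, 2029"
        return "April 1, 2030"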

Existing CCPA regulations (Art. 1-8, CCPA regulations)

Updates to existing CCPA regulations include requirements that:

- Opting out of data sale or sharing must take the same number of steps as, or fewer steps than, opting in
- Links to the company’s privacy policy must appear on every webpage that collects personal information, not just the home page
- When a consumer using an opt-out preference signal visits the website, the business must clearly display whether it has honored the consumer’s request to opt out of sale or sharing (see the sketch after this list)
- Consumers must receive notice of their opt-out rights before or at the time data collection begins on connected devices
- User interfaces must give “yes” and “no” choices equal visual prominence when asking for consent
- Consumers may request personal information that a business collected more than 12 months earlier, if it still exists
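
On the opt-out preference signal point, the sketch below shows one way a backend might detect the Global Privacy Control signal, which participating browsers send as the Sec-GPC: 1 request header, and pick a status message to display; the function and the message text are illustrative, not prescribed by the rule.

    def opt_out_status_banner(headers: dict[str, str], opted_out: bool) -> str | None:
        """Return a status message to show a visitor whose browser sends GPC.

        Assumes the opt-out preference signal arrives as the Sec-GPC request
        header defined by the Global Privacy Control specification.
        """
        if headers.get("Sec-GPC") == "1":
            if opted_out:
                return "Your request to opt out of sale/sharing has been honored."
            return "We are processing your opt-out preference signal."
        return None  # no signal detected; no status display required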

Effective and compliance dates: January 1, 2027

Employment decisions (Art. 1-10, CCRC regulations)

Employers are prohibited from using automated-decision systems (ADS) that discriminate against applicants or employees based on protected categories defined under California’s Fair Employment and Housing Act (FEHA). Employers may also have to provide reasonable accommodations consistent with FEHA’s religious and disability protections. 

“ADS” means a computational process that makes a decision or facilitates human decision-making regarding an employment benefit. It may include AI, machine learning, algorithms, statistics, or other data processing techniques.

Employers must preserve ADS-related records for four years after creating the record or making the personnel decision at issue, whichever is later. 
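
A minimal Python sketch of the four-year retention rule (the function name is ours, and leap-day edge cases are glossed over):

    from datetime import date

    def ads_retention_expires(record_created: date, decision_made: date) -> date:
        """Earliest date ADS-related records may be discarded: four years
        after the record's creation or the personnel decision at issue,
        whichever is later."""
        latest = max(record_created, decision_made)
        return latest.replace(year=latest.year + 4)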

Evidence of anti-bias testing, or the lack of it, is relevant both to a claim of employment discrimination and to any available defense.

Effective and compliance dates: October 1, 2025

Frontier AI models (Sec. 2-4, Senate Bill 53)

Large frontier model developers must publish, and keep current, a framework on their websites describing the company’s process for assessing whether a model could pose catastrophic risk and how it will identify and respond to “critical safety incidents,” among other things. They must also publish a transparency report summarizing their catastrophic-risk assessments whenever they release a new or substantially modified frontier model. In addition, they must notify the government of any critical safety incident within 15 days of discovering it, or within 24 hours if the incident poses an imminent risk of death or serious injury.

“Frontier model” means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. “Large frontier developer” means a frontier developer that had annual gross revenues exceeding $500M in the preceding calendar year.
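
To give the 10^26 figure some scale, here is a back-of-the-envelope Python check using the widely cited approximation that transformer training takes roughly 6 floating-point operations per parameter per training token; the statute counts raw operations and does not prescribe this estimation method.

    FRONTIER_THRESHOLD = 1e26  # operations, per SB 53

    def is_frontier_model(parameters: float, training_tokens: float) -> bool:
        """Rough threshold check using the ~6 * N * D training-FLOPs rule of thumb."""
        return 6 * parameters * training_tokens > FRONTIER_THRESHOLD

    # Example: a 1-trillion-parameter model trained on 20 trillion tokens
    # lands around 1.2e26 operations, just over the threshold.
    print(is_frontier_model(1e12, 2e13))  # True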

Employees of frontier developers who report significant health and safety risks posed by frontier models are protected from retaliation. Developers are required to provide whistleblowers with anonymous reporting channels.

A newly formed consortium is charged with designing a public computing cluster, “CalCompute,” to support safe, ethical, equitable, and sustainable AI innovation.

Effective and compliance dates: January 1, 2026