The California Civil Rights Council and the California Privacy Protection Agency have recently adopted regulations that impose requirements on employers who use “automated-decision systems” or “automated decisionmaking technology,” respectively, in employment decisions or certain HR processes. On the legislative side, the California Legislature passed SB 7, which would impose additional obligations on employers who use these technologies; the bill is currently on the Governor’s desk. And the Governor has signed SB 53, which provides certain employee whistleblower rights with respect to AI safety. Below, we discuss some of the key requirements in the new regulations and legislation.

California Civil Rights Council – ADS Regulations

The California Civil Rights Council’s (CCRC) Employment Regulations Regarding Automated-Decision Systems took effect on October 1, 2025. The regulations, aimed at addressing potential discrimination when businesses use artificial intelligence tools to make personnel decisions, apply to employers with five or more employees in California. Notable provisions include:

Covered Technologies. The CCRC regulations define “Automated-Decision Systems” (ADS) as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit.” An ADS may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, or other data processing techniques. An ADS could perform tasks such as:

- Screening resumes for particular terms or patterns;
- Directing job advertisements or other recruiting materials to targeted groups;
- Analyzing facial expressions, word choice, or voice in online interviews;
- Using computer-based assessments or tests (e.g., questions, puzzles, games) to make predictive assessments or measure skill, dexterity, reaction time, or other characteristics of an applicant or employee; or
- Analyzing employee or applicant data acquired from third parties.

Employment Discrimination. The CCRC regulations clarify that it is unlawful under the California Fair Employment and Housing Act (FEHA) for an employer or covered entity to use an ADS that results in discrimination against an applicant or employee based on a protected class under FEHA (i.e., race, national origin, gender, age, religion, disability, etc.). An employer may be liable even if it did not intend to discriminate where the use of the ADS results in a disparate impact on a protected class.

Anti-bias Testing. The regulations provide that anti-bias testing or similar proactive efforts to avoid unlawful discrimination can be used in defense of discrimination claims. Conversely, the absence of anti-bias testing can be used as evidence in a claim against an employer.

Records Retention. An employer or covered entity must preserve ADS-related records, including selection criteria, relevant outputs, and audit findings, for four years.

California Privacy Protection Agency – ADMT Regulations

The California Privacy Protection Agency (CPPA) recently approved, and the Office of Administrative Law finalized, regulations on the use of automated decisionmaking technologies (ADMT) in “significant decisions” about consumers, including employees. Businesses must be in compliance with the regulations by January 1, 2027, including with respect to any ADMT already in use prior to that date. Notable provisions relating to employees, job applicants, and independent contractors include:

Covered Technologies. The regulations define “Automated decisionmaking technology” or “ADMT” as “any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.” For purposes of this definition, to “substantially replace human decisionmaking” means “a business uses the technology’s output to make a decision without human involvement.”

Significant Decision. A “significant decision” includes employment or independent contracting opportunities, defined as “hiring, allocation of work, compensation, promotion, demotion, suspension, or termination.”

Pre-use Notice. A business that uses ADMT to make a significant decision must provide consumers with a Pre-use Notice describing the use of ADMT and the consumer’s rights to opt out of and to access ADMT.

Opt-out. A business must provide consumers with the ability to opt out of the use of ADMT to make a significant decision concerning the consumer, subject to exceptions, including where the business offers the consumer a method to appeal the decision to a human reviewer.

Access. A business that uses ADMT to make a significant decision must provide consumers with information about its use, such as the purpose of the ADMT and the logic involved, when responding to a consumer’s request to access ADMT, subject to exceptions.

Risk Assessments. Businesses must conduct a risk assessment before using ADMT to make a significant decision concerning employees and applicants, or where the use otherwise presents a significant risk to employees (such as using automated processing to infer or extrapolate a consumer’s intelligence, ability, aptitude, performance at work, reliability, location, or movements, based on systematic observation of the employee or applicant). The regulations prescribe in detail the content the risk assessment must contain. For any processing activity that the business initiated before the effective date of the regulations (January 1, 2026) and conducted after the effective date, the business must conduct a risk assessment by December 31, 2027, and submit certain required information about such risk assessments (but not the underlying risk assessments) to the CPPA by April 1, 2028.

Legislation

The California Legislature recently passed SB 7, the “No Robo Bosses Act,” which would require notice, human oversight, and corroboration when employers use automated decision systems for decisions that impact workers’ livelihoods. Governor Newsom has until October 12 to sign or veto the bill.

In the meantime, the Governor has signed SB 53, the Transparency in Frontier Artificial Intelligence Act, which takes effect on January 1, 2026. Among other things, the new law amends the California Labor Code to add whistleblower protections for employees of large AI companies who raise concerns that the company’s activities pose a specific and substantial danger to public health or safety. We’ll provide more information on this law in an upcoming blog post.

Next Steps

Employers should prepare for the relevant regulations and new laws by identifying the technologies their HR and other departments are using, or plan to use, that could trigger these legal requirements. Employers should also develop policies and processes to ensure that they satisfy the relevant requirements when using AI in the employment context by, for example, providing relevant and timely notices to applicants and/or employees, conducting risk assessments and bias audits, and complying with recordkeeping requirements. For more best practices on using AI in the workplace, see our prior blog post.