{"id":190422,"date":"2025-09-29T18:04:09","date_gmt":"2025-09-29T18:04:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/190422\/"},"modified":"2025-09-29T18:04:09","modified_gmt":"2025-09-29T18:04:09","slug":"californias-new-ai-rules-under-feha-take-effect-october-1-2025-mcdermott-will-schulte","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/190422\/","title":{"rendered":"California\u2019s new AI rules under FEHA take effect October 1, 2025 | McDermott Will &#038; Schulte"},"content":{"rendered":"<p>Key takeaways<\/p>\n<p>\tBeginning October 1, 2025, California employers must comply with new Fair Employment and Housing Act (FEHA) regulations on the use of artificial intelligence (AI) and automated decision systems (ADS) in hiring and employment.<br \/>\n\tThe rules primarily pertain to three compliance areas: bias testing, recordkeeping, and vendor liability.<br \/>\n\tEmployers should start preparing now to avoid exposure once the rules take effect.<\/p>\n<p>In Depth<\/p>\n<p>New regulations become effective October 1, 2025<\/p>\n<p>As a follow-up to our May 8, 2025, <a href=\"https:\/\/www.mwe.com\/insights\/risk-management-in-the-modern-era-of-workplace-generative-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">alert<\/a> regarding the current legal landscape of AI in the workplace, California\u2019s Civil Rights Council has approved and finalized new rules focused on the use of AI tools in the workplace after a notice and public comment period. 
They update existing antidiscrimination laws in California\u2019s FEHA to address the use of technology in employment decisions, including by adding:<\/p>\n<p>\tThe definition of \u201cagent\u201d of an employer (e.g., staffing agencies, third-party vendors),<br \/>\n\tThe definition of \u201cproxy\u201d as a characteristic or category closely correlated with protected categories under the FEHA,<br \/>\n\tVarious examples of ADS programs (computer-based assessments or tests, targeting job ads to specific groups, screening resumes for particular terms or patterns, and analyzing facial expressions), and<br \/>\n\tClarification that antibias testing (including the \u201cquality, efficacy, recency, and scope of such effort\u201d) is relevant evidence in support of defenses to discrimination claims.<\/p>\n<p>The regulations specifically apply to the use of \u201cautomated decision systems\u201d (ADS), meaning any computational process \u2013 including AI, machine learning, or other algorithms \u2013 that makes or helps make decisions regarding employees or job applicants. Examples include tools used for resume screening, interview scoring, skill or trait assessments, and promotion recommendations. Basic IT tools such as email, firewalls, word processing software, map navigation software, or spreadsheets are not included.<\/p>\n<p>Bias testing<\/p>\n<p>The new regulations provide that bias audits and similar proactive measures can be used as evidence in discrimination cases when ADS are used in connection with employment decisions like hiring, firing, or promotion. Regulators and courts will gauge how recently companies audited their ADS tools, how thorough the employer\u2019s testing was, what the results of audits showed, and whether the employer made corrections in line with the audits\u2019 findings. 
In practice, this makes regular testing and documentation essential to defending discrimination claims that implicate AI tools.<\/p>\n<p>Recordkeeping requirements<\/p>\n<p>Employers must now keep ADS-related records for at least four years. This includes retaining the data used to run ADS tools, the outputs generated (such as scores or rankings), the criteria applied to job or promotion candidates, and the results of any testing or evaluations. If a complaint is filed, an employer must hold these records even longer. Employers should review and update their data retention policies to conform to these new rules.<\/p>\n<p>Vendor and third-party liability<\/p>\n<p>The new regulations explicitly state that liability can extend to an employer\u2019s vendors or other third-party entities (including, for example, staffing agencies that use AI). If an employer\u2019s staffing partner or AI software provider uses, on the employer\u2019s behalf, an ADS tool that has a disparate impact, the employer may still be held responsible. 
Employers should review their vendor agreements, require transparency around any testing and updates, and allocate responsibility for compliance and liability in contracts.<\/p>\n<p>Next steps for employers<\/p>\n<p>\tTake inventory of all AI or algorithmic tools you currently use in connection with employment decisions such as hiring, firing, or promotion.<br \/>\n\tPartner with your employment counsel to implement and document bias testing for each tool, where appropriate under attorney-client privilege, and be prepared to show how you addressed any issues found during testing.<br \/>\n\tUpdate your data retention schedule to ensure ADS-related data is preserved for at least four years.<br \/>\n\tReview your vendor contracts and add obligations related to testing, transparency, and compliance.<\/p>\n<p>Bottom line<\/p>\n<p>California has made it clear that AI, even with its new-age appeal, must conform to long-standing discrimination laws under FEHA. Starting in October 2025, California employers using AI tools in employment decisions should be prepared to engage in bias audits, enhanced recordkeeping, and close oversight of vendors and staffing agencies. 
Now is the time for employers to position themselves to avoid the risk of costly compliance disputes.<\/p>\n<p>[<a href=\"https:\/\/www.mwe.com\/insights\/californias-new-ai-rules-under-feha-take-effect-october-1-2025\/\" target=\"_blank\" rel=\"nofollow noopener\">View source<\/a>.]<\/p>\n","protected":false},"excerpt":{"rendered":"Key takeaways Beginning October 1, 2025, California employers must comply with new Fair Employment and Housing Act (FEHA)&hellip;\n","protected":false},"author":2,"featured_media":190423,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-190422","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/190422","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=190422"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/190422\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/190423"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=190422"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=190422"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/t
ags?post=190422"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}