Amid the controversy over AI-generated “actress” Tilly Norwood, the State of California yesterday adopted one of the first laws to regulate the development, oversight and safety of AI.
The language in SB 53 repeatedly targets potential “catastrophic risks” posed by AI models that may “materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident.”
The bill enacts the Transparency in Frontier Artificial Intelligence Act, which sets forth mechanisms for “greater transparency, whistleblower protections and incident reporting systems” among AI developers.
Specifically, SB 53 requires that an AI developer:
1. Incorporate “national standards, international standards, and industry-consensus best practices into its frontier AI framework”;
2. Report “a summary of any assessment of catastrophic risk” that results from the use of its models;
3. Report any “critical safety incident” (the law imposes “a civil penalty for noncompliance” of up to $1 million);
4. Publish transparency reports that detail how it is incorporating the above-mentioned standards into its models, mitigating and preventing potential catastrophic risks, and identifying and responding to critical safety incidents;
5. Not interfere with or retaliate against whistleblowers.
Governor Gavin Newsom signed SB 53 into law on Monday, saying that Californians can now “have greater confidence that Frontier AI models are responsibly developed and deployed.”
Newsom last year vetoed SB 1047, which was seen as stricter.