On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA) (SB 53) into law.

The Act imposes obligations on developers of “frontier models,” defined as AI models that are:

Designed for generality of output;
Adaptable to a wide range of distinctive tasks; and
Trained on a broad data set and using a quantity of computing power greater than 10^26 integer or floating-point operations.

This computing power threshold includes computing for the original training run and any subsequent fine-tuning, reinforcement learning, or other material modifications applied to the model.
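To make the compute threshold concrete, the following is a minimal, purely illustrative sketch (in Python) of how a developer might tally cumulative training compute against the 10^26-operation line. The function names and the accounting approach are assumptions for illustration only; the Act does not prescribe any particular method.

# Hypothetical illustration only: aggregating training compute against the
# TFAIA's 10^26-operation threshold. The Act counts the original training run
# plus subsequent fine-tuning, reinforcement learning, and other material
# modifications; this accounting approach is an assumption, not statutory text.

TFAIA_COMPUTE_THRESHOLD = 10**26  # integer or floating-point operations

def total_training_compute(runs):
    """Sum operations across the initial training run and later modifications."""
    return sum(run["operations"] for run in runs)

def exceeds_compute_threshold(runs):
    """True if cumulative compute exceeds the statutory threshold.
    (A frontier model must also be general-purpose and broadly adaptable.)"""
    return total_training_compute(runs) > TFAIA_COMPUTE_THRESHOLD

# Example: 9e25 operations of pretraining plus 2e25 operations of RL fine-tuning
runs = [
    {"stage": "pretraining", "operations": 9e25},
    {"stage": "rl_fine_tuning", "operations": 2e25},
]
print(exceeds_compute_threshold(runs))  # True: 1.1e26 > 1e26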

Here are five things you should know about this novel law that regulates the most cutting-edge technologies.

1. The law is designed to evolve.

Today, the TFAIA is designed to focus primarily on well-resourced AI companies at the frontier of AI development. However, rapid advances in computing technology and continued significant investment in AI mean that more models could cross the training compute threshold over time, potentially expanding the breadth of developers within scope.

Recognizing this potential for scope creep, the TFAIA requires the California Department of Technology to assess these developments annually and to submit recommended updates to the definitions of frontier model, frontier developer, and large frontier developer (currently, a frontier developer with more than $500 million in prior-year revenue) to the legislature, so that the scope of the TFAIA evolves with industry, academic, and governmental consensus.

In principle, the thresholds should be adjusted so that only the most advanced AI models remain in scope. However, there is no guarantee they will keep pace with the rapid progression of AI training techniques. In short, a developer that is outside the TFAIA’s scope today may not remain so indefinitely.

2. All frontier developers must publish transparent disclosures.

Before deploying a new frontier model (or substantially modifying an existing one), frontier developers must publish a transparency report describing key details about the model, including the types of outputs it produces, its intended uses, and any generally applicable restrictions on its use.

The TFAIA permits redactions where necessary to protect trade secrets, cybersecurity, public safety or national security, and where required to comply with law. The developer must describe and justify any redactions in the public version and retain the unredacted information for five years.

3. The largest developers must publish and follow rigorous frontier AI frameworks.

Large frontier developers must create, implement, and conspicuously publish documented technical and organizational protocols to manage catastrophic risks. Catastrophic risks are defined as foreseeable and material risks that a frontier model will materially contribute to the death or serious injury of more than 50 people, or to more than $1 billion in loss or damage, arising from a single incident in which the frontier model evades the control of its developer or user, engages in certain cyberattacks or serious crimes, or provides expert-level assistance in the creation or release of chemical, biological, radiological, or nuclear weapons.

These frontier AI frameworks must meet a list of enumerated disclosure requirements, including how the developer incorporates national, international and industry standards; how it defines and assesses thresholds for catastrophic risk; and how it identifies and responds to critical safety incidents. Large frontier developers must also include additional framework-related disclosures in their transparency reports.

4. The Act establishes protections and reporting requirements for critical safety incidents.

The Act takes steps to mitigate risk from critical safety incidents, which include death or bodily injury resulting from misuse of a frontier model’s weights or from loss of control of a frontier model, harm resulting from a catastrophic risk, and rogue model behavior that materially increases catastrophic risk.

Frontier developers must report critical safety incidents to California’s Office of Emergency Services (Cal OES) within 15 days of discovery. Cal OES will establish the mechanism by which frontier developers or any member of the public may report such incidents. If a critical safety incident poses an imminent risk of death or serious physical injury, developers must notify an appropriate authority, such as law enforcement or a public safety agency, within 24 hours of discovery.

The Act also includes whistleblower protections, including a prohibition on retaliation, a requirement to establish anonymous internal reporting processes, and certain legal rights for whistleblowers who bring actions for violations of these protections.

5. The Act creates CalCompute, a consortium to advance safe AI.

The Act establishes a consortium to develop a framework for a public cloud computing cluster known as “CalCompute.” CalCompute is expected to be established within the University of California and is tasked with advancing safe, ethical, equitable, and sustainable AI development and deployment by fostering public-interest research and equitable innovation through expanded access to computational resources. The consortium’s initial framework is due by Jan. 1, 2027.

Looking Forward

For now, only the largest developers of the most advanced AI models are likely to be subject to the TFAIA. These companies should promptly begin preparing to meet the Act’s extensive disclosure and governance requirements ahead of the rapidly approaching effective date of Jan. 1, 2026, after which violations may be subject to civil penalties of up to $1 million per violation, enforceable by the California Attorney General.

Other AI model developers should still monitor developments closely and remain aware of how the market is reacting.
