Published on September 29, 2025 11:29 PM GMT
California Governor Gavin Newsom signed SB 53 on September 29. I think it’s a pretty great step, though I certainly hope future legislation builds on it.
I wrote up my understanding of what the text actually does; I welcome any corrections (and certainly further analysis of what this all means for AI safety)!
Very short summary
The law requires major AI companies to:
- Publish a frontier AI framework describing how the developer “approaches” assessing and mitigating catastrophic risks, defined as risks of death/serious injury to >50 people or >$1B in damage from a single incident related to CBRN uplift, autonomous crime, or loss of control, and keep the published version of the framework up to date.
- Publish model cards summarizing the assessments of catastrophic risks from the model, the role of third parties in those assessments, and how the frontier AI framework was implemented.
- Report to California's Office of Emergency Services 1) assessments of catastrophic risks from internal deployment and 2) critical safety incidents, defined as a materialization of catastrophic risks, unauthorized transfer/modification of the weights, loss of control resulting in death/bodily injury, and deceptive behavior that increases catastrophic risks.
- Allow whistleblowers to disclose information about the frontier developer's activities to the Attorney General, a federal authority, a manager, and certain other employees if they have reasonable cause to believe that those activities pose a "specific and substantial danger to the public health or safety resulting from a catastrophic risk," or that the frontier developer has violated SB 53.
- Not make “any materially false or misleading statement” about catastrophic risk from its frontier models, its management of catastrophic risk, or its compliance with its frontier AI framework.
Note that violations are punishable by fines up to $1M per violation, as enforced by the California Attorney General, and that the bill would not apply if Congress preempts state AI legislation.
Longer summary
What the bill requires of large frontier developers
“Large frontier developers” are defined as developers of models trained with >10^26 FLOP who also had >$500M in revenue the previous calendar year. They must do the following.
- Publish a "frontier AI framework" (no longer "safety and security protocol") that "describes how the large frontier developer approaches" the following:
  - "incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework,"
  - "defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk,"
    - These are defined as a "foreseeable and material risk" that a frontier model will "materially contribute to the death of, or serious injury to, more than 50 people" or more than $1B in damage from a single incident involving a frontier model providing "expert-level assistance" in creating a CBRN weapon; cyberattacks, murder, assault, extortion, or theft with "no meaningful human oversight"; or "evading the control of its frontier developer or user."
    - Explicitly excluded, as of the new amendments, are risks from information that's publicly accessible "in a substantially similar form" without a frontier model, and lawful activity of the federal government. This is a somewhat narrower scope than before, which included assistance with cyberattacks and "limited" rather than "no meaningful" human oversight.
- Publish model cards summarizing the assessments of catastrophic risks from the model, the role of third parties in those assessments, and how the frontier AI framework was implemented.
- Report "critical safety incidents" to California's Office of Emergency Services (Cal OES).
  - These are defined as including materialization of a catastrophic risk; "unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model" or "loss of control of a frontier model" that (as of the new amendments) results in "death or bodily injury"; and a frontier model using "deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk."
  - Yes, this suddenly makes Cal OES an important governing body for frontier AI.
  - OES can designate a federal incident reporting standard as sufficient to meet these requirements, and complying with that standard means a developer doesn't have to report to OES.
- The Attorney General will write an annual, anonymized/aggregated report about these disclosures to the governor and legislature.
Other notes about the bill
- All the published stuff can be redacted; developers have to justify the redactions on grounds of trade secrets, cybersecurity, public safety, or the national security of the United States, or to comply with any federal or state law.
- Some, but not all, of the above requirements also apply to "frontier developers" who aren't "large frontier developers," i.e. they've trained >10^26 FLOP models but have <$500M/yr in revenue.
- There are civil penalties for noncompliance, including violating their own published framework, to be enforced by the state AG; these are fines of up to $1M per violation.
- "This act shall not apply to the extent that it strictly conflicts with the terms of a contract between a federal government entity and a frontier developer."
- The bill could be blocked from going into effect if Congress preempts state AI regulation, which it continues to consider doing.
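As a toy illustration of the two-tier coverage rule (the function and names below are mine, not anything from the bill text), the FLOP and revenue thresholds can be sketched as:

```python
# Illustrative sketch only: these names and this function are mine, not the
# bill's. It encodes the two coverage thresholds described above.
TRAINING_FLOP_THRESHOLD = 1e26       # trained with >10^26 FLOP
REVENUE_THRESHOLD_USD = 500_000_000  # >$500M revenue in the previous calendar year

def developer_tier(max_training_flop: float, prior_year_revenue_usd: float) -> str:
    """Classify a developer under the summary's two-tier definitions."""
    if max_training_flop <= TRAINING_FLOP_THRESHOLD:
        return "not covered"
    if prior_year_revenue_usd > REVENUE_THRESHOLD_USD:
        return "large frontier developer"  # full set of obligations
    return "frontier developer"            # subset of obligations
```

So, for example, a developer with a >10^26 FLOP model but only $100M in revenue would fall in the "frontier developer" tier and face only the subset of requirements noted above.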
