Fortune | October 1
California Enacts New AI Law Focused on Transparency and Risk Disclosure
California has signed into law new legislation regulating artificial intelligence, requiring large AI companies to publicly disclose their plans for mitigating the potential risks posed by advanced AI models. The bill also establishes a reporting mechanism for critical safety incidents and provides whistleblower protections for AI company employees. In addition, California will move to build CalCompute, a public computing cluster for safe, ethical, and sustainable AI research. The law aims to balance public safety with industry innovation and may set a standard for AI regulation worldwide, though some argue it poses challenges for startups and could reinforce the advantages of large tech companies.

⚖️ **AI risk disclosure and greater transparency**: At the core of the new law is a requirement that large AI companies, particularly the AI giants headquartered in California, publicly disclose how they plan to address the potentially catastrophic risks of advanced AI models. Through mandatory disclosure, the law aims to give the public and regulators a clearer picture of the technology's potential impact and to push companies toward greater caution during development.

🛡️ **Safety incident reporting and employee protections**: The bill establishes a mechanism for reporting critical safety incidents, ensuring that major problems with AI systems are promptly reported and addressed. It also extends whistleblower protections to AI company employees, encouraging insiders who discover safety hazards or misconduct to raise concerns without fear of reprisal, thereby improving the overall safety of the AI industry.

🌐 **The CalCompute public computing cluster and industry impact**: California will move to build CalCompute, a government-led public computing cluster intended to support safe, ethical, and sustainable AI research and innovation. This could help lower the barriers to AI research and broaden participation. At the same time, as a hub for AI companies, California may set a global precedent with this legislation, though some startups worry that compliance costs will be too high and could entrench the market dominance of large tech companies.

California has taken a significant step toward regulating artificial intelligence with Governor Gavin Newsom signing a new state law that will require major AI companies, many of which are headquartered in the state, to publicly disclose how they plan to mitigate the potentially catastrophic risks posed by advanced AI models.

The law also creates mechanisms for reporting critical safety incidents, extends whistleblower protections to AI company employees, and initiates the development of CalCompute, a government consortium tasked with creating a public computing cluster for safe, ethical, and sustainable AI research and innovation. By compelling companies, including OpenAI, Meta, Google DeepMind, and Anthropic, to follow these new rules at home, California may effectively set the standard for AI oversight.

Newsom framed the law as a balance between safeguarding the public and encouraging innovation. In a statement, he wrote: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”

The legislation, authored by State Sen. Scott Wiener, follows a failed attempt to pass a similar AI law last year. Wiener said that the new law, which was known by the shorthand SB 53 (for Senate Bill 53), focuses on transparency rather than liability, a departure from his prior SB 1047 bill, which Newsom vetoed last year.

“SB 53’s passage marks a notable win for California and the AI industry as a whole,” said Sunny Gandhi, VP of Political Affairs at Encode AI, a co-sponsor of SB 53. “By establishing transparency and accountability measures for large-scale developers, SB 53 ensures that startups and innovators aren’t saddled with disproportionate burdens, while the most powerful models face appropriate oversight. This balanced approach sets the stage for a competitive, safe, and globally respected AI ecosystem.”

Industry reactions to the new legislation have been divided. Jack Clark, co-founder of AI company Anthropic, which backed SB 53, wrote on X: “We applaud [the California Governor] for signing [Scott Wiener’s] SB 53, establishing transparency requirements for frontier AI companies that will help us all have better data about these systems and the companies building them. Anthropic is proud to have supported this bill.” He emphasized that while federal standards are still important to prevent a patchwork of state rules, California has created a framework that balances public safety with ongoing innovation.

OpenAI, which did not endorse the bill, told news outlets it was “pleased to see that California has created a critical path toward harmonization with the federal government—the most effective approach to AI safety,” adding that if implemented correctly, the law would enable cooperation between federal and state governments on AI deployment. Meta spokesperson Christopher Sgro similarly told media the company “supports balanced AI regulation,” calling SB 53 “a positive step in that direction,” and said Meta looks forward to working with lawmakers to protect consumers while fostering innovation.

Despite being a state-level law, the California legislation will have a global reach, since 32 of the world’s top 50 AI companies are based in the state. The bill requires AI firms to report incidents to California’s Office of Emergency Services and protects whistleblowers, allowing engineers and other employees to raise safety concerns without risking their careers. SB 53 also includes civil penalties for noncompliance, enforceable by the state attorney general, though AI policy experts like Miles Brundage note these penalties are relatively weak, even compared to those enforced under the EU’s AI Act.

Brundage, who was formerly the head of policy research at OpenAI, said in an X post that while SB 53 represented “a step forward,” there was a need for “actual transparency” in reporting, stronger minimum risk thresholds, and technically robust third-party evaluations.

Collin McCune, head of government affairs at Andreessen Horowitz, also warned the law “risks squeezing out startups, slowing innovation, and entrenching the biggest players,” and said it sets a dangerous precedent for state-by-state regulation that could create “a patchwork of 50 compliance regimes that startups don’t have the resources to navigate.” Several AI companies that lobbied against the bill also made similar arguments.

California is aiming to promote transparency and accountability in the AI sector with the requirement for public disclosures and incident reporting; however, critics like McCune argue that the law could make compliance challenging for smaller firms and entrench Big Tech’s AI dominance.

Thomas Woodside, a co-founder at Secure AI Project, a co-sponsor of the law, called the concerns around startups “overblown.”

“This bill is only applying to companies that are training AI models with a huge amount of compute that costs hundreds of millions of dollars, something that tiny startups can’t do,” he told Fortune. “Reporting very serious things that go wrong, and whistleblower protections, is a very basic level of transparency; and the obligations don’t even apply to companies that have less than $500 million in annual revenue.”

