Anthropic Newsroom · September 13
New Developments in California AI Regulation

California's SB 53 would strengthen AI regulation by requiring large companies to disclose risks and establish safety frameworks. The bill is built on a 'trust but verify' principle, aiming to improve transparency around AI safety while exempting startups. Companies such as Anthropic already follow similar safety practices, and SB 53 would make these standards legally binding. The bill focuses on the most powerful AI systems and underscores the need for regulation at the federal level.

🔍 SB 53 requires large AI companies to develop and publish safety frameworks detailing how they manage, assess, and mitigate catastrophic risks, meaning risks that could cause mass casualties or substantial economic damage.

📊 The bill requires companies to publish transparency reports summarizing their catastrophic risk assessments and how they are fulfilling their frameworks, along with reports of critical safety incidents, keeping the public informed.

🛡️ SB 53 provides clear whistleblower protections covering violations of its requirements as well as specific threats to public health and safety, and sets monetary penalties to hold companies to their commitments.

🤝 The bill focuses on the development of the most powerful AI systems while exempting resource-constrained startups, avoiding unreasonable regulatory burdens.

🌐 While California's move underscores the need for federal regulation, its transparency requirements set a baseline for AI safety and push toward more responsible AI governance worldwide.

Anthropic is endorsing SB 53, the California bill that governs powerful AI systems built by frontier AI developers like Anthropic. We’ve long advocated for thoughtful AI regulation, and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation (SB 1047). While we believe that frontier AI safety is best addressed at the federal level rather than through a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.

Governor Newsom assembled the Joint California Policy Working Group—a group of academics and industry experts—to provide recommendations on AI governance. The working group endorsed an approach of ‘trust but verify’, and Senator Scott Wiener’s SB 53 implements this principle through disclosure requirements rather than the prescriptive technical mandates that plagued last year's efforts.

What SB 53 achieves

SB 53 would require large companies developing the most powerful AI systems to:

- Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
- Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models.
- Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
- Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
- Be publicly accountable for the commitments made in their frameworks or face monetary penalties.

These requirements would formalize practices that Anthropic and many other frontier AI companies already follow. At Anthropic, we publish our Responsible Scaling Policy, detailing how we evaluate and mitigate risks as our models become more capable. We release comprehensive system cards that document model capabilities and limitations. Other frontier labs (Google DeepMind, OpenAI, Microsoft) have adopted similar approaches while vigorously competing at the frontier. Now all covered developers will be legally held to this standard. The bill also appropriately focuses on large companies developing the most powerful AI systems, while providing exemptions for startups and smaller companies that are less likely to develop powerful models and should not bear unnecessary regulatory burdens.

SB 53’s transparency requirements will have an important impact on frontier AI safety. Without them, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete. But with SB 53, developers can compete while remaining transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional.

Looking ahead

SB 53 provides a strong regulatory foundation, but we can and should build upon this progress in the following areas, and we look forward to working with policymakers to do so:

- The bill currently decides which AI systems to regulate based on how much computing power (FLOPS) was used to train them. The current threshold (10^26 FLOPS) is an acceptable starting point, but there’s always a risk that some powerful models may not be covered.
- Similarly, developers should be required to provide greater detail about the tests, evaluations, and mitigations they undertake. When we share our safety research, document our red-team testing, and explain our deployment decisions—as we have done alongside industry players via the Frontier Model Forum—it strengthens rather than weakens our work.
- Lastly, regulations need to evolve as AI technology advances. Regulators should have the ability to update rules as needed to keep up with new developments and maintain the right balance between safety and innovation.

We commend Senator Wiener and Governor Newsom for their leadership on responsible AI governance. The question isn't whether we need AI governance—it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former. We encourage California to pass it, and we look forward to working with policymakers in Washington and around the world to develop comprehensive approaches that protect public interests while maintaining America's AI leadership.
