LessWrong · September 29
AI Companies' Positions on Regulation


Published on September 29, 2025 3:00 PM GMT

Strong regulation is not on the table, and all US frontier AI companies oppose it to varying degrees. Weak safety-relevant regulation is happening; some companies say they support it and some say they oppose it. (Some often-confused state regulation not relevant to AI safety is also happening; I assume companies oppose it, but I don't really pay attention to this.) Companies besides Anthropic support federal preemption of state laws, even without an adequate federal framework. Companies advocate for non-regulation-y things like building government capacity and sometimes export controls.

My independent impression is that current regulatory efforts—SB 53 and the RAISE Act—are too weak to move the needle nontrivially. But the experts I trust are more optimistic, and so all-things-considered I think SB 53 and the RAISE Act would be substantially good (for reasons I don't really understand).

This post is based on my resource https://ailabwatch.org/resources/company-advocacy; see that page for more primary sources. This is all based on public information. My impression is that the companies are even more anti-regulation in private than in public.

US state bills

SB 1047 (2024)

California's SB 1047 (summary by supporters) was endorsed by xAI CEO Elon Musk. It was opposed by the major AI companies.

SB 53 (2025)

SB 53 (summary) seems particularly light-touch. Anthropic supports it, and after it passed the legislature, Meta said "While there are areas for improvement, SB 53 is a step in [the right] direction." OpenAI opposes it; its letter is wrong/deceitful.

RAISE Act (2025)

New York's Responsible AI Safety and Education Act is opposed by trade groups including the Computer & Communications Industry Association (CCIA, representing Amazon, Google, and Meta), the AI Alliance (representing Meta), and Tech:NYC (representing Amazon, Google, Meta, and Microsoft). Anthropic policy lead Jack Clark is also critical.

US federal preemption

Preemption of state AI laws is supported by Meta and OpenAI, and Google endorses preemption alongside a light-touch federal framework. Preemption is also supported by trade groups including CCIA, TechNet, and INCOMPAS, as well as "Lobbyists acting on behalf of Amazon, Google, Microsoft and Meta." Preemption is opposed by Anthropic.

AI companies including OpenAI, Meta, and Google supported a proposed federal moratorium on state AI laws in August 2025.

EU AI Act

OpenAI, Google, Meta, Microsoft, and others (but not Anthropic) were caught lobbying against the relevant part of the EU AI Act. They seem to have avoided opposing it publicly. European AI companies Mistral AI and Aleph Alpha also convinced their home countries—France and Germany—to oppose the Act. See here for more links.

(After the Code of Practice was finalized, it was signed by OpenAI, Anthropic, Google, Microsoft, and Amazon. xAI signed just the safety and security chapter; Meta refused to sign.)

Super PACs

Three large pro-innovation super PACs were announced in August–September 2025: Leading the Future, supposedly with "more than $100 million" and involving OpenAI executives Greg Brockman and Chris Lehane; and Meta California and the American Technology Excellence Project, each supposedly with "tens of millions" from Meta. Leading the Future will presumably be deceptive: Lehane and a16z have historically been deceptive in political advocacy. Leading the Future plans to emulate Fairshake and is led by one of the same people; Fairshake is low-integrity.

Policies companies support

When AI companies propose policy, they generally focus on investing in AI infrastructure, government AI adoption, avoiding regulation, and sometimes export controls. Anthropic's recommendations are better for safety than other companies', despite not including real regulation;[2] for example, Anthropic sometimes recommends government eval capacity, government helping companies improve security, and transparency standards.

Misc

Anthropic & Clark

Jack Clark leads Anthropic's policy advocacy. He mostly says regulation is premature or should not be burdensome. Sometimes he emphasizes competition with China and says things like "I think the greatest risk is us [i.e. America] not using it [i.e. AI]." More generally, Anthropic basically opposes regulation that goes beyond transparency (but its advocacy is otherwise reasonable, as mentioned above).

OpenAI & Lehane

Chris Lehane leads OpenAI's policy advocacy. His political advocacy has been deceptive both recently and historically. He says:

Maybe the biggest risk here is actually missing out on the opportunity. There was a pretty significant vibe shift when people became more aware and educated on this technology and what it means.


Companies like ours have gotten pretty comfortable with how we're deploying this stuff in a responsible way, and understand the real challenge here is to make sure this opportunity is realized.

Elsewhere, he says his two big concerns are broadly distributing the benefits of AI and America beating China.

Other AI companies, including Amazon and Microsoft, also advocate against regulation (and Nvidia advocates against export controls, often deceptively). But I have less to say here.


Subscribe on Substack.

  1. ^

    Anthropic's letter to the governor said "In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us." See also Jack Clark's tweet. I think many people have misinterpreted this as more supportive than it actually is. If you believe that a bill like this is only slightly better than nothing, the correct response may be not to enact it but rather to aim for a bill with less downside in the future; indeed, that's what the governor did.

  2. ^

    One blogpost notwithstanding.


