AI 2 People · September 30
Bipartisan Senators Propose a Federal Program to Evaluate AI Risks

US Senators Josh Hawley and Richard Blumenthal are once again turning their attention to artificial intelligence, introducing a bill that would establish a federal program to evaluate the risks of advanced AI systems. The bill proposes a program at the Department of Energy to collect data on potential AI disasters, such as rogue systems, security breaches, or weaponization by hostile actors. It would also require developers to submit their models for review before deployment. The move marks a broader government effort to regulate a fast-moving technology, in the same vein as the consumer safety and transparency law recently passed in California. Despite White House concerns that over-regulation could hurt US competitiveness in AI, the bipartisan cooperation on AI risk underscores that the technology is viewed as a double-edged sword. The goal is to avoid repeating the social media experience, where risks were recognized only after the damage was done.

🇺🇸 Senators Josh Hawley and Richard Blumenthal have jointly introduced the Artificial Intelligence Risk Evaluation Act, which aims to establish a federal program to assess the potential risks of advanced AI systems. The bill would set up a dedicated evaluation mechanism at the Department of Energy to collect and analyze data on catastrophic outcomes AI could cause, including but not limited to rogue AI systems, large-scale security breaches, and weaponization by malicious actors.

📜 One of the bill's core requirements is that AI developers must submit their models for review before deploying them. This forward-looking provision is meant to change the tech industry's traditional "move fast and break things" approach, emphasizing risk evaluation and safety checks before a technology is rolled out at scale in order to head off potential harms.

🤝 Although Hawley and Blumenthal often differ politically, their cooperation here shows that AI risk and regulation have become an issue that crosses party lines. This is not their first collaboration: they previously teamed up on a proposal to protect content creators from infringement by AI-generated works, reflecting their shared view of AI's dual nature, capable of creativity and chaos in equal measure.

⚖️ While the White House worries that over-regulation could weaken US competitiveness in AI, especially in the race with China, the bill reflects policymakers' effort to keep pace with the technology. The author argues that this kind of "common-sense oversight" is not about stifling innovation but about ensuring that technological progress does not come with unbearable consequences, avoiding a repeat of the early days of social media, when the risks went unrecognized until after the damage was done.

Senators Josh Hawley and Richard Blumenthal are once again stepping into the AI spotlight, this time with a bill that aims to create a federal program to evaluate the risks of advanced artificial intelligence systems.

According to Axios, the Artificial Intelligence Risk Evaluation Act would set up a program at the Department of Energy to gather data on potential AI disasters—think rogue systems, security breaches, or weaponization by adversaries.

It sounds almost like science fiction, but the concerns are all too real.

And here’s the kicker: developers would be required to submit their models for review before deployment.

That’s a sharp contrast to the usual “move fast and break things” Silicon Valley mantra. It reminds me of how, just a few months back, California passed a landmark AI law focusing on consumer safety and transparency.

Both efforts point to a broader movement—government finally tightening the reins on a tech that’s been sprinting ahead of regulation.

What really struck me, though, is how bipartisan this push has become. You’d think Hawley and Blumenthal would agree on little, yet here they are singing the same tune about the risks of AI.

And it’s not their first rodeo; earlier this year, they teamed up on a proposal to shield content creators from AI-generated replicas of their work.

Clearly, they see AI as a double-edged sword—capable of creativity and chaos in equal measure.

But here’s where it gets messy. The White House has signaled that over-regulation might dampen innovation and put the U.S. behind in its AI race with China.

That tug-of-war—safety versus speed—echoes what I heard at the recent Snapdragon Summit, where chipmakers flaunted AI-driven laptops and hyped “agentic AI” like it was the next industrial revolution.

The tech world is charging ahead, and policymakers are scrambling to catch up.

Here’s my two cents: it’s refreshing to see lawmakers at least trying to wrestle with these questions before catastrophe strikes.

Sure, bills like this won’t fix everything, and they might even slow down a few flashy rollouts.

But can we really afford another “social media moment” where we realize the risks only after the damage is done?

I’d argue that common-sense oversight, like this proposal suggests, is less about stifling progress and more about ensuring that progress doesn’t come back to bite us.

So, what’s next? If this bill gains traction, we could see the Department of Energy become the unexpected gatekeeper of AI safety.

And if it fizzles, well, Silicon Valley gets a longer leash. Either way, one thing is clear: AI has officially moved from tech blogs to the Senate floor, and it’s not going back.

