Newsroom Anthropic September 13
U.S. Releases AI Action Plan to Consolidate Its AI Leadership

The White House has released "Winning the Race: America's AI Action Plan," a strategy intended to consolidate the United States' lead in artificial intelligence. The plan focuses on accelerating AI infrastructure buildout, driving AI adoption across the federal government, and strengthening safety testing and coordination. Its contents echo recommendations previously submitted by Anthropic and other companies, emphasizing the importance of AI infrastructure and energy supply, and proposing to spur AI innovation by streamlining federal procurement and issuing requests for information. The plan also addresses the broad sharing of AI's economic benefits and guards against potential risks, including support for research on AI interpretability, control systems, and adversarial robustness. Anthropic argues that, beyond technical testing, transparency standards for AI development and strict export controls are equally critical to maintaining American AI leadership.

🚀 **Accelerating AI infrastructure and federal adoption**: The plan prioritizes AI infrastructure and federal government adoption of AI, aiming to address the energy and data-center buildout that AI development requires, streamline federal procurement, and remove obstacles to deploying AI systems. This helps prevent American AI developers from relocating operations overseas because of energy constraints, thereby protecting sensitive technology.

💡 **Promoting broadly shared benefits from AI**: The plan supports the National AI Research Resource (NAIRR) pilot, ensuring that students and researchers across the country can take part in frontier AI research. It also emphasizes retraining for displaced workers and AI pre-apprenticeship programs, so that all Americans can benefit from AI's development, and commits to broadly sharing AI's economic gains through economic indices and related programs.

🛡️ **Strengthening secure AI development and risk preparedness**: The plan emphasizes defending against misuse of powerful AI models and preparing for future AI risks, in particular by supporting research on AI interpretability, AI control systems, and adversarial robustness. It also recognizes the work of the National Institute of Standards and Technology (NIST) Center for AI Standards and Innovation (CAISI) in evaluating frontier models and addressing national security risks, and recommends continued investment there to counter potential risks such as AI-assisted biological weapons development.

⚖️ **Establishing AI development transparency and a national standard**: Anthropic argues that, beyond testing, AI development transparency requirements, such as public reporting on safety testing and capability assessments, are essential to responsible AI development. While the plan references CAISI's work, Anthropic would like to see more on frontier-model transparency and advocates a single national standard, rather than a patchwork of laws, to ensure that AI development is safe and trustworthy.

📈 **Maintaining strong export controls to preserve AI leadership**: The plan recognizes the importance of denying foreign adversaries access to advanced AI computing capability. Anthropic strongly supports this and is concerned about recent policy changes allowing export of Nvidia H20 chips to China, arguing that the H20's unique computing capability would help Chinese firms make up for their shortage of AI chips and erode America's AI advantage. Anthropic therefore recommends maintaining export controls on the H20 to secure American leadership in AI.

Today, the White House released "Winning the Race: America's AI Action Plan"—a comprehensive strategy to maintain America's advantage in AI development. We are encouraged by the plan’s focus on accelerating AI infrastructure and federal adoption, as well as strengthening safety testing and security coordination. Many of the plan’s recommendations reflect Anthropic’s response to the Office of Science and Technology Policy’s (OSTP) prior request for information. While the plan positions America for AI advancement, we believe strict export controls and AI development transparency standards remain crucial next steps for securing American AI leadership.

Accelerating AI infrastructure and adoption

The Action Plan prioritizes AI infrastructure and adoption, consistent with Anthropic’s submission to OSTP in March.

We applaud the Administration's commitment to streamlining data center and energy permitting to address AI’s power needs. As we stated in our OSTP submission and at the Pennsylvania Energy and Innovation Summit, without adequate domestic energy capacity, American AI developers may be forced to relocate operations overseas, potentially exposing sensitive technology to foreign adversaries. Our recently published “Build AI in America” report details the steps the Administration can take to accelerate the buildout of our nation’s AI infrastructure, and we look forward to working with the Administration on measures to expand domestic energy capacity.

The Plan’s recommendations to increase the federal government's adoption of AI also include proposals that are closely aligned with Anthropic’s policy priorities and recommendations to the White House. These include:

- Tasking the Office of Management and Budget (OMB) to address resource constraints, procurement limitations, and programmatic obstacles to federal AI adoption.
- Launching a Request for Information (RFI) to identify federal regulations that impede AI innovation, with OMB coordinating reform efforts.
- Updating federal procurement standards to remove barriers that prevent agencies from deploying AI systems.
- Promoting AI adoption across defense and national security applications through public-private collaboration.

Democratizing AI’s benefits

We are aligned with the Action Plan’s focus on ensuring broad participation in and benefit from AI’s continued development and deployment.

The Action Plan’s continuation of the National AI Research Resource (NAIRR) pilot ensures that students and researchers across the country can participate in and contribute to the advancement of the AI frontier. We have long supported the NAIRR and are proud of our partnership with the pilot program. Further, the Action Plan’s emphasis on rapid retraining programs for displaced workers and pre-apprenticeship AI programs recognizes the errors of prior technological transitions and demonstrates a commitment to delivering AI’s benefits to all Americans.

Complementing these proposals are our efforts to understand how AI is transforming, and how it will transform, our economy. The Economic Index and the Economic Futures Program aim to provide researchers and policymakers with the data and tools they need to ensure AI’s economic benefits are broadly shared and risks are appropriately managed.

Promoting secure AI development

Powerful AI systems are going to be developed in the coming years. The plan’s emphasis on defending against the misuse of powerful AI models and preparing for future AI-related risks is both appropriate and welcome. In particular, we commend the Administration’s prioritization of supporting research into AI interpretability, AI control systems, and adversarial robustness. These lines of research are essential to managing powerful AI systems responsibly.

We're glad the Action Plan affirms the important work of the National Institute of Standards and Technology's Center for AI Standards and Innovation (CAISI) in evaluating frontier models for national security issues, and we look forward to continuing our close partnership with them. We encourage the Administration to continue to invest in CAISI. As we noted in our submission, advanced AI systems are demonstrating concerning improvements in capabilities relevant to biological weapons development. CAISI has played a leading role in developing testing and evaluation capabilities to address these risks. We encourage focusing these efforts on the most unique and acute national security risks that AI systems may pose.

The need for a national standard

Beyond testing, we believe basic AI development transparency requirements, such as public reporting on safety testing and capability assessments, are essential for responsible AI development. Leading AI model developers should be held to basic, publicly verifiable standards for assessing and managing the catastrophic risks posed by their systems. Our proposed framework for frontier model transparency focuses on these risks. We would have liked to see the plan do more on this topic.

Leading labs, including Anthropic, OpenAI, and Google DeepMind, have already implemented voluntary safety frameworks, which demonstrates that responsible development and innovation can coexist. In fact, with the launch of Claude Opus 4, we proactively activated ASL-3 protections to prevent misuse for chemical, biological, radiological, and nuclear (CBRN) weapons development. This precautionary step shows that far from slowing innovation, robust safety protections help us build better, more reliable systems.

We share the Administration’s concern about overly prescriptive regulatory approaches creating an inconsistent and burdensome patchwork of laws. Ideally, these transparency requirements would come from the government in the form of a single national standard. However, in line with our stated belief that a ten-year moratorium on state AI laws is too blunt an instrument, we continue to oppose proposals that would prevent states from enacting measures to protect their citizens from potential harms caused by powerful AI systems should the federal government fail to act.

Maintaining strong export controls

The Action Plan states that “denying our foreign adversaries access to [Advanced AI compute] . . . is a matter of both geostrategic competition and national security.” We strongly agree. That is why we are concerned with the Administration’s recent reversal on export of the Nvidia H20 chips to China.

AI development has been defined by scaling laws: the intelligence and capability of a system is determined by the scale of its compute, energy, and data inputs during training. While these scaling laws continue to hold, the newest and most capable reasoning models have demonstrated that AI capability also scales with the amount of compute made available to a system as it works on a given task, known as “inference.” The amount of compute available during inference is limited by a chip’s memory bandwidth. While the H20’s raw computing power is exceeded by chips made by Huawei, as Commerce Secretary Lutnick and Under Secretary Kessler recently testified, Huawei continues to struggle with production volume, and no domestically produced Chinese chip matches the H20’s memory bandwidth.
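To make the bandwidth point concrete, here is a minimal back-of-envelope sketch (ours, not part of the Action Plan or the testimony cited above): during autoregressive decoding, each generated token requires streaming roughly the full set of model weights from memory, so a chip's memory bandwidth caps token throughput regardless of how many FLOPS it offers. The model size, precision, and bandwidth figures below are illustrative assumptions, not chip specifications.

```python
# Back-of-envelope sketch: why memory bandwidth bounds inference throughput.
# All figures below are illustrative assumptions, not chip specifications.

def max_decode_tokens_per_second(params_billion: float,
                                 bytes_per_param: float,
                                 bandwidth_tb_per_s: float) -> float:
    """Roofline-style upper bound for autoregressive decoding.

    Generating one token requires reading (roughly) every model weight
    from memory once, so throughput is capped at bandwidth / model size.
    """
    model_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes_per_s = bandwidth_tb_per_s * 1e12
    return bandwidth_bytes_per_s / model_bytes

# Hypothetical 70B-parameter model served in 8-bit precision:
print(max_decode_tokens_per_second(70, 1.0, 4.0))  # ~57 tokens/s at 4 TB/s
print(max_decode_tokens_per_second(70, 1.0, 2.0))  # ~29 tokens/s at 2 TB/s
```

The estimate ignores batching and KV-cache traffic, but it captures why memory bandwidth, rather than raw compute, is the binding constraint in the argument above: halving bandwidth halves the per-chip ceiling on inference throughput.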

As a result, the H20 provides unique and critical computing capabilities that would otherwise be unavailable to Chinese firms, and will compensate for China’s otherwise major shortage of AI chips. To allow export of the H20 to China would squander an opportunity to extend American AI dominance just as a new phase of competition is starting. Moreover, exports of U.S. AI chips will not divert the Chinese Communist Party from its quest for self-reliance in the AI stack.

To that end, we strongly encourage the Administration to maintain controls on the H20 chip. These controls are consistent with the export controls recommended by the Action Plan and are essential to securing and growing America’s AI lead.

Looking ahead

The alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential and the urgent actions needed to sustain American leadership.

We look forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks and maintaining strong export controls. Together, we can ensure that powerful AI systems are developed safely in America, by American companies, reflecting American values and interests.

For more details on our policy recommendations, see our full submission to OSTP, our ongoing work on responsible AI development, and our recent report on increasing domestic energy capacity.
