AI Safety Newsletter: New Benchmark Measures AI Automation; Tens of Thousands of Experts Call for a Superintelligence Pause

 

This edition of the AI Safety Newsletter covers two major developments. First, the Center for AI Safety (CAIS) and Scale AI jointly released the Remote Labor Index (RLI), the first benchmark designed to measure AI's ability to automate real work projects, spanning fields such as architecture, product design, and game development. RLI shows that although AI performs well on specific tasks, it cannot yet automate most economically valuable work: the most capable AI agent automates only 2.5% of projects, though performance is improving steadily. Second, the Future of Life Institute (FLI) launched an open letter calling for a prohibition on developing superintelligence until there is broad scientific consensus on its safety and controllability, along with strong public buy-in. The letter has gathered more than 50,000 signatories, including top AI scientists, Nobel laureates, religious leaders, and political figures, marking a notable growth of consensus in the AI safety field.

🌐 **Remote Labor Index (RLI) released, quantifying AI automation capability:** The RLI, jointly launched by the Center for AI Safety (CAIS) and Scale AI, is the first benchmark to evaluate AI's ability to automate real work projects. By collecting a wide range of work projects from the real economy, the index aims to inform policymaking, AI research, and business decisions. RLI shows that although AI excels on existing narrow benchmarks, its ability to automate economically valuable work remains limited: the most capable AI agent currently completes only 2.5% of projects, though its automation capability is improving steadily.

🚀 **Over 50,000 sign open letter calling for a pause on superintelligence development:** The open letter launched by the Future of Life Institute (FLI) has gathered more than 50,000 signatories, including top AI scientists, Nobel laureates, religious leaders, and prominent public figures, jointly calling for a prohibition on developing superintelligence until there is broad scientific consensus on its safety and controllability, along with strong public buy-in. This is the largest signatory coalition of any AI safety open letter to date, reflecting growing attention to AI risk.

📊 **Public concern about superintelligence risk is growing:** Polling released alongside the open letter shows that roughly two in three US adults believe superintelligence should not be created until it is proven safe and controllable. The data suggests that public awareness of the potential risks of AI development is deepening, with broad attention to and concern about AI safety.

Published on October 29, 2025 4:05 PM GMT

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.


CAIS and Scale AI release Remote Labor Index

The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), which tests whether AIs can automate a wide array of real computer work projects. RLI is intended to inform policy, AI research, and businesses about the effects of automation as AI continues to advance.

RLI is the first benchmark of its kind. Previous AI benchmarks measure AIs on their intelligence and their abilities on isolated, specialized tasks, such as basic web browsing or coding. While these benchmarks measure useful capabilities, they don't measure how AIs can affect the economy. RLI is the first benchmark to collect computer-based work projects from the real economy, containing work from many different professions, such as architecture, product design, video game development, and other design work.

Examples of RLI Projects

Current AI agents fully automate very few work projects, but they are improving. AIs score highly on existing narrow benchmarks, but RLI reveals a gap in those measurements: AIs cannot yet automate most economically valuable work. The most capable AI agent automates only 2.5% of work projects on RLI, though there are signs of steady improvement over time.

Current AI agents complete at most 2.5% of projects in RLI, but are improving steadily.
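The headline figure here is simply the fraction of projects an agent automates end to end. As a rough illustration (not RLI's actual grading code, and using a hypothetical results schema), the sketch below computes such an automation rate, counting a project as automated only when its deliverable would be accepted as-is:

```python
from dataclasses import dataclass

@dataclass
class ProjectResult:
    """Outcome of one work project attempted by an AI agent (hypothetical schema)."""
    project_id: str
    deliverable_accepted: bool  # True only if a reviewer would accept the work as delivered

def automation_rate(results: list[ProjectResult]) -> float:
    """Fraction of projects fully automated: accepted deliverables / total projects."""
    if not results:
        return 0.0
    accepted = sum(r.deliverable_accepted for r in results)
    return accepted / len(results)

# Example: 1 accepted deliverable out of 40 projects gives a 2.5% automation rate,
# the same headline number RLI reports for the most capable agent.
results = [ProjectResult(f"proj-{i}", deliverable_accepted=(i == 0)) for i in range(40)]
print(f"Automation rate: {automation_rate(results):.1%}")  # -> 2.5%
```

The strictness of the acceptance criterion is what drives the metric: partial progress on a project does not count, which is why scores on RLI are far lower than on narrower task benchmarks.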

Bipartisan Coalition for Superintelligence Moratorium

The Future of Life Institute (FLI) introduced an open letter with over 50,000 signatories endorsing the following text:

We call for a prohibition on the development of superintelligence, not lifted before there is

1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.

The signatories form the broadest group to sign an open letter about AI safety in history. Among the signatories are five Nobel laureates, the two most cited scientists of all time, religious leaders, and major figures in public and political life from both the left and the right.

This statement builds on previous open letters about AI risks, such as the 2023 CAIS open letter acknowledging AI extinction risk and the earlier FLI open letter calling for a pause on AI training. While the CAIS letter was intended to establish consensus about risks from AI, and the first FLI letter called for a specific policy on a clear time frame, the broad coalition behind the new FLI letter, together with its associated polling, creates a powerful consensus about the risks of AI while also calling for action.

In the past, critics of AI safety have dismissed the concept of superintelligence and AI risks, citing a lack of mainstream scientific and public support. The breadth of people who have signed this open letter demonstrates that opinions on the matter are changing. This is confirmed by polling released concurrently with the open letter, which shows that approximately 2 in 3 US adults believe superintelligence should not be created, at least until it is proven safe and controllable.

A broad range of news outlets have covered the statement. Dean Ball and others push back on the statement on X, pointing to the lack of specific details on how a moratorium would be implemented and the difficulty of doing so. Scott Alexander and others respond, defending consensus statements as a tool for motivating the development of specific AI safety policy.

In Other News

Government

Industry

Civil Society

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions



