If Anyone Builds It, Everyone Dies: The LessWrong Community's Consensus on AI Existential Risk

This post aims to build mutual knowledge of the LessWrong community's broad support for the book If Anyone Builds It, Everyone Dies. The author argues that, since many users may not be long-time participants, doing so helps solidify the community's shared awareness of the book's central thesis. For outside observers, it also offers a gauge of how seriously the book is taken in the community that Eliezer Yudkowsky originally founded. The post notes that LessWrong's frontpage tends to surface active discussion rather than directly signal internal support. The author explicitly endorses the book's thesis, holding that superintelligent AI built with current techniques would very likely cause human extinction, and that the book becoming a bestseller is better for the world. The author also endorses the work of MIRI (Machine Intelligence Research Institute), drawing an analogy to the CAIS (Center for AI Safety) statement on AI risk and stressing the value of such statements for building consensus and raising public awareness.

📚 **Building consensus around If Anyone Builds It, Everyone Dies**: The core purpose of this post is to make explicit, and to reinforce, the LessWrong community's broad support for the views laid out in If Anyone Builds It, Everyone Dies. The author argues that doing so strengthens members' shared awareness of the book's thesis that if anyone builds superintelligent AI, everyone dies, especially for newer members.

💡 **The urgency of AI existential risk and the value of MIRI's work**: The author strongly endorses the book's central claim that superintelligent AI built with current techniques would very likely cause human extinction, regarding this outcome as not merely possible but highly probable. The author also affirms MIRI's (Machine Intelligence Research Institute) research in this area, considers it undervalued and overlooked, and believes the book's wide reach is better for the world in expectation.

🤝 **Analogy to the CAIS statement and the value of community consensus**: The post compares this effort to the 2023 statement on AI risk published by CAIS (Center for AI Safety). The author cites the statement's call to make the risk of extinction from AI a global priority, and emphasizes that such public statements play an important role in documenting and demonstrating broad consensus across academia and industry; even without unanimity, they effectively convey the key message.

Published on September 23, 2025 5:51 PM GMT

Mutual-Knowledgeposting

The purpose of this post is to build mutual knowledge that many (most?) of us on LessWrong support If Anyone Builds It, Everyone Dies.

Inside of LW, not every user is a long-timer who's already seen consistent signals of support for these kinds of claims. A post like this could make the difference in strengthening vs. weakening the perception of how much everyone knows that everyone knows (...) that everyone supports the book.

Externally, people who wonder how seriously the book is being taken may check LessWrong and look for an indicator of how much support the book has from the community that Eliezer Yudkowsky originally founded.

The LessWrong frontpage, where high-voted posts generally reflect "whether users want to see more of a kind of content", wouldn't by default translate a large amount of internal support for IABIED into a frontpage that signals support; it would look more like an active discussion of various aspects of the book, including interesting & valid nitpicks and disagreements.

Statement of Support

I support If Anyone Builds It, Everyone Dies.

That is:

- I believe that superintelligent AI built with anything like current techniques would very likely cause human extinction.
- I believe MIRI's work on this problem is undervalued and overlooked.
- I believe the world is better off in expectation if the book becomes a bestseller.

Similarity to the CAIS Statement on AI Risk

The famous 2023 Center for AI Safety Statement on AI risk reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

I'm extremely happy that this statement exists and has so many prominent signatories. While many people considered it too obvious and trivial to need stating, many others who weren't following the situation closely (or were motivated to think otherwise) had assumed there wasn't this level of consensus on the content of the statement across academia and industry.

Notably, the statement wasn't a total consensus that everyone signed, or that everyone who signed agreed with passionately, yet it still documented a meaningfully widespread consensus, and was a hugely valuable exercise. I think LW might benefit from having a similar kind of mutual-knowledge-building Statement on this occasion.




