A Community Statement of Support for If Anyone Builds It, Everyone Dies

This post aims to build mutual knowledge of the LessWrong community's broad support for If Anyone Builds It, Everyone Dies. Through this statement, both new and longtime users can clearly gauge how widely the community endorses the book's core claims, and outsiders can see that LessWrong takes AI risk seriously. As CAIS did with its Statement on AI Risk, this post seeks to document and consolidate the community's internal consensus on the existential risks AI may pose, underscore their importance, and encourage broader discussion and awareness.

📚 Building community consensus: The core purpose of this post is to establish and consolidate consensus within the LessWrong community in support of If Anyone Builds It, Everyone Dies (IABIED). This helps all members, especially new users, see that the community broadly endorses the book's key claim: that the emergence of superintelligent AI within the next one to two years could carry a risk of 15% or more of wiping out all of humanity in short order.

🌍 External signaling and community endorsement: The statement serves not only internal knowledge-syncing; it also sends a clear signal outward that the LessWrong community (the platform founded by Eliezer Yudkowsky) takes AI risk seriously. For outside observers, it offers an important indicator of how much weight the community gives the book.

💡 The CAIS analogy and its value: The post compares this effort to the 2023 CAIS Statement on AI Risk, highlighting the value of such "mutual-knowledge-building" statements. Although the CAIS statement may have seemed self-evident, it successfully documented a broad consensus across academia and industry. Likewise, a LessWrong statement of support for IABIED can effectively document and convey the community's widespread concern about existential risk from AI, even if not every member agrees with every detail.

Published on September 23, 2025 5:51 PM GMT

Mutual-Knowledgeposting

The purpose of this post is to build mutual knowledge that many (most?) of us on LessWrong support If Anyone Builds It, Everyone Dies.

Inside LW, not every user is a long-timer who has already seen consistent signals of support for these kinds of claims. A post like this could make the difference between strengthening and weakening the perception of how much everyone knows that everyone knows (...) that everyone supports the book.

Externally, people who wonder how seriously the book is being taken may check LessWrong and look for an indicator of how much support the book has from the community that Eliezer Yudkowsky originally founded.

The LessWrong frontpage, where high-voted posts generally reflect "whether users want to see more of a kind of content", wouldn't by default turn a large amount of internal support for IABIED into a frontpage that signals support; it would more likely show an active discussion of various aspects of the book, including interesting and valid nitpicks and disagreements.

Statement of Support

I support If Anyone Builds It, Everyone Dies.

That is:

Similarity to the CAIS Statement on AI Risk

The famous 2023 Center for AI Safety Statement on AI Risk reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

I'm extremely happy that this statement exists and has so many prominent signatories. While many people considered it too obvious and trivial to need stating, many others who weren't following the situation closely (or were motivated to think otherwise) had assumed there wasn't this level of consensus on the content of the statement across academia and industry.

Notably, the statement wasn't a total consensus that everyone signed, or that everyone who signed agreed with passionately, yet it still documented a meaningfully widespread consensus, and was a hugely valuable exercise. I think LW might benefit from having a similar kind of mutual-knowledge-building Statement on this occasion.


