On Signing the Superintelligence Statement: Discussion and Interpretation

 

This article discusses why and how researchers might sign a statement in the context of superintelligence development. It emphasizes that signing is still worthwhile even if one disagrees with certain aspects of the statement, or has doubts about whether researchers should take a public stance at all. The article offers many example reasons for signing, along with ways to qualify one's support, including views on the duty of safety researchers, the desired level of detail in the statement, identification with different research paths, and concern about the risks of superintelligence. It also encourages signatories to clarify their specific position in an attached personal statement, and notes that continued signatures and public discussion matter for advancing the issue.

📝 **The value and flexibility of signing**: The article argues that signing the superintelligence statement remains meaningful even if one has doubts about its details or about the necessity of signing. The statement's purpose is to promote discussion and consensus, and each signatory can attach a short statement of support (600 characters) explaining their personal position, balancing collective action with individual expression and avoiding all-or-nothing polarization.

🤔 **Diverse reasons and positions for signing**: The article lists many potential reasons to sign, including believing that safety researchers have a duty to take a public stance, preferring a longer and more detailed statement, identifying with a particular "camp" of research paths, and worrying about dynamics around superintelligence discourse (such as polarization and conflationary alliances). Signatories need not agree with every word; they can support the statement for different reasons and from different assessments of the risk. Even a committed libertarian, or someone who thinks the risk is small but still worth acting on, can find grounds to sign.

📢 **Ongoing impact and public discussion**: The article stresses that signatures continue to grow and that every public mention can start a new conversation. Signatories can extend the statement's reach by sharing it, together with their personal statements of support, on social media or relevant forums. The article also quotes specific statements from signatories, showing how to express concern about superintelligence safety clearly and persuasively, even while holding some reservations about the statement's wording.

🚀 **Widespread concern about superintelligence safety**: At the article's core is concern about the potentially catastrophic consequences of building superintelligence, whether because humanity may fail to establish safety measures in time or because superintelligence itself carries uncontrollable risk. Signing the statement signals that the issue is taken seriously. Even those who believe that not building superintelligence would be the greater tragedy can sign, adding a note that they would prefer to proceed only once it can be done safely, which reflects a cautious approach to managing the risk.

Published on October 24, 2025 8:30 PM GMT

TL;DR: you can still just sign this statement if you agree with it. It still matters, and you can clarify your position in a statement of support (600 characters) next to your name, and you can clarify your actual full position on LW and/or elsewhere. 

Regardless of X, you can still just sign. (for various values of X...)

X= whether you agree with me that safety researchers have a duty to take a public stance. If you agree with the statement and you are working at a lab where some of your coworkers signed, please consider that signing makes it less personally costly for them to have signed.

X= whether you would rather have signed a more detailed, longer statement that included many more things lots of folks agree on. Indeed, for many of the specific reasons I tried to address in the past post, it seems like https://superintelligence-statement.org/ addresses them. Also remember that you can tell us why you signed if you are worried people will get the wrong idea (you have 600 characters).

X= which "camp" you fall into. You can sign and go about trying to build superintelligence safely, or you can sign and go about trying to get the world to not build unsafe superintelligence.

X= whether you think there indeed are such camps on LW. You can be worried about polarization and sign. You can worry about conflationary alliances and sign.

X= whether you think I was silly circulating this statement in secret for months or whether you admire the galaxy-brained covert nature of the operation. 

X= whether you are glad that there already are 33K+ signatures or whether you are sad you missed the chance of signing pre-release. One important thing to realize is that people are still signing, and if/when the statement reaches future milestones (e.g. 100K signatures and beyond), it will matter how many researchers signed[1]. And indeed, by how we choose to act upon our agreement with this statement, we will decide how many thresholds are hit and how soon.

Specifically, after signing, you can add a statement of support, and then you can post this somewhere people will see it (e.g. LW, twitter, ...)

Here are some examples of statements of support, starting with mine:


Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.

Here is Nate's: 


The race to superintelligence is suicidal. Progress shouldn't be subjected to a public veto, but technologists also shouldn't flirt with annihilation. Scientific consensus alone is not enough (any more than alchemist consensus in the year 1100 would be enough to guarantee a potion of immortality). The science of making a superintelligence beneficial is nowhere near mature. There's time pressure and reality may demand bold action from cognizant leaders (without public buy-in), but the public is right to object to the race and right to be wary of technologist consensus in this case.

Here is Tsvi's:
 

Humanity lacks the technical understanding required to build superhuman AI without the AI killing everyone. Efforts to build superhuman AI should be stopped through laws, international treaties, social norms, professional rules, and by providing alternative ways to gain the supposed benefits. I hesitated to sign the statement because it sounds like it would be ok to build superhuman AI with some scientific consensus and public buy-in. I believe superhuman AI is too dangerous to be worth risking even in many of those scenarios. But I technically agree with the statement as worded.

Here is Kaarel's[2]:

If we allow the pursuit of AGI to continue, the human era will end, humans and human institutions will probably be rendered insignificant and powerless, and plausibly simply extinct.

Even if you think that not building superintelligence would be the greatest tragedy, you can still just sign[3] and add a statement of support that goes a bit like:
 

I think not building superintelligence would be an immense tragedy, but I think the current race poses significant risks of a very bad future, and I'd prefer if we could just pause until we could build safe superintelligence.

Even if you are an uncompromising libertarian, you can still just sign and add a statement of support that goes a bit like this:
 

As a libertarian, I usually disagree with waiting for the public to approve, but letting anyone build tech that can take unilateral control of the future without at least some large fraction of libertarians on board would go against my libertarian values.

Even if you think the risk is small but worth acting upon, you can just sign and add a statement of support:

Even a 10-25% chance of extinction would be too high, let's find a way to get better odds.

You can still just sign

You can still just sign https://superintelligence-statement.org/ if you agree with it. It still matters, and you can clarify how/why you agree in a statement of support (600 characters) next to your name, and you can clarify your actual full position on LW and/or elsewhere. 
If you disagree with the statement, you can say why in the comments. 

  1. ^

    Indeed, it matters every time someone opens this page. It matters every time someone uses this statement to begin a conversation. 

  2. ^

    I personally think Kaarel's wins the prize :D

  3. ^

    Assuming you broadly agree with the statement.



