LessWrong · October 5
AI Safety and Longtermism: The Relative Likelihood of S-Risks vs. Extinction Risk

 

This post asks about the relative likelihood of suffering risks (s-risks) — "worse-than-death" outcomes that advanced AI systems could bring about — compared with extinction risks from misaligned AI. It cites Kaj Sotala's "Suffering risks: An introduction" and the research of the Center on Long-Term Risk, and requests more recent or in-depth resources.

Published on October 5, 2025 2:38 PM GMT

I’ve been reading about existential risks from advanced AI systems, including the possibility of “worse-than-death” scenarios sometimes called suffering risks (“s-risks”). These are outcomes where a misaligned AI could cause immense or astronomical amounts of suffering rather than simply extinguishing humanity.

My question: Do researchers working on AI safety and longtermism have any informed sense of the relative likelihood of s-risks compared to extinction risks from unaligned AI?

I’m aware that any numbers here would be speculative, and I’m not looking for precise forecasts, but for references, models, or qualitative arguments. For example:

- Do most experts believe extinction is far more likely than long-lasting suffering scenarios?
- Are there published attempts to put rough probabilities on these outcomes?
- Are there any major disagreements in the field about this?

 

I’ve come across Kaj Sotala’s “Suffering risks: An introduction” and the work of the Center on Long-Term Risk, but I’d appreciate more recent or deeper resources.

Thanks in advance for any guidance.
