LessWrong · October 2
Comparing s-risks and extinction risks in AI safety and longtermism research

 

This post discusses existential risks from advanced AI systems, including "worse-than-death" scenarios known as suffering risks (s-risks). It asks how AI safety and longtermism researchers assess the relative likelihood of s-risks versus extinction risks from unaligned AI, and surveys relevant models and viewpoints.

Published on October 2, 2025 10:02 AM GMT

I’ve been reading about existential risks from advanced AI systems, including the possibility of “worse-than-death” scenarios sometimes called suffering risks (“s-risks”). These are outcomes where a misaligned AI could cause immense or astronomical amounts of suffering rather than simply extinguishing humanity.

My question: Do researchers working on AI safety and longtermism have any informed sense of the relative likelihood of s-risks compared to extinction risks from unaligned AI?

I’m aware that any numbers here would be speculative, and I’m not looking for precise forecasts, but for references, models, or qualitative arguments. For example:

- Do most experts believe extinction is far more likely than long-lasting suffering scenarios?
- Are there published attempts to put rough probabilities on these outcomes?
- Are there any major disagreements in the field about this?


I’ve come across Kaj Sotala’s “Suffering risks: An introduction” and the work of the Center on Long-Term Risk, but I’d appreciate more recent or deeper resources.

Thanks in advance for any guidance.
