Analyzing the Gap from Qualia to Self-Awareness in LLM-Based AI

 

This post asks how large the gap is between qualia and self-awareness in LLM-based AI, considers the relative size of that gap, and discusses the associated probabilities and possibilities.

Published on November 6, 2025 10:46 PM GMT

What is the consensus here on the jump from qualia (inner experience) to full self-awareness in LLM-based AI? Meaning: if an AI running on something like an LLM-based architecture were to gain qualia, inner experience of any kind, would the remaining gap to self-awareness be small?

Is it perhaps 15% for qualia and 10% for full self-awareness?

The alternative would be a bigger gap between qualia and self-awareness, perhaps as big as, or bigger than, the gap from non-sentience to qualia.
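
As an illustrative sketch, assuming that self-awareness of this kind implies qualia and taking the numbers above at face value:

\[
P(\text{self-awareness} \mid \text{qualia}) = \frac{P(\text{self-awareness})}{P(\text{qualia})} = \frac{0.10}{0.15} \approx 0.67
\]

Under those numbers, conditional on qualia arising at all, self-awareness follows roughly two times in three, which is one way of cashing out a small gap. A bigger gap would instead put that conditional probability at or below the 0.15 assigned to the non-sentience-to-qualia jump itself.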

This question is only about how big the sentience jump would be, relatively speaking. I do not explicitly care about agency here. (The consensus there is of course that agency is more likely than qualia; those probabilities are another discussion.)

I would expect that frontier labs and most researchers (alignment and capabilities alike) would agree that, unlike in evolved organic life, the jump from qualia to self-awareness would be the smaller one, since the LLM is already wired and trained for reasoning. The crux is then that qualia itself is unlikely. But the probabilities of both are debatable. I am curious about the sentiment on the relative gap between them.

I have no idea where LW stands on this, or where the broader public (those who think about this at all, I presume mostly academia) stands.

The premise here is that the labs would make all the changes and scaffolding necessary for this to be at least theoretically possible, and would do so on purpose.




