LessWrong · September 24
AI Futures: From Centralization to a Vision of Decentralization, Diversity, and Durability


This article explores an alternative picture of AI's future, distinct from the common vision of a centralized superintelligence. The author argues that AI's future is more likely to be decentralized, diverse, and durable, rather than ending in a single utopian or extinction-level outcome. Although AI capability progress may slow in the near term due to training and architectural limits, by the mid-2030s near-AGI systems will have proliferated widely, much as Bitcoin has. At that point, the focus of responding to AI threats will shift from "mitigation" to "robustness," and society will need to adapt to constant, uncontrollable threats. The author proposes two possible survival modes: civilizational persistence through massively redundant computing structures, or the formation of semi-independent, pluralistic city-states each governed by its own AI systems.

💡 **AI's future is not a single centralized narrative**: The article challenges the prevailing view of AI's future as either a single superintelligence delivering utopia or a catastrophic loss of control. The author proposes a more realistic form: decentralized, diverse, and durable. This future is not necessarily good, but it is worth planning for.

⏳ **A short-term slowdown enables proliferation**: The author predicts that, due to training and architectural limits, AI capabilities are unlikely to grow explosively in the near term (e.g., before 2030). This slowdown gives "near-AGI" systems, hardware, and related know-how time to spread widely, so that when breakthroughs arrive they are hard to monopolize or suppress.

🛡️ **From "mitigation" to "robustness"**: As AI proliferates widely, traditional "mitigation" strategies (bans, treaties) will fail. The focus will shift to "robustness": how to survive and adapt in a world of constant, uncontrollable threats. This entails fundamental adjustments to social structure.

🌍 **Two possible survival modes: redundancy and plurality**: The article sketches two hypothetical future societies. One achieves civilizational persistence through massively redundant computing structures (e.g., compute units distributed through space); the other forms semi-independent, pluralistic city-states, each driven by its own AI systems, using diversity to avoid a single point of failure.

⚙️ **The complexity and uncertainty of an AI future**: The eventual AI future will be neither a single utopia nor a dystopia, but a complex patchwork of human and AI societies. Survival will depend on redundancy and adaptability. It is a challenging world, yet one that may offer a degree of liberty, and its chaos and unpredictability sit closer to how history actually unfolds.

Published on September 23, 2025 10:24 PM GMT

When people talk about AI futures, the picture is usually centralized. Either a single aligned superintelligence replaces society with something utopian and post-scarcity, or an unaligned one destroys us, or maybe a malicious human actor uses a powerful system to cause world-ending harm.

Those futures might be possible. However, there’s another shape of the future I keep coming back to, which I almost never see described. The adjectives I’d use are: decentralized, diverse, and durable. I don't think this future is necessarily good, but I do think it’s worth planning for.

Timelines and the Short-Term Slowdown

I don’t think we’re on extremely short timelines (e.g. AGI before 2030). I expect a small slowdown in capabilities progress.

Two reasons:

1. Training limits. Current labs know how to throw resources at problems with clear, verifiable reward signals. This improves performance on those tasks, but many of the skills that would make systems truly economically transformative are difficult to reinforce this way.
2. Architectural limits. Transformers with in-context learning are not enough for lifelong, agentive competence. I think something more like continual learning over long-context, human-produced data will be needed.

Regardless of the specifics, I do believe these problems can be solved. However, I don't think they can be solved before the early or mid-2030s.

Proliferation as the Default

The slowdown gives time for “near-AGI” systems, hardware, and know-how to spread widely. So when the breakthroughs arrive, they don’t stay secret:

By the mid-to-late 2030s, AGI systems have proliferated much like Bitcoin: widely distributed, hard to suppress, and impossible to recall.

From Mitigation to Robustness

The early response to advanced AI will focus on mitigation: bans, treaties, corporate coordination, activist pressure. This echoes how the world handled nuclear weapons: trying to contain them, limit their spread, and prevent use. For nukes, mitigation was viable because proliferation was slow and barriers to entry were high.

With AI, those conditions don’t hold. Once systems are everywhere, and once attacks (both human-directed and autonomous) become routine, the mitigation framing collapses.

With suppression no longer possible, the central question changes from “How do we stop this from happening?” to “How do we survive and adapt in a world where this happens every day?”

At this point our concerns shift from mitigation to robustness: what does a society look like when survival depends on enduring constant and uncontrollable threats?

Civilizational Adaptations

I don’t think there’s a clean picture of what the world will look like if proliferation really takes hold. It will be strange in ways that are hard to anticipate. The most likely outcome is probably not persistence at all, but extinction.

But if survival is possible, the worlds that follow may look very different from anything we’re used to. Here are two hypotheticals I find useful:

1. Redundant persistence: civilization endures through massively redundant computing structures, such as compute units distributed through space, so that no single attack or failure can end it.
2. Pluralistic city-states: semi-independent polities, each managed by its own AI systems, whose diversity avoids any single point of failure.

I don’t think of these conclusions as predictions per se, just sketches of what survival in such a world might look like. They’re examples of the kind of weird outcomes we might find ourselves in.

The Character of the World

There isn’t one dominant system. Instead there’s a patchwork of human and AI societies. Survival depends on redundancy and adaptation. It’s a world of constant treachery and defense, but also of diversity and (in some sense) liberty from centralized control. It is less a utopia or dystopia and more just a mess. However, it is a vision of the future that feels realistic in the chaotic way history actually tends to unfold.


