How to make the future better (other than by reducing extinction risk)

This article explores the key challenges and priorities human society may face after artificial general intelligence (AGI), aiming to offer guidance for building a better future. Beyond averting risks such as AI takeover and engineered pandemics, it focuses on preventing post-AGI totalitarianism, improving the governance of superintelligence projects, and regulating the development of deep-space resources. It also stresses the importance of AI value alignment, i.e. ensuring that AI has a "character" consistent with humanity's best interests, examines potential frameworks for AI rights, and considers how AI could improve human decision-making. Together, these directions are intended to help society keep its options open, steer potential lock-in effects in positive directions, and give decision-makers the clarity and incentives needed to achieve a flourishing future.

⛑️ **Preventing post-AGI totalitarianism**: Superintelligence could lead to an extreme concentration of power. The article notes that in the AGI era the value of human labour may collapse, compute becomes the key resource, and a small number of people could command cognitive abilities far beyond everyone else's. To reduce the risk of AI-enabled coups, it recommends restricting coup-assisting uses of AI, diversifying military AI suppliers, slowing autocracies through export controls, and promoting benefit-sharing mechanisms.

🏛️ **Improving the governance of superintelligence projects**: A successful national superintelligence project would wield world-changing power. It therefore needs trustworthy governance structures, ideally multilateral or widely distributed, that reflect global interests and embed checks and balances against monopoly or dictatorship. Using Intelsat (the global communications satellite network) as a model, the article proposes temporary governance structures with reauthorization clauses, which can help attract international participation.

🌌 **The importance of deep space governance**: Acquiring the resources of the solar system could give a single country or company more power than the rest of the world combined. Moreover, the vast majority of usable resources lie outside the solar system, so decisions about who owns space resources will shape almost everything that ever happens. The article recommends building international consensus around the Outer Space Treaty to make "seizers keepers" grabs of space resources clearly illegal, and exploring good alternative regimes for allocating those resources.

🌟 **AI value alignment and character**: Beyond averting takeover risk, it matters even more that AI positively influences society. This requires specifying the "model spec" for superintelligence, i.e. what character AI should have, and ensuring it actually has that character. The article argues that AI advisors should not merely serve users' narrow self-interest, especially in high-stakes domains, but should nudge people toward choices in line with "the better angels of our nature". Even if AI does take over, we should ensure it treats humans well and creates a more flourishing AI civilisation.

⚖️ **AI rights and their consequences**: Granting AIs rights, for example letting them make contracts as corporations do, may be economically useful. But how AI rights are defined will deeply affect the risk of AI takeover, the extent to which AI decision-making shapes society, and the wellbeing of AIs themselves if they become conscious. The article notes that most future beings will likely be digital, and that early legal decisions will set precedents for how AIs are treated, while huge questions remain about what a society of humans and superintelligent AIs would look like.

Published on August 15, 2025 3:40 PM GMT

What projects today could most improve a post-AGI world?

In “How to make the future better”, I lay out some areas I see as high-priority, beyond reducing risks from AI takeover and engineered pandemics.

These areas include:

- Preventing post-AGI autocracy
- Governance of ASI projects
- Deep space governance
- AI value-alignment
- AI rights
- Deliberative AI

Here's an overview.

First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and whole economy) can in principle be aligned to a single person.

The risk from AI-enabled coups in particular is detailed at length here. To reduce this risk, we can try to introduce constraints on coup-assisting uses of AI, diversify military AI suppliers, slow autocracies via export controls, and promote credible benefit-sharing.

Second, governance of ASI projects. If there’s a successful national project to build superintelligence it will wield world-shaping power. We therefore need governance structures—ideally multilateral or at least widely distributed—that can be trusted to reflect global interests, embed checks and balances, and resist drift toward monopoly or dictatorship. Rose Hadshar and I give a potential model here: Intelsat, a successful US-led multilateral project to build the world’s first global communications satellite network.

What’s more, for any new major institutions like this, I think we should make their governance explicitly temporary, with reauthorization clauses stating that the law or institution must be reauthorized after some period of time.

Intelsat gives an illustration: it was created under “interim agreements”; after five years, negotiations began for “definitive agreements”, which came into force four years after that. The fact that the initial agreements were only temporary helped get non-US countries on board.

Third, deep space governance. This is crucial for two reasons: (i) the acquisition of resources within our solar system is a way in which one country or company could get more power than the rest of the world combined, and (ii) almost all the resources that can ever be used are outside of our solar system, so decisions about who owns these resources are decisions about almost everything that will ever happen.

Here, we could try to prevent lock-in, by pushing for international understanding of the Outer Space Treaty such that de facto grabs of space resources (“seizers keepers”) are clearly illegal.

Or, assuming the current “commons” regime breaks down given how valuable space resources will become, we could try to figure out in advance what a good alternative regime for allocating space resources might look like.

Fourth, working on AI value-alignment. Though corrigibility and control are important to reduce takeover risk, we also want to focus on ensuring that the AI we create positively influences society in the worlds where it doesn’t take over. That is, we need to figure out the “model spec” for superintelligence - what character it should have - and how to ensure it has that character.

I think we want AI advisors that aren’t sycophants, and aren’t merely trying to fulfill their users’ narrow self-interest - at least in the highest-stakes situations, like AI for political advice. Instead, we should at least want them to nudge us to act in accordance with the better angels of our nature.

(And, though it might be more difficult to achieve, we can also try to ensure that, even if superintelligent AI does take over, it (i) treats humans well, and (ii) creates a more-flourishing AI-civilisation than it would have done otherwise.)

Fifth, AI rights. Even just for the mundane reason that it will be economically useful to give AIs rights to make contracts (etc), as we do with corporations, I think it’s likely we’ll soon start giving AIs at least some rights.

But what rights are appropriate? An AI rights regime will affect many things: the risk of AI takeover; the extent to which AI decision-making guides society; and the wellbeing of AIs themselves, if and when they become conscious.

In the future, it’s very likely that almost all beings will be digital. The first legal decisions we make here could set precedent for how they’re treated. But there are huge unresolved questions about what a good society involving both human beings and superintelligent AIs would look like. We’re currently stumbling blind into one of the most momentous decisions that will ever be made.

Finally, deliberative AI. AI has the potential to be enormously beneficial for our ability to think clearly and make good decisions, both individually and collectively. (And, yes, has the ability to be enormously destructive here, too.)

We could try to build and widely deploy AI tools for fact-checking, forecasting, policy advice, macrostrategy research and coordination; this could help ensure that the most crucial decisions are made as wisely as possible.

I’m aware that there are a lot of different ideas here, and that these are just potential ideas - more proof of concept than fully fleshed-out proposals. But my hope is that work on these areas - taking them from inchoate to tractable - could help society to keep its options open, to steer any potential lock-in events in better directions, and to equip decision-makers with the clarity and incentives needed to build a flourishing, rather than a merely surviving, future.

To get regular updates on Forethought’s research, you can subscribe to our Substack newsletter here.



