Should we aim for flourishing over mere survival? The Better Futures series.

An essay series on “Better Futures” asks whether we should set our sights on survival or on flourishing. It argues that while survival matters, the future’s flourishing may be more valuable and more neglected. Using a two-factor model (the chance of surviving, and the value of the future if we survive), the author points out that even if we successfully avoid existential risks, the future’s value will still fall far short if we fail to flourish fully. Compared with survival, flourishing may have far more room for improvement, so we should focus more on guiding society toward a truly wonderful future rather than merely ensuring survival. The piece also discusses the potential challenges of this shift in focus and how to make it more tractable.

🌟 Core argument: the value of the future has two components - the chance of survival and the degree of flourishing if we survive. The piece argues that focusing on raising the future’s flourishing may matter more than merely ensuring survival, because we may be closer to the “ceiling” on survival while overlooking the opportunity to greatly raise the future’s potential.

📊 Valuation model: the piece introduces a simple two-factor model that treats the value of the future as the product of the chance of “Surviving” and “the value of the future, if we Survive”. The author argues that even if existential risk is averted, overall value remains limited whenever the future’s realized value (its degree of flourishing) falls far below its potential, underscoring the importance of flourishing.

⚖️ Flourishing is neglected: unlike the strong focus on survival, raising the future’s flourishing often lacks intrinsic social and personal motivation, and is easily overlooked even where major moral or social progress may be possible. The author notes that the effective altruism community, too, generally leans toward averting existential risk.

🚀 Tractability and directions for effort: the piece stresses that although the tractability of “better futures” work is still unclear, concerted effort made once-intractable areas like AI safety and biorisk tractable. The author calls for exploring ways to improve the future’s wellbeing across AI capabilities, decision-making and coordination, AI ethics and governance, among other areas.

Published on August 4, 2025 2:28 PM GMT

Today, Forethought and I are releasing an essay series called Better Futures.[1] It’s been something like eight years in the making, so I’m pretty happy it’s finally out! It asks: when looking to the future, should we focus on surviving, or on flourishing?

In practice at least, future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But maybe we should focus on future flourishing, instead. 

Why? 

Well, even if we survive, we probably just get a future that’s a small fraction as good as it could have been. We could, instead, try to help guide society to be on track to a truly wonderful future.   

That is, I think there’s more at stake when it comes to flourishing than when it comes to survival. So maybe that should be our main focus.

The whole essay series is out today. But I’ll post summaries of each essay over the course of the next couple of weeks. And the first episode of Forethought’s video podcast is on the topic, and out now, too.

The first essay is Introducing Better Futures: along with the supplement, it gives the basic case for focusing on trying to make the future wonderful, rather than just ensuring we get any ok future at all. It’s based on a simple two-factor model: that the value of the future is the product of our chance of “Surviving” and of the value of the future, if we do Survive, i.e. our “Flourishing”. 
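In symbols (a minimal restatement; the notation is mine, not the essays’): let p be our chance of Surviving, and let f be the expected fraction of a best feasible future’s value we achieve if we do Survive. The model then values the future, as a fraction of the best feasible future, at

$$ V = p \times f $$

so low Flourishing (small f) caps the value of the future just as surely as low Survival odds (small p) do.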

(“not-Surviving”, here, means anything that locks us into a near-zero-value future in the near term: extinction from a bio-catastrophe counts, but if valueless superintelligence disempowers us without causing human extinction, that counts, too. I think this is how “existential catastrophe” is often used in practice.)

The key thought is: maybe we’re closer to the “ceiling” on Survival than we are to the “ceiling” on Flourishing.

Most people (though not everyone) think we’re much more likely than not to Survive this century. Metaculus puts extinction risk at about 4%; a survey of superforecasters put it at 1%. Toby Ord put total existential risk this century at 16%.

[Chart from The Possible Worlds Tree.]

In contrast, what’s the value of Flourishing? That is, setting the value of near-term extinction at 0, what percentage of the value of a best feasible future should we expect to achieve? In the two essays that follow, Fin Moorhouse and I argue that it’s low.

And if we are farther from the ceiling on Flourishing, then the problem of non-Flourishing is much larger in scale than the problem of not-Surviving.

To illustrate: suppose our Survival chance this century is 80%, but the value of the future conditional on survival is only 10%.


If so, then the problem of non-Flourishing is 36x greater in scale than the problem of not-Surviving. 
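To spell out the arithmetic behind that figure (my reconstruction, using the notation above, with p = 0.8 and f = 0.1): fully solving not-Surviving (raising p to 1) would gain (1 − 0.8) × 0.1 = 0.02 of the best feasible future’s value, while fully solving non-Flourishing (raising f to 1) would gain 0.8 × (1 − 0.1) = 0.72. The ratio of the two is

$$ \frac{0.8 \times 0.9}{0.2 \times 0.1} = \frac{0.72}{0.02} = 36. $$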

(If you have a very high “p(doom)” then this argument doesn’t go through, and the essay series will be less interesting to you.)

The importance of Flourishing can be hard to think clearly about, because the absolute value of the future could be so high while we achieve only a small fraction of what is possible. But it’s the fraction of value achieved that matters. Given how I define quantities of value, it’s just as important to move from a 50% to 60%-value future as it is to move from a 0% to 10%-value future.

We might even achieve a world that’s common-sensically utopian, while still missing out on almost all possible value. 

In medieval myth, there’s a conception of utopia called “Cockaigne” - a land of plenty, where everyone stays young, and you can eat as much food and have as much sex as you like.

We in rich countries today live in societies that medieval peasants would probably have regarded as Cockaigne. But we’re very, very far from a perfect society. Similarly, what we might think of as utopia today could nonetheless barely scrape the surface of what is possible.

All things considered, I think there’s quite a lot more at stake when it comes to Flourishing than when it comes to Surviving.

I think that Flourishing is likely more neglected, too. The basic reason is that the latent desire to Survive (in this sense) is much stronger than the latent desire to Flourish. Most people really don’t want to die, or to be disempowered in their lifetimes. So, for existential risk to be high, there has to be some truly major failure of rationality going on. 

For example, those in control of superintelligent AI (and their employees) would have to be deluded about the risk they are facing, or have unusual preferences such that they’re willing to gamble with their lives in exchange for a bit more power. Or consider the United States’ aggregate willingness to pay to avoid a 0.1 percentage point chance of a catastrophe that would kill everyone: it’s over $1 trillion. Warning shots could at least partially unleash that latent desire, unlocking enormous societal attention.

In contrast, how much latent desire is there to make sure that people in thousands of years’ time haven’t made some subtle but important moral mistake? Not much. Society could be clearly on track to make some major moral errors, and simply not care that it will do so.

Even among the effective altruist (and adjacent) community, most of the focus is on Surviving rather than Flourishing. AI safety and biorisk reduction have, thankfully, gotten a lot more attention and investment in the last few years; but as they do, their comparative neglectedness declines. 

The tractability of better futures work is much less clear; if the argument falls down, it falls down here. But I think we should at least try to find out how tractable the best interventions in this area are. A decade ago, work on AI safety and biorisk mitigation looked incredibly intractable. But concerted effort made the areas tractable. 

I think we’ll want to do the same on a host of other areas — including AI-enabled human coups; AI for better reasoning, decision-making and coordination; what character and personality we want advanced AI to have; what legal rights AIs should have; the governance of projects to build superintelligence; deep space governance, and more.

On a final note, here are a few warning labels for the series as a whole.

First, the essays tend to use moral realist language - e.g. talking about a “correct” ethics. But most of the arguments port over - you can just translate into whatever language you prefer, e.g. “what I would think about ethics given ideal reflection”.

Second, I’m only talking about one part of ethics - namely, what’s best for the long-term future, or what I sometimes call “cosmic ethics”. So, I don’t talk about some obvious reasons for wanting to prevent near-term catastrophes - like, not wanting yourself and all your loved ones to die. But I’m not saying that those aren’t important moral reasons. 

Third, thinking about making the future better can sometimes seem creepily Utopian. I think that’s a real worry - some of the Utopian movements of the 20th century were extraordinarily harmful. And I think it should make us particularly wary of proposals for better futures that are based on some narrow conception of an ideal future. Given how much moral progress we should hope to make, we should assume we have almost no idea what the best feasible futures would look like.

I’m instead in favour of what I’ve been calling viatopia, which is a state of the world where society can guide itself towards near-best outcomes, whatever they may be. Plausibly, viatopia is a state of society where existential risk is very low, where many different moral points of view can flourish, where many possible futures are still open to us, and where major decisions are made via thoughtful, reflective processes. 

From my point of view, the key priority in the world today is to get us closer to viatopia, not to some particular narrow end-state. I don’t discuss the concept of viatopia further in this series, but I hope to write more about it in the future.

  1. ^

    This series was far from a solo effort. Fin Moorhouse is a co-author on two of the essays, and Phil Trammell is a co-author on the Basic Case for Better Futures supplement. 

    And there was a lot of help from the wider Forethought team (Max Dalton, Rose Hadshar, Lizka Vaintrob, Tom Davidson, Amrit Sidhu-Brar), as well as a huge number of commentators.


