The AI Era: Rethinking Economic Prosperity and the Human Role

This article proposes a future economic model called the "Autofac Era," which imagines that if artificial general intelligence (AGI) is not achieved within the next five years, the economy becomes highly automated while humans continue to drive economic activity as consumers. It analyzes how, in this scenario, AI automates the vast majority of mental and physical labor, producing explosive economic growth and an enormous surplus of wealth. Against this backdrop, the human role shifts to driving and overseeing economic activity: a small number of professions survive, while most people live on a universal basic income (UBI). The article also explores the challenges of this era, such as worsening income inequality, offers strategies for individual preparation, and predicts how the era is likely to end, most notably with the arrival of AGI.

🤖 **AI-driven economic automation and growth**: Assuming AGI is not achieved, AI will automate the vast majority of mental and physical labor in the near term, greatly raising productivity and allowing economic output to double rapidly. This efficiency gain will generate an enormous surplus of wealth, making society as a whole much richer, but it may also worsen income inequality.

👤 **The shifting human role and remaining professions**: As AI takes over most work, humans shift from producers to drivers of economic activity and consumers. A small number of professions survive, concentrated in areas that require human motivation, decision-making, creativity, or emotional connection, such as executives, investors, some service professions (care, education, the arts), and roles that maintain social order (law, politics).

💰 **Universal basic income (UBI) and a consumption-driven economy**: To keep the economy running and avoid social unrest, states will use public investment and UBI to sustain an economic system centered on human consumption. The vast majority of people will live on UBI; although absolute living standards rise, differences in relative status may leave some groups feeling "poor."

⏳ **The end of the Autofac Era**: The article predicts the Autofac Era will not last long, perhaps 5 to 10 years. It could end in one of three ways: an existential catastrophe; economic growth hitting physical limits (such as the carrying capacity of the Sun) and entering stagnation; or, finally, a breakthrough to AGI, which ushers in a far more dangerous and unpredictable new era.

Published on October 3, 2025 4:10 AM GMT

If we don’t build AGI in the next 5 years, I think it’s likely that we end up in what I’m calling the Autofac Era, where much of the economy is automated but humans continue to drive all economic activity as consumers. In this post I’ll explain why I think it might happen, what I expect it to look like, how you can prepare for it, and how it will end.

NB: This is an informal model. I haven’t done extensive research. It’s primarily based on my 25+ years of experience thinking about AI and my read of where we are today and what I think is likely to happen in the next few years. I strongly invite you to poke holes in it, or offer hard evidence in its favor. If you’d prefer to read a better researched model of how AI changes the world, I’d recommend AI 2027.

What is Autofac?

The name Autofac is drawn from a 1955 short story by science fiction author Philip K. Dick. In it, humans live in a world overrun by self-replicating factories that deliver an endless supply of consumer goods. The exact details of the story aren’t important for the model, though, other than inspiring the name, which I chose because the model assumes we need to keep the consumer economy going even though most economic goods and services are provided by AI.

My background assumption is that the economy will remain based on human consumption of goods and services. At first this will primarily be because it’s how the economy already works, but later it’ll be because, without AGI, humans are the only source of wanting stuff. Tool AI would be just as happy to sit turned off, consuming no power and doing nothing, so an AI economy without AGI only makes sense, best I can tell, if there’s humans who want stuff to consume.

The development of AGI would obviously break this assumption, as would tool AI that autonomously tries to continue to deliver the same outcomes it was created for even if there were no humans around to actually consume them (a paperclip maximizer scenario, which is surprisingly similar to the Autofacs in PKD’s story).

How does the Autofac Era happen?

To get to the Autofac Era, it has to be that we don’t develop AGI in the next few years. I’m saying 5 years to put a hard number on it, but it could be more or less depending on how various things play out.

I personally think an Autofac scenario is likely because we won’t be able to make the conceptual breakthroughs required to build AGI within the next 5 years, specifically because we won’t be able to figure out how to build what Steve Byrnes has called the steering subsystem, even with help from LLM research assistants. This will leave us with tool-like AI that, even if it’s narrowly superintelligent, is not AGI because it lacks an internal source of motivation.

I put about 70% odds on us failing to solve steering in the next 5 years and thus being unable to build AGI. That’s why I think it’s interesting to think about an Autofac world. If you agree, great, let’s get to exploring what happens in the likely scenario that AGI takes 5+ years to arrive. If you disagree, then think of this model as exploring what you believe to be a low-probability hypothetical.

What will the Autofac Era be like?

Here’s roughly what I expect to happen:

- Tool-like AI automates the large majority of cognitive and physical labor, productivity jumps, and economic output doubles rapidly, producing an enormous surplus of wealth.
- Humans shift from being producers to being the drivers and overseers of economic activity, with a small set of jobs surviving where human motivation, judgment, creativity, emotional connection, or the maintenance of social order matters (executives, investors, care, education, the arts, law, politics).
- States use public investment and universal basic income to keep a consumption-centered economy running; most people live on UBI, better off in absolute terms but facing widening gaps in relative status.

The above is what I view as the “happy” path. There are lots of ways this doesn’t play out the way I’ve described, or plays out in a similar but different way. Maybe people coordinate to push back hard against automation and slow AI adoption. Maybe AI enables biological warfare that kills most of humanity. Maybe there are nuclear exchanges. Maybe AI-enabled warfare damages communication or electrical systems in ways that destroy modern industry. There are lots of ways the exact scenario I lay out doesn’t happen.

Lots of folks have explored the many risks of both tool-like AI and AGI, and I highly recommend reading their work. In the interest of quantification, if I screen off existential risks from AGI/ASI, I’d place something like 35% odds on not seeing a world that looks basically like the happy path because of some kind of non-existential AI disaster.
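For concreteness, here’s a minimal sketch (my own arithmetic, not a calculation from the post) that combines the two figures above, under the assumption that the 35% disaster odds are conditional on AGI not arriving within 5 years and that existential risk is screened off as described:

```python
# Rough combination of the post's stated odds. Assumption (mine): the 35%
# "non-existential AI disaster" figure is conditional on AGI not arriving
# within 5 years, and existential risk is screened off entirely.

p_no_agi_in_5y = 0.70              # odds that steering isn't solved and AGI doesn't arrive
p_happy_given_no_agi = 1 - 0.35    # odds the happy path survives non-existential disasters

p_autofac_happy_path = p_no_agi_in_5y * p_happy_given_no_agi
print(f"Rough odds of the Autofac happy path: {p_autofac_happy_path:.0%}")  # about 45%
```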

I’ve also assumed that we continue with something like a capitalist system. Maybe there’s so much surplus that we have a political revolution and try central planning again, but this time it actually works thanks to AI. Such a world would feel quite a bit different from the scenario I’ve described, but would share many of the core characteristics of my model.

How can I prepare for the Autofac Era?

The best way to prepare is by owning capital, either directly or through investment vehicles like stocks and bonds. I won’t give you any advice on picking winners and losers. I’ll just suggest at least following the default advice of holding a large and diversified portfolio.

You could also try to have the skills and connections necessary to continue to be employed. This is a high-risk strategy, as there’ll be a lot of competition for a much more limited number of roles. If you’re not in the top 10% in some way for your preferred role, you’re unlikely to stand a chance. If you pursue this path, have investments as a backup.

You’ll also be fine if you just don’t prepare. Life in the “underclass” will be coded as low status and will lock you out of access to luxury goods, but you’ll still live a life full of what, by historical standards, would be luxuries. This is perhaps comparable to what happened during the Industrial Revolution, except without the tradeoff of the “underclass” having to accept poor working conditions to get access to those luxuries.

That said, many people will find life in the “underclass” depressing. We know that humans care a lot about relative status. If they compare themselves to people with investments or jobs who can afford luxury goods, they may feel bad about themselves. A lot of people who aren’t used to being “poor” will suddenly find themselves in that bucket, even if being “poor” is extremely comfortable. My hope is that people continue to do what they’ve been doing for a while and develop alternative status hierarchies that allow everyone to feel high status regardless of their relative economic station.

How does the Autofac Era end?

I see it ending in one of three ways.

One is that there’s an existential catastrophe. Again, lots of other people have written on this topic, so I won’t get into it.

Another way for the Autofac Era to end is stagnation caused by the economy growing to the carrying capacity of the Sun. If we never make the breakthrough to AGI, we will eventually transition to a Malthusian period that can only be escaped by traveling to other stars to harness their energy. If that happens, the Autofac Era won’t really end, but a world with no growth would look very different from the one I’ve described.
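To get a feel for where that ceiling sits, here’s a back-of-envelope sketch; the energy figures are rough order-of-magnitude values I’m supplying, not numbers from the post, and energy use stands in as a crude proxy for the size of the economy:

```python
import math

# Back-of-envelope: how many doublings of energy use fit under solar limits?
# All figures below are rough, order-of-magnitude assumptions.

current_energy_use_w = 2e13      # ~20 TW: approximate current world primary power use
earth_intercepted_w = 1.7e17     # sunlight intercepted by Earth
solar_output_w = 3.8e26          # total power output of the Sun

doublings_to_earth_limit = math.log2(earth_intercepted_w / current_energy_use_w)
doublings_to_sun_limit = math.log2(solar_output_w / current_energy_use_w)

print(f"Doublings until Earth-intercepted sunlight is the ceiling: {doublings_to_earth_limit:.0f}")  # ~13
print(f"Doublings until the Sun's full output is the ceiling:      {doublings_to_sun_limit:.0f}")  # ~44
```

On these rough numbers, even a doubling every year or two runs into a hard ceiling within decades rather than centuries.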

Finally, the Autofac Era ends if we build AGI. This is the way I actually expect it to end. My guess is that the Autofac Era will only last 5 to 10 years before we succeed in creating AGI, and the onramp to AGI might even be gradual if we end up making incremental progress towards building steering subsystems for AI. At that point we transition to a different and, to be frank, much more dangerous world, because AGI may not care about humans the way tool-like AI implicitly does: tool AI only instrumentally cares about what humans care about. For more on such a world, you might read the recently released If Anyone Builds It, Everyone Dies.

This post garnered some good discussion on LessWrong. I recommend reading the comments there.


