LessWrong · October 1, 22:48
Rethinking "Pessimization": The Failures of Activism and Alternative Explanations

This article examines how well the concept of "pessimization" applies to understanding the failures of activist movements. The author argues that many apparently "pessimized" outcomes — organizations nominally dedicated to X ending up causing not-X — are better explained by more traditional failure modes: ordinary mistakes, misjudgment, opportunism, and the unavoidable dilemmas of replacement. Taking Communism, environmentalism, and AI safety as examples, the article analyzes the challenges these movements met in pursuing their goals and the negative results they produced, and suggests that "pessimization" as a concept may do more harm than good, since it invites demoralization and lets people dodge deeper analysis. The author stresses that understanding activist failure should focus on concrete failure modes rather than a blanket application of "pessimization".

💡 **Limits of the "pessimization" concept:** The author argues that reducing activist movements' failures to "pessimization" (dedicating yourself to X and causing not-X) is an oversimplified lens. Many apparently "pessimized" outcomes — Communism failing to realize its ideals, environmentalism producing unintended harms, parts of the AI safety community instead accelerating risky AI development — can be explained by more specific causes, such as ordinary mistakes, poor strategy, or external conditions.

🔄 **Alternative explanations for failure:** The article proposes several failure modes with more explanatory power. First, "ordinary mistakes": activists can simply err in execution, so outcomes diverge from intentions. Second, "misjudgment": activists may hold fundamental misconceptions about how things work or what the consequences will be. Third, "opportunists" exploit a movement's popularity for their own ends, which may run against the movement's original aims. Finally, the author highlights the dilemma of "replacement": a failed movement gets replaced by another, and the new one may do better or worse — failure does not mean the replacement would have been better.

⚖️ **The inherent dilemmas and complexity of activism:** The article notes that activists trying to change the status quo must contend with complex interest groups and unpredictable consequences. In environmentalism, for instance, building a coalition for one concrete goal (such as protecting wetlands) may put you in conflict with some coalition members on other issues (such as solar power). These forced bad choices are an inherent challenge of activism, not something a simple label of "pessimization" captures.

🔍 **The downsides of the "pessimization" concept:** The author argues that "pessimization", as a concept, is easily abused as a mental shortcut that blocks deeper analysis of the specific causes of failure. It can demoralize people who try and fail, and makes it easier for bystanders to blame those who act. Moreover, overemphasizing "pessimization" may deter people who are willing to try and might bring about positive change, because they fear being labeled with "pessimization" the moment they fail.

Published on October 1, 2025 1:48 PM GMT

TL;DR I think "Pessimization" is a bad concept to think about in the case of activist movements. I think most of it can be explained by ordinary failures, and that using pessimization as a concept is worse than just thinking about the ordinary causes of failure on an object level.

Pessimization

"Perverse Pessimization", says Richard Ngo, is when an organization nominally dedicated to X ends up causing not-X. According to Richard, this is particularly common amongst activist groups, and he has a separate thread calling it "The Activist's Curse".

Two of his choice examples are pretty weird to me (Liberalism? Liberalism?? The ideology which took over half the world and reigned supreme for at least half a century, which is widely agreed to be the best half-century ever. Is the "pessimization" of liberalism that eventually people stopped doing liberalism? And I don't see how transhumanism has made us any less transhuman) so I'll use the examples I understand.

Communism:[1] the Bolsheviks wanted to remove the power that a smallish aristocratic class had over them, but ended up creating an even smaller and more tyrannical ruling class.

Environmentalism: environmentalists wanted to make the environment better, but they fucked up so badly by being anti-nuclear that they ended up making everything worse.

AI Safety: the AI Safety community wanted to make people concerned about AI, so they wouldn't recklessly build AI that kills everyone, but many of the companies recklessly building AI came out of the AI Safety community.

Losing

So there's a mystery here: communities trying to cause X end up causing Not-X. Or do they? Here's another hypothesis: 

Activists often identify a problem Y in some area. They get really involved in that area, and sometimes don't make Y any better or worse. If you look from the outside, it seems like they were "involved" with the people who made Y worse. If the activists didn't exist, Y wouldn't have been much better or worse.

Communism: the Bolsheviks identified the problem of a tyrannical aristocracy. They got rid of the Tsar, but were unable to prevent a different tyrannical ruling class from taking its place.

Environmentalism: environmentalists noticed that policy did not systematically focus on environmental concerns. They changed a lot of policy in different ways, but the poor quality of the evidence-to-policy pipeline amplified the NIMBY voices (particularly for nuclear power) more than it amplified the voices saying nuclear was a good idea.

AI Safety: early AI Safety advocates identified the problem that AI was dangerous and companies would be reckless. They tried to tell people about the problem, but some people didn't listen and built those companies anyway. 

Being Wrong

Or maybe there's a third option.

Activists are sometimes wrong. In some domains, one big mistake can wipe out all your good efforts. The universe is just unfair like that.

Communism: the Bolsheviks misunderstood how economics and power works, so their policies just killed a bunch of people.

Environmentalism: environmentalists made a few wrong calls on nuclear, and also often make wrong calls on local vs wider issues. Because of the way our laws are set up, these NIMBYist mistakes get amplified and mess everything up.

AI Safety: the AI Safety people underestimated how many people would get a distorted version of their message, and these people heard "AI is a huge ..." instead of "AI is a huge threat".

Opportunists

Any movement which gains popularity will be used as a cudgel in political disputes. Sometimes this will go against the stated aims of X, sometimes critically.

Communism: Stalin used "Communism" as an excuse to become an awful dictator, because communism was the most popular ideology of the day.

Environmentalism: people who have never once in their life thought about environmentalism mysteriously become concerned about "Environmentalism" the moment anything is about to be built near them.

AI Safety: opportunistic businessmen used "AI Safety" as a way to get money for their causes, while not caring about AI safety at all.

This is kind of similar to one of Richard's points about sociopaths.

Replacement

I think Richard makes an important error when he complains about existing activist-ish groups: he compares these groups to an imaginary version of the activist group which doesn't make any mistakes. Richard seems to see all mistakes made by activist groups as unforced and indicative of deep problems or malice. 

If you're running an activist group, you actually only have two options: change your policies to take in or exclude some members on some front, or cease to exist. I'll illustrate with environmentalism: 

Scenario:

Suppose you're protesting an energy company buying up ecologically-significant wetlands to build a big oil refinery, because it would increase carbon emissions. You have to choose which of the following people to let into your coalition:

1. People who care about carbon emissions
2. People who care about biodiversity in the abstract
3. People who like "the countryside"
4. People who live nearby and just don't like noise
5. People who hate big companies basically whatever they do

You don't know if you need two of these groups on your side, or five. Do you pick all five to be safe, or just pick the best two? Or hedge with three?

And what about what comes after?
A: A green-energy company might be buying up farmland in a completely different area in order to build solar panels.
B: Some farmers might be lobbying for some moors to be converted to a farm.
C: A company might be trying to put an offshore wind farm near some cliffs where birds are nesting.

Maybe you're in favour of the solar panels in case A, and would end up lobbying against people from group 3 in that case. But maybe group 3 is huge and the only way to counter the farmers in scenario B.

This is the sort of dilemma you end up in when you do activism. It's difficult, and the errors are usually forced rather than unforced.

Your other choice as an activist group is to shut down and re-roll. In that case, maybe someone comes along and does better, or maybe they do worse. Even if your activist group is doing badly, that doesn't mean that a random replacement would have done better.

Synthesis

I think my view is something like this:

Activist groups often try to swim against structural forces, which involves getting mixed up in an area where they have lots of adversaries. Sometimes, they just lose, and those adversaries win. Sometimes the activist group makes a mistake which, due to the adversarial conditions, ends up doing a lot of harm. Sometimes, opportunists use the activists' cause as a means of achieving what they were doing already. None of this is strong evidence that the activists were actually worse than replacement.

To put it all together:

The Bolsheviks wanted to end the tyranny of a small ruling class. This was inherently difficult, because countries in general, and Russia especially, have a very powerful attractor state where there is a small, tyrannical ruling class. They made some serious mistakes about how to set up a system of good government, which made everything worse. Their ideology, once popular, was exploited by opportunists who wanted to take as much power as possible. Overall, they were probably worse than the average replacement revolution (Stalin was really bad) but not the worst possible revolutionary movement (they did lead to Gorbachev, who led to a period of democracy where the country seemed to have some hope).

Environmentalists wanted to make the world and its policymakers care about the environment. This was inherently difficult because improving the environment (in the ways environmentalists care about) is not very attractive as a policy system. They made some serious mistakes about which technologies were net good vs net bad (nuclear) and this interacted with structural biases in policies (towards NIMBYism) and opportunists (general NIMBYs). Overall, I'm unclear on whether organized university-activist-driven environmentalism was more or less effective than whatever would have taken its place.[1]

AI Safety people wanted to stop an uncoordinated race to AI which kills everyone. This was inherently difficult because security mindset is rare, the world is badly coordinated, and AI safety is very difficult. The incentives for companies to lobby against AI regulation are strong. They made some mistakes about how their message would come across (many people listened to the part about AI being big, but didn't grasp the difficulties of making it safe) which led to more attention overall on AI. Some of these people were opportunists who claimed "AI Safety" to get money for companies, and many grantmakers were duped by these claims. I think the AI Safety community (going back to ~2004) have performed well above replacement because the replacement community would be made up of AI-optimist futurists and not even really notice the risks until we smashed headlong into them. Humanity would likely die with even less dignity in such a world.

Red-Thing-Ism 

Apologies for psychoanalyzing Richard Ngo, but he's the main person who's using this concept in practice.

I suspect that Richard has been a bit guilty of red-thing-ism when it comes to the activist's curse and pessimization. I don't think that the features of pessimization are well-formed enough to be thrown around as a solid concept. Here, having been called out for a very wrong position (that western elites generally support Hamas), he says it fits neatly into his model of pessimization.

I think it's a big strike against pessimization if the originator of the concept is using it as a gear and that gear ends up being a vector for twitter mind-killing. The concept can very easily slide into a mental shortcut: "Of course these activists/elites did something dumb and bad, it's pessimization!".

End of psychoanalysis, nothing else here is aimed at Richard.

This reminds me a bit of a different case. If you're rich, and you hold a different opinion on some issue from a poor person, you can expect to be hit by cries of "Luxury belief!" In the same way, if you're trying and failing to achieve something, you can basically always be hit by accusations of "Pessimization!".

If the concept becomes popularized, I think it will have the following effects:

1. Demoralizing people who try and fail to do things, since people who've done nothing will be able to constantly say "Ah, you've pessimized, making things worse, whereas I, who wisely stayed home, have not made anything worse."
2. Pushing people to think less about why they have failed, or might fail, since it's just "Pessimization", which needs no further explanation.
3. Pushing people who read more to do less, meaning the sorts of people who might actually do better-than-replacement work are less likely to even try, since "pessimization" will just come and get them.

So I don't think that pessimization (as the concept exists now) is a great gear to add to one's world-model.

Coda

On the other hand, the possible explanations for this "effect" are worth thinking about! If you're doing any activist-ish stuff, you should absolutely be looking through the twelve things Richard talks about in the thread. These are real failure modes, but they're just failure modes.

If you set out to do something hard, you should expect to find failure modes. But that doesn't mean the better option is to just lie down and give up.

  1. ^

    I find it hard to fully blame environmentalism for the state of nuclear power in the world. For example, the UK also cannot build reservoirs or housing at the moment, even in cases where environmentalism doesn't cause problems (see the new fire authority, or the rules surrounding having windows on two sides of every flat).


