LessWrong October 14, 07:41
Exploring LessWrong's Unique Value and Strengths

This article explores what makes the LessWrong community distinctive and why it has outperformed others at predicting and understanding complex issues. It offers several candidate explanations, including the community's emphasis on epistemic rather than instrumental rationality, which serves it well on questions that are unprecedented, socially taboo, or demanding of general reasoning ability. It also considers selection effects: the community attracts smart people drawn to big questions, the future, and rational thinking. Drawing on cases such as AI, pandemics, and cryptocurrency, it suggests LessWrong's edge may stem from its members' particular interests and early attention rather than from simply "being more sane." Finally, the author invites readers to discuss which theory best explains LessWrong's successes.

💡LessWrong stands out in epistemic rationality rather than instrumental rationality. This emphasis lets it produce deeper insights on problems that are unprecedented, socially taboo, or require general reasoning ability, showing earlier insight than other groups on topics such as AI development and early pandemic warnings.

👥Selection effects are another key factor. LessWrong attracts smart people who are highly interested in big questions, future trends, and rational thinking. These traits let the community predict and understand progress in certain domains more accurately than the general population, for example in emerging areas such as AI capabilities and cryptocurrency.

📈LessWrong's successes are not accidental; they stem from members' deep concern with and early investment in particular topics, such as existential risk. For example, the community's attention to pandemics grew out of its general interest in existential risks, letting it notice relevant signals earlier and act accordingly. This "paying attention," rather than "being more sane," is where its edge lies.

🚀The emphasis that early founders and core members (such as EY and Robin Handsome) placed on big, important questions shaped the community's interests, building up distinctive knowledge and predictive ability in particular areas (such as AI and prediction markets) and yielding notable successes there.

Published on October 13, 2025 11:30 PM GMT

If you want to learn something, usually the best sources are far from Lesswrong. If you're interested in biochemistry, you should pick up a textbook. Or if you're interested in business, find a mentor who gets the business triad and throw stuff at the wall till you know how to make money.

And yet, Lesswrong has had some big hits. For instance, if you had just invested in everything Lesswrong thought might be big over the past 20 years, you'd probably have outperformed the stock market. And while no one got LLMs right, the people who were the least wrong seemed to cluster around Lesswrong. Heck, even superforecasters kept underestimating AI progress relative to Lesswrong. There's also Covid, where Lesswrong picked up on the signs unusually early.

So Lesswrong plausibly has got some edge. Only, what is it?

Theory 1: Lesswrong stacked all its points into general epistemic rationality, and relatively few into instrumental rationality.

This is not a good fit for areas that have stable structures, low complexity, and fast, low-noise, cheap feedback loops, e.g. computer programming or condensed matter physics. Neither is it a good fit for areas that require focusing on what's useful rather than what's true, e.g. business, marketing, or politics.

It is useful for: things that have never happened before, are socially taboo to talk about, or require general reasoning ability.

I think this theory has some merit. It explains the aforementioned hits and misses of Lesswrong fairly well. And other hits like the correspondence theory of truth, subjective view of probability, bullishness on prediction markets etc. And, perhaps, also failures involving getting the details right, as that involves tight coupling to reality (?). 

But one must beware the man of one theory. 

Theory 2: Selection effects. Lesswrong selected for smart people.

This implies other smart groups should've done as well as Lesswrong. Did they? Take forecasters. I don't think forecasters outperformed Lesswrong on big AI questions, like whether GPT-4 would be so capable. That said, they do mostly match or exceed Lesswrong in the details. Or take physicists. As far as I'm aware, the physics community didn't circulate early warnings about Covid. (A potential test: did CS professors notice the impact and import of crypto early on?)

Conversely, Lesswrong had some fads that typical smart people didn't. Like nootropics, which basically don't work besides stimulants. 

Theory 2.1: Theory 2 + Lesswrong selected for people interested in big questions, the future, and reasoning.

In other words, Lesswrong is a bunch of smart people with idiosyncratic interests, and they do better than other groups at guessing what is going to happen in those areas. Likewise, other groups of smart folks will do better than the norm at their own autistic special interests. E.g. a forum of smart bodybuilders would know the best ways to get huge.

Consider Covid in this context. Lesswrong, and EAs, are very interested in existential risks. Pandemics are one such risk. So Lesswrong was primed to pay attention to signs of a big potential pandemic and take action accordingly. One nice feature of this theory is it doesn't predict that Lesswrong did better at predicting how the stock market would react to Covid. IIRC, we were all surprised at how well it did.

So it isn't so much a matter of "being more sane" as of actually bothering to pay attention. Take crypto: Wei Dai, Hal Finney, and others were important early contributors to Lesswrong, and got the community interested in the topic. So Lesswrong had a chance to go "yeah, this makes sense" when other groups didn't. Yes, many members didn't. But relatively speaking, I think Lesswrong did well. Though this was before my time on this site, and I'm relying on hearsay.

Perhaps an issue: why did Lesswrong pay attention to the big questions? Perhaps that's because of founder effects. EY and Robin Handsome emphasized big, important questions, which shaped the community's interests accordingly.

Which theory is right? I'm not sure. For one, these theories aren't mutually exclusive. Personally, I am a bit doubtful of theory 1, in part because it plays to my ego. Plus, it's suspicious that I can only point to a few clear, big epistemic wins. 

Of course, I could spend 5 minutes actually thinking about tests that discriminate between these theories. But I've got to get this post done soon, and I think you all probably have more ideas and data that I'm missing. So, what is Lesswrong good for, and why? 

