Theoretical Research and the Publication Dilemma in AI Development

An economics professor writing about the economics of transformative AI (TAI/AGI/superintelligence) has run into a serious publication problem. His paper on "p(doom)" (the probability of human extinction) was repeatedly desk-rejected by journals, on grounds including that the subjective probability assessments lack empirical support, that concepts such as TAI alignment are too abstract, and that the assumption of an irreversible takeover is questionable. The author notes that science favors empirical research, but AI is developing so quickly that empirical data goes stale almost immediately, whereas theoretical work on AI's potential risks, precisely because it is forward-looking and discusses "doom scenarios," tends to scare journal editors away. This leaves the discussion of AI risk confined to informal channels, without the rigor of peer review, which may in turn affect future policymaking. The author calls on academia to support and publish such forward-looking theoretical research more actively, so as to steer the future more prudently in the face of the uncertainty AI brings.

🔬 **Publishing theoretical AI research hits a wall**: An economics professor's theoretical work on transformative AI (TAI) and its potential risks (such as p(doom), the probability of human extinction) was desk-rejected multiple times. Journals declined it on grounds such as "the topic is out of scope," "subjective probabilities lack empirical support," "the concepts are overly abstract," and "the assumptions are overly absolute," reflecting the difficulty academia has with frontier, theory-heavy AI-risk research that involves deep uncertainty.

📊 **The limits of empirical work and the need for theory**: The article notes that science favors empirical studies, but AI iterates so quickly that empirical data becomes obsolete almost immediately, leaving such research perpetually behind the curve. To proactively steer AI development and head off potential risks, however, theoretical exploration of possible AI futures is essential; journals should support this work, lest the public and policymakers conclude that AI risk does not exist.

😨 **"Doom," editorial psychology, and scientific rigor**: Journal editors may avoid publishing on AI extinction risk because of its "doomsday" character, a reaction tied to the documented human tendency to push away thoughts of one's own mortality. Without peer review, however, discussion of AI risk remains confined to blogs and preprints, its scientific and logical quality deteriorates, and it is more easily dominated by controversy and clickbait. The author stresses that, faced with AI's threats and promises, peer review is the best tool for ensuring the integrity of scientific discourse.

🚀 **A call for forward-looking research and prudent policy**: Despite a handful of positive examples, the author calls on academia to expand and mainstream forward-looking research on AI existential risk. In a time of deep uncertainty about AI, we should not wait for conclusive evidence before setting policy; instead, we should quickly introduce prudent policies grounded in forward-looking scenario analysis, so as to minimize potential downsides and ensure that humanity survives to keep exploring good futures.

Published on November 3, 2025 1:04 PM GMT

I am a professor of economics. Throughout my career, I have mostly worked on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.

Recently, jointly with Klaus Prettner, we've written a paper on "The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI". We have presented it at multiple conferences and seminars, and it was always well received. We didn't get any real pushback; instead our research prompted a lot of interest and reflection (as was reported back to me, also from conversations I wasn't involved in).

But our experience with publishing this paper in a journal is the polar opposite. To date, the paper has been desk-rejected (without peer review) 7 times. For example, Futures, a journal "for the interdisciplinary study of futures, visioning, anticipation and foresight", justified its negative decision by writing: "while your results are of potential interest, the topic of your manuscript falls outside of the scope of this journal".

Until finally, to our excitement, it was for once sent out for review. But then came the reviews… and they sure delivered. The key arguments for the paper's rejection were the following:

1/ As regards the core concept of p(doom), Referee 1 complained that "the assignment of probabilities is highly subjective, and it lacks empirical support". Referee 2 backed this up with: "there is a lack of substantive basic factual support". Well, yes, precisely. These probabilities are subjective by design, because empirical measurement of p(doom) would have to involve going through all the past cases where humanity lost control of a superhuman AI and consequently became extinct. And hey, sarcasm aside, our central argument doesn't actually rely on any specific probabilities. We find that in most circumstances even a very small probability of human extinction suffices to justify a call for more investment in existential risk reduction (see the back-of-the-envelope sketch after this list).

2/ Referee 1 (the one whose review was longer than four short bashing sentences) also complained that "the definitions of "TAI alignment" and "correctability" [we actually wrote "corrigibility"—JG] are overly abstract, lacking actionable technical or institutional implementation pathways." Well again, yes, precisely: TAI alignment has not been solved yet, so of course there are no "actionable technical or institutional implementation pathways".

3/ We also enjoyed the comment that “the assumption that takeover, once occurring, is irreversible, is overly absolute.” Apparently, we must have missed the fact that in reality John Connor or Ethan Hunt may actually win.
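
To make the logic of point 1/ concrete, here is a minimal back-of-the-envelope sketch with entirely made-up numbers; it is an illustration only, not the model from the paper. A risk-reduction investment raises expected welfare whenever the reduction in extinction probability it buys, multiplied by the value of humanity's future welfare, exceeds its cost, and because the value at stake is enormous, even a tiny probability reduction can clear that bar.

```python
# Back-of-the-envelope illustration with made-up numbers; NOT the model
# from the paper. A risk-reduction investment raises expected welfare
# whenever (reduction in extinction probability) * (value of humanity's
# future welfare) exceeds its cost.

V = 1_000_000.0   # assumed value of humanity's future welfare, in years of world output
p = 0.01          # assumed baseline extinction probability over the relevant horizon
delta = 0.001     # assumed reduction in that probability bought by the investment
cost = 100.0      # assumed cost of the investment, in years of world output

assert delta <= p  # the risk cannot be reduced by more than the risk itself

expected_loss_avoided = delta * V  # expected welfare gained by reducing the risk

print(f"Expected loss avoided:  {expected_loss_avoided:,.0f} years of world output")
print(f"Cost of the investment: {cost:,.0f} years of world output")
print(f"Raises expected welfare: {expected_loss_avoided > cost}")
```

The point of the sketch is only that the comparison is dominated by V, the value of everything at stake, so the conclusion does not hinge on pinning down p precisely.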

You may think that I am sour and frustrated because the paper was rejected. I sure am, but there’s a much broader point here.

My point is that theoretical papers on the scenarios of transformative AI, both in terms of their promises and (particularly) risks, are extremely hard to publish. You can see that in the resumes of essentially all authors who pivoted to this topic. 

First, journals prefer empirical studies. In any other context this would be understandable: that's how the scientific method works, after all. However, with AI the problem is that the technology is developing so quickly that any data empirical researchers get their hands on is instantly obsolete. This means that all empirical research, no matter how brilliant and insightful, is also necessarily backward-looking. We may only begin to understand the economic consequences of GPT-3 while already using GPT-5.

At the same time, if we want to take a proactive stance and at least attempt to guide our policy so that it could steer the future towards desirable states (for example, ones in which we don't become an extinct species), we'd better also publish and discuss the various AI scenarios which could potentially unfold, including those far less conservative than predictions of, e.g., "no more than a 0.66% increase in total factor productivity (TFP) over 10 years." And research journals should support this debate; otherwise the public and policymakers will get the impression that the entire economics community believes that TAI/AGI/ASI will for sure never arrive and AI existential risk does not exist, which is clearly not the consensus view.

Second, the problem seems to go beyond the preference for empirical papers. It seems that, on top of that, the very notion of AI existential risk scares the editors away. Pushing away thoughts of one's own mortality is a documented psychological phenomenon, and acknowledging extinction risk is probably even scarier. Also, the editors may be tempted to think their journals have nothing to gain by publishing doom scenarios: even if they turn out to be true, there will be no-one left to capitalize on the correct prediction anyway. But citations don't reward only those whose predictions are ultimately correct; they come wherever the debate is, and that includes scenarios and viewpoints we may or may not agree with.

Peer review, for all its flaws, is the best tool to ensure the integrity and rigor of scientific discourse about any important issue, and the future of humanity in the face of the imminent threats (and promises) of transformative AI certainly qualifies as such. And since, according to many, transformative AI may arrive within the next 10 years, the matter is also urgent. If research journals continue to desk-reject this entire debate, our future will be decided based solely on arguments developed in blogposts, videos, and (at best) arXiv papers. Without peer review, this debate risks becoming less and less scientifically sound, driven more by controversy and clickbait than by logic and rigor.

Against this unfortunate background, I am happy to point out the few publications that do exist in official channels, such as the invited volume by the NBER. I am also happy that Professor Chad Jones of Stanford GSB used his stellar reputation to warn economists about AI existential risk in a top-tier scholarly journal. But given the stakes at hand, this forward-looking literature needs to be much, much larger, and much more mainstream.

After all, we are living in very uncertain times, and the possible emergence of transformative AI is a prime source of this uncertainty. In such circumstances, we don’t have the time to wait idly until evidence-based policies are established. Instead, we need to quickly introduce basic prudent policies, motivated by forward-looking scenario-based analysis, which could at least minimize the expected downsides, and—at the very bare minimum—allow us to live on and keep thinking about good futures for humanity.


