Fortune | FORTUNE October 9, 01:13
The Rise and Controversy of a 23-Year-Old AI Researcher

This article focuses on 23-year-old Leopold Aschenbrenner, a young AI researcher who has experienced a remarkably rapid rise in the field. He worked briefly at FTX's philanthropic arm and at OpenAI, which later fired him. Yet he quickly published an AI manifesto that drew widespread attention and used it as a springboard to launch a hedge fund that now manages more than $1.5 billion. His swift success has sparked debate about his abilities, his influence, and the state of the industry: some see him as a prophet of the AI age, while others question whether he has simply repackaged industry hype as an investment strategy. Aschenbrenner's story also reflects how Silicon Valley converts the zeitgeist into capital, and capital into influence.

🌟 **A "wunderkind" and lightning rod in AI**: Leopold Aschenbrenner, a 23-year-old German-born researcher, has risen to prominence in AI at remarkable speed and drawn widespread attention. He worked briefly at Sam Bankman-Fried's FTX philanthropy arm and at OpenAI, which later fired him. Even so, he quickly published an AI manifesto that caused a stir and used it as the foundation for a hedge fund now managing more than $1.5 billion in assets. His rapid ascent, especially in finance, has amazed many but also raised questions about his real abilities and motives: some regard him as a prophet of the AI age, while others see mainly skillful packaging and good timing. His story also highlights the industry's "wunderkind" phenomenon and how quickly capital has poured into AI.

🚀 **From AI manifesto to financial empire**: Aschenbrenner's path to fame began with a self-published 165-page monograph, Situational Awareness: The Decade Ahead. In this essay, likened to a "long telegram" for the AI era, he predicted the rapid arrival of human-like artificial general intelligence (AGI) and stressed the risk of the U.S. falling behind in the AI race. The piece spread quickly through political and business circles and won praise from public figures including Ivanka Trump. He treats the manifesto as a "theory of change," leveraging its influence to found the hedge fund Situational Awareness LP, which invests in publicly traded companies likely to benefit from the AI wave. The fund posted strong early returns, with assets under management quickly growing past $1.5 billion and a 47% gain in the first half of the year.

💡 **Packaging "situational awareness" into influence**: What sets Aschenbrenner apart is his ability to capture Silicon Valley's zeitgeist and distill trends in frontier AI research into a coherent, compelling narrative. His manifesto repurposed the AI-safety concept of "situational awareness" into a warning about the urgency of AGI's arrival, deftly tying it to national security and economic interests. Although some former OpenAI colleagues argue his ideas were not especially novel, his gift for packaging lets him turn complex insider views into accessible, appealing content, especially around hot topics like "seizing the moment" and the U.S.-China AI race. That ability has helped him attract investors and gain influence among tech giants, investors, and policymakers.

💰 **The hedge fund's strategy and early success**: Situational Awareness LP invests in global stocks likely to benefit from AI, such as semiconductor, infrastructure, and power companies, while shorting industries that may fall behind. Its disclosed holdings include Intel and Broadcom, positions that have performed well through recent market swings. At launch, the fund attracted prominent backers including Nat Friedman, Daniel Gross, and Stripe co-founders Patrick and John Collison. Although its early performance may owe something to market timing and luck, the fund's rapidly growing assets and strong returns, along with Aschenbrenner's decision to invest nearly all of his own net worth, make it a case worth watching. Veteran hedge-fund investor Graham Duncan says Aschenbrenner and his team display "variant perception," spotting opportunities outside the market consensus much like the figures in The Big Short.

Of all the unlikely stories to emerge from the current AI frenzy, few are more striking than that of Leopold Aschenbrenner.

The 23-year-old’s career didn’t exactly start auspiciously: He spent time at the philanthropy arm of Sam Bankman-Fried’s now-bankrupt FTX cryptocurrency exchange before a controversial year at OpenAI, where he was ultimately fired. Then, just two months after being booted out of the most influential company in AI, he penned an AI manifesto that went viral—President Trump’s daughter Ivanka even praised it on social media—and used it as a launching pad for a hedge fund that now manages more than $1.5 billion. That’s modest by hedge-fund standards but remarkable for someone barely out of college. Just four years after graduating from Columbia, Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.

It’s an astonishing ascent, one that has many asking not just how this German-born early-career AI researcher pulled it off, but whether the hype surrounding him matches the reality. To some, Aschenbrenner is a rare genius who saw the moment—the coming of human-like artificial general intelligence, China’s accelerating AI race, and the vast fortunes awaiting those who move first—more clearly than anyone else. To others, including several former OpenAI colleagues, he’s a lucky novice with no finance track record, repackaging hype into a hedge fund pitch. 

His meteoric rise captures how Silicon Valley converts zeitgeist into capital—and how that, in turn, can be parlayed into influence. While critics question whether launching a hedge fund was simply a way to turn dubious techno-prophecy into profit, friends like Anthropic researcher Sholto Douglas frame it differently—as a “theory of change.” Aschenbrenner is using the hedge fund to garner a credible voice in the financial ecosystem, Douglas explained: “He is saying, ‘I have an extremely high conviction [that this is] how the world is going to evolve, and I am literally putting my money where my mouth is.’”

But that also raises the question: Why are so many willing to trust this newcomer?

The answer is complicated. In conversations with over a dozen friends, former colleagues, and acquaintances of Aschenbrenner, as well as investors and Silicon Valley insiders, one theme keeps surfacing: that Aschenbrenner has been able to seize ideas that have been gathering momentum across Silicon Valley’s labs and use them as ingredients for a coherent and convincing narrative that is like a blue-plate special to investors with a healthy appetite for risk.

Aschenbrenner declined to comment for this story. A number of sources were granted anonymity due to concerns about the potential consequences of speaking about people who wield considerable power and influence in AI circles.

Many spoke of Aschenbrenner with a mixture of admiration and wariness—“intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well-packaged and well-timed. Yet, while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.

There is no doubt, however, that Aschenbrenner’s rise reflects a unique convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China. 

Sketching the future

Within certain corners of the AI world, Leopold Aschenbrenner’s name was already familiar as someone who had written blog posts, essays, and research papers that circulated among AI safety circles, even before joining OpenAI. But for most people, he appeared seemingly overnight in June 2024. That’s when he self-published online a 165-page monograph called Situational Awareness: The Decade Ahead. The long essay borrowed for its title a phrase already familiar in AI circles, where “situational awareness” usually refers to models becoming aware of their own circumstances—a safety risk. But Aschenbrenner used it to mean something else entirely: the need for governments and investors to recognize how quickly AGI might arrive, and what was at stake if the U.S. fell behind.

In a sense, Aschenbrenner intended his manifesto to be the AI era’s equivalent of George Kennan’s “long telegram,” in which the American diplomat and Russia expert sought to awaken elite opinion in the U.S. to what he saw as the looming Soviet threat to Europe. In the introduction, Aschenbrenner sketched a future he claimed was visible only to a few hundred prescient people, “most of them in San Francisco and the AI labs.” Not surprisingly, he included himself among those with “situational awareness,” while the rest of the world had “not the faintest glimmer of what is about to hit them.” To most, AI looked like hype or, at best, another internet-scale shift. What he insisted he could see more clearly was that LLMs were improving at an exponential rate, scaling rapidly towards AGI, and then beyond, to “superintelligence”—with geopolitical consequences and, for those who moved early, the chance to capture the biggest economic windfall of the century. 

To drive the point home, he invoked the example of Covid in early 2020—arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and those who act early stand to capture historic gains. And once again, he cast himself among the prescient few. 

But the core of Situational Awareness’s argument wasn’t the Covid parallel. It was the argument that the math itself—the scaling curves that suggested AI capabilities increased exponentially with the amount of data and compute thrown at the same basic algorithms—showed where things were headed. 
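The “scaling curves” in question are the empirical scaling laws AI labs have reported, in which a model’s error falls smoothly and predictably as parameters and training data grow. As a rough illustration (a simplified sketch drawn from the research literature, not from Aschenbrenner’s essay), the relationship is often written as:

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where $L$ is the model’s loss, $N$ the number of parameters, $D$ the number of training tokens, $E$ an irreducible error floor, and $A$, $B$, $\alpha$, $\beta$ constants fitted to experiments. The essay’s wager is that extrapolating such curves, alongside rapidly growing compute budgets, points to far more capable systems within years rather than decades.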

Douglas, now a tech lead on reinforcement learning scaling at Anthropic, is a friend and former roommate of Aschenbrenner’s and discussed the monograph with him. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.

An essay goes viral

Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a website founded by AI theorist and ‘doomer’ extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas. 

But Situational Awareness hit different. Scott Aaronson, a computer science professor at UT Austin who spent two years at OpenAI overlapping with Aschenbrenner, remembered his initial reaction: “Oh man, another one.” But after reading, he told Fortune, “I had the sense that this is actually the document some general or national security person is going to read and say: ‘This requires action.’” In a blog post, he called the essay “one of the most extraordinary documents I’ve ever read,” saying Aschenbrenner “makes a case that, even after ChatGPT and all that followed it, the world still hasn’t come close to ‘pricing in’ what’s about to hit it.”

A longtime AI governance researcher described the essays as “a big achievement,” but emphasized that the ideas were not new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a much broader audience at a fever-pitch moment in the AI conversation.

Among AI safety researchers, who worry primarily about the ways in which AI might pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “People who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.

Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”

That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the sort of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA assessment.

As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”

A sweeping narrative paired with an investment vehicle

At the same time as the essays were released, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI, with its bets placed in publicly traded companies rather than private startups. 

The fund was seeded by Silicon Valley heavyweights like investor and current Meta AI product lead Nat Friedman (Aschenbrenner reportedly connected with him after Friedman read one of his blog posts in 2023), as well as Friedman’s investing partner Daniel Gross, and Patrick and John Collison, Stripe’s co-founders. Patrick Collison reportedly met Aschenbrenner at a 2021 dinner set up by a connection “to discuss their shared interests.” Aschenbrenner also brought on Carl Shulman, a 45-year-old AI forecaster and governance researcher with deep ties in the AI safety field and a past stint at Peter Thiel’s Clarium Capital, to be the new hedge fund’s director of research.

In a four-hour podcast with Dwarkesh Patel tied to the launch, Aschenbrenner touted the explosive growth he expects once AGI arrives, saying “the decade after is also going to be wild,” in which “capital will really matter.” If done right, he said, “there’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x.”

Together, the manifesto and the fund reinforced one another: Here was a book-length investment thesis paired with a prognosticator with so much conviction he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said Friedman is known for “zeitgeist hacking”—backing people who could capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.

Situational Awareness’ strategy is straightforward: It bets on global stocks likely to benefit from AI—semiconductors, infrastructure, and power companies—offset by shorts on industries that could lag behind. Public filings reveal part of the portfolio: A June SEC filing showed stakes in U.S. companies including Intel, Broadcom, Vistra, and former bitcoin miner Core Scientific (which CoreWeave announced it would acquire in July), all seen as beneficiaries of the AI buildout. So far, it has paid off: the fund quickly swelled to over $1.5 billion in assets and delivered 47% gains, after fees, in the first half of this year.

According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions and endowments. In addition, the spokesperson said Aschenbrenner “has almost all of his net worth invested in the fund.”

To be sure, any picture of a U.S. hedge fund’s holdings is incomplete. The publicly available 13F filings only cover long positions in U.S.-listed stocks—shorts, derivatives, and international investments aren’t disclosed—adding an inevitable layer of mystery around what the fund is really betting on. Still, some observers have questioned whether Aschenbrenner’s early results reflect skill or fortunate timing. For example, his fund disclosed roughly $459 million in Intel call options in its first-quarter filing—positions that later looked prescient when Intel’s shares climbed over the summer following a federal investment and a subsequent $5 billion stake from Nvidia.

But at least some experienced financial industry professionals have come to view him differently. Veteran hedge-fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and Shulman weren’t outsiders scanning opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who spotted the subprime collapse before it hit—people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant.”

He pointed to Situational Awareness’ reaction to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s rising AI capabilities despite limited funding and export controls. While most investors panicked, he said Aschenbrenner and Shulman had already been tracking it and saw the sell-off as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility—though Duncan acknowledged “he could yet be proven wrong.” 

Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the most obvious choice.

“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I am a bit stunned by how fast they have come up the learning curve…they are way more sophisticated on AI investing than anyone else I speak to in the public markets.”

A Columbia ‘whiz-kid’ who went on to FTX and OpenAI

Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance researcher, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate. 

“I heard about him as, ‘oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz-kid sort of thing.”

That wunderkind reputation only deepened. At 17, Aschenbrenner won a grant from economist Tyler Cowen’s Emergent Ventures, and Cowen called him an “economics prodigy.” While still at Columbia, he also interned at the Global Priorities Institute, co-authoring a paper with economist Philip Trammell, and contributed essays to Works in Progress, a Stripe-funded publication that gave him another foothold in the tech-intellectual world.

He was already embedded in the Effective Altruism community—a controversial philosophy-driven movement influential in AI safety circles—and co-founded Columbia’s EA chapter. That network eventually led him to a job at the FTX Futures Fund, a charity founded by cryptocurrency exchange founder Sam Bankman-Fried. Bankman-Fried was another EA adherent who donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.

The FTX Futures Fund was designed to support EA-aligned philanthropic priorities, although it was later found to have used money from Bankman-Fried’s FTX cryptocurrency exchange that was essentially looted from account holders. (There is no evidence that anyone who worked at the FTX Futures Fund knew the money was stolen or did anything illegal.)

At the FTX Futures Fund, Aschenbrenner worked with a small team that included William MacAskill, a co-founder of Effective Altruism, and Avital Balwit—now chief of staff to Anthropic CEO Dario Amodei and, according to a Situational Awareness LP spokesperson, currently engaged to Aschenbrenner. Balwit wrote in a June 2024 essay that “these next five years might be the last few years that I work,” because AGI might “end employment as I know it,” a striking mirror image of Aschenbrenner’s conviction that the same technology will make his investors rich.

But when Bankman-Fried’s FTX empire collapsed in November 2022, the Futures Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told Dwarkesh Patel. “That was incredibly tough.”

Just months after FTX collapsed, however, Aschenbrenner reemerged — at OpenAI. He joined the company’s newly-launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Existing methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate outputs — something which might not be possible if systems surpassed human comprehension.

Aaronson, the UT computer science professor, joined OpenAI before Aschenbrenner and said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’” 

Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI policy staffer, who said he first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting that previewed themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Multiple researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then-Scale AI CEO Alexandr Wang how many GPUs OpenAI had—“just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people were taken aback, they explained, at how casually Aschenbrenner shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.

In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “leak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback, something he said was “totally normal” at OpenAI at the time. He argued that an earlier memo in which he said OpenAI’s security was “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors” was the real reason for his dismissal.

According to news reports, OpenAI responded through a spokesperson that the security concerns he raised internally (including to the board) “did not lead to his separation.” The spokesperson also said they “disagree with many of the claims he has since made” about OpenAI’s security and the circumstances of his departure.

Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, OpenAI’s “superalignment” team—led by OpenAI’s cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike, and where Aschenbrenner had worked—dissolved after both leaders departed the company.

Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.

Returns vs. rhetoric

Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund drawn by a child unless I felt there was really strong governance in place.”

Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the US-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another. 

One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there’s a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized—since Effective Altruists often view making money for the purpose of then giving it away as virtuous—the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.

The deeper worry, said Aaronson, is that Aschenbrenner’s message—that the U.S. must accelerate the pace of AI development at all costs in order to beat China—has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China. 

If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric—the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.
