Fortune | FORTUNE · 6 hours ago
AI heavyweight Karpathy: AGI progress is overhyped and still more than a decade away

Former OpenAI executive Andrej Karpathy said in an interview that expectations for artificial general intelligence (AGI) are running well ahead of reality, and that actual progress is far slower than the market hype suggests. He believes that, despite striking advances in large language models (LLMs), AGI is still at least a decade away. Karpathy criticized many companies for exaggerating AI's autonomous capabilities, warning that this kind of unrealistic promotion could harm the field. He noted that current models still face major challenges in planning, reasoning, and safe system design, and that AI agents in particular need large gains in reliability and security; otherwise they could flood software systems with low-quality code and increase security risks. Even so, he remains optimistic about AI's long-term trajectory and considers the technical challenges manageable.

🤖 **AGI progress is overestimated:** Karpathy argues that, despite rapid progress in large language models (LLMs), artificial general intelligence (AGI) is still more than a decade away, far beyond many industry forecasts. He says many companies are exaggerating AI's autonomous capabilities and describes much of the current output as "slop."

💡 **The limits of AI agents:** Karpathy is concerned about how well today's AI agents actually execute, arguing their autonomy is overhyped. He says these systems fall seriously short in reliability, reasoning ability, perception of software environments, and tool use, which could degrade software quality and increase security risks.

📉 **Misleading yardsticks:** Karpathy criticized the metrics commonly used to gauge AI capability, such as public demos, benchmark competitions, and code-generation tests, arguing they mostly reflect narrow optimizations rather than progress on the field's hardest unsolved problems: long-horizon planning, structured reasoning, and safe system design.

🚀 **Outlook for AI:** Despite his caution about the current pace of AI progress and some of its applications, Karpathy reiterated his long-term optimism. He believes the technical challenges, while formidable, can be solved with continued research, time, and better safety practices, and he regards ten years as a "bullish" timeline for AGI.

In a widely shared interview with podcaster Dwarkesh Patel, a YouTuber with over 1 million followers, Karpathy said he believes the race to build AGI is moving significantly slower than the hype suggests.

Despite rapid advances in large language models (LLMs) over the past three years, he argued AGI remains at least a decade away, and warned that many companies are exaggerating AI’s agentic capabilities in a way that could damage the field.

“Overall, the models are not there,” Karpathy said on the podcast. “I feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it’s not. It’s slop.”

The interview triggered an immediate reaction across the tech community, where expectations for AGI have soared alongside capital investment and competition. 

“If this Karpathy interview doesn’t pop the AI bubble, nothing will,” Prithvir Jhaveri, CEO of prediction markets aggregator TradeFox, wrote on X.

John Coogan, host of the tech podcast TBPN, noted that Karpathy’s interview came just weeks after AI pioneer Richard Sutton called LLMs a “dead end.” 

“The general tech community is experiencing whiplash right now,” Coogan wrote on X.

Karpathy, who previously served as Tesla’s senior director of AI and helped lead OpenAI in its early years, described his AI timeline as “five to ten times pessimistic” compared to many public predictions. But he rejected the idea that his prediction of a decade until AGI is gloomy. “Ten years,” he wrote on X after the interview, “should otherwise be a very bullish timeline for AGI.”

For Silicon Valley, it’s a slow projection. Sam Altman, Karpathy’s fellow OpenAI co-founder and current CEO, predicts artificial intelligence will surpass the intelligence of any human in any specialty by 2030. Elon Musk has predicted that AGI will come either this year or the next.

Karpathy argued that much of the confusion stems from metrics that give an inflated sense of capability. Public demos, benchmark competitions, chatbot conversations, and code-generation tests tend to reflect narrow optimizations, he said, rather than addressing the hardest unsolved problems in AI. Those include long-horizon planning, structured reasoning and, ultimately, safe system design. 

Karpathy reserved his strongest criticism for AI “agents,” a concept that has exploded across the industry in recent months. 

These systems, built on top of LLMs, are pitched as autonomous digital workers that can write and run code, search the internet, operate software, and execute business tasks with minimal oversight. Karpathy said the idea is promising, but the execution, at least how it stands today, is far from reliable. 
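For readers unfamiliar with the pattern, the sketch below illustrates the basic loop such agents run: a language model is called repeatedly, each reply is treated as either a tool request or a final answer, and tool output is fed back into the conversation. This is a minimal, hypothetical illustration of the concept, not any specific product's API; the names `llm_complete`, `TOOLS`, and `agent_loop` are placeholders invented for this example.

```python
# Minimal sketch of the "agent" pattern described above: an LLM in a loop that
# can call tools (search, run code) with little human oversight.
# llm_complete() and TOOLS are hypothetical stand-ins, not a real library API.

TOOLS = {
    "search_web": lambda query: f"(search results for: {query})",
    "run_code": lambda source: f"(output of running: {source})",
}

def llm_complete(conversation):
    """Hypothetical stand-in for a call to a hosted language model.

    A real system would send `conversation` to a model and parse its reply
    into either a tool request or a final answer.
    """
    return {"type": "final", "content": "(model answer)"}

def agent_loop(task, max_steps=10):
    """Run the model in a loop, executing any tool it asks for.

    The brittleness Karpathy points to lives here: every step depends on the
    model correctly reading prior tool output and choosing the next action.
    """
    conversation = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm_complete(conversation)
        if reply["type"] == "final":
            return reply["content"]
        # The model asked for a tool; run it and feed the result back in.
        result = TOOLS[reply["tool"]](reply["argument"])
        conversation.append({"role": "tool", "content": result})
    return "(gave up after max_steps)"

if __name__ == "__main__":
    print(agent_loop("Summarize today's AI news"))
```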

“We’re at this intermediate stage,” Karpathy said. “The models are amazing. They still need a lot of work.”

Many other AI leaders are much more bullish. For example, Nvidia CEO Jensen Huang has called 2025 “the year of AI agents.” Anthropic CEO Dario Amodei recently said that by 2026 or 2027, AI systems will be “better than almost all humans at almost all things.”

But most current AI agent systems produce brittle, unpredictable results and lack basic reliability, Karpathy warned. He argued they do not possess enough reasoning ability, have limited perceptions of software environments, and struggle to use tools correctly.

“If this isn’t done well,” Karpathy said, “we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities [and] security breaches.”

Still, he insisted AI remains on a long but solvable path. The technical challenges ahead are difficult, he said, but manageable with time, research, and better safety practices. 

“I feel like the problems are surmountable,” he said. “But they’re still difficult.”
