Fortune | FORTUNE October 14, 00:13
OpenAI Partners with Broadcom to Expand AI Data Center Capacity

OpenAI is striking a major deal with chipmaker Broadcom to add 10 gigawatts of computing capacity for AI data centers. According to the joint statement, OpenAI will design the hardware and Broadcom will help develop it. By customizing the processors, OpenAI aims to embed what it has learned from developing AI models directly into the hardware and raise AI capability. Deployment is set to begin in the second half of 2026 and to be completed by the end of 2029. The partnership deepens Broadcom's access to the AI market, and its shares rose on the news. Although OpenAI is investing heavily in AI, it is not yet profitable, and the announced details do not spell out how it will pay. OpenAI has previously reached similar deals with Nvidia and AMD to ease its computing bottleneck. Unlike those two arrangements, the Broadcom partnership involves no equity component. Analysts suggest the move may draw on Google's success in cutting costs through custom chips.

🤝 **Strategic partnership to expand AI compute**: OpenAI and Broadcom plan to add 10 gigawatts of AI data center capacity from the second half of 2026 through the end of 2029. OpenAI will design the hardware and Broadcom will develop it, with the goal of boosting AI performance through customized processors.

💡 **Custom hardware to unlock AI potential**: By embedding its experience from developing AI models directly into the hardware, OpenAI is seeking new levels of capability and intelligence. This deep integration is expected to deliver greater efficiency and performance.

📈 **Market impact and financial questions**: The deal lifted Broadcom's share price, reflecting optimism about its AI prospects. OpenAI, however, is not yet profitable, so how it will fund and pay for its massive compute spending remains a focal point. Unlike the earlier agreements with Nvidia and AMD, this partnership involves no equity stake.

🌐 **Long-term vision and challenges**: An OpenAI co-founder said 10 gigawatts of compute is only a small fraction of what is needed to realize the company's vision of artificial general intelligence (AGI), and stressed that getting there will take a long time, possibly decades. The deal is one of several moves OpenAI has made to ease its computing bottleneck and pursue its long-term AI goals.

As part of the pact, OpenAI will design the hardware and work with Broadcom to develop it, according to a joint statement on Monday. The plan is to add 10 gigawatts’ worth of AI data center capacity, with the companies beginning to deploy racks of servers containing the gear in the second half of 2026.

By customizing the processors, OpenAI said it will be able to embed what it has learned from developing AI models and services “directly into the hardware, unlocking new levels of capability and intelligence.” The hardware rollout should be completed by the end of 2029, according to the companies.

For Broadcom, the move provides deeper access to the booming AI market. Monday’s agreement confirms an arrangement that Broadcom Chief Executive Officer Hock Tan had hinted at during an earnings conference call last month.

Investors sent Broadcom shares up as much as 11% on Monday, betting that the OpenAI alliance will generate hundreds of billions of dollars in new revenue for the chipmaker. But the details of how OpenAI will pay for the equipment aren't spelled out. While the AI startup has shown it can easily raise funding from investors, it's burning through wads of cash and doesn't expect to be cash-flow positive until around the end of this decade.

OpenAI, the creator of ChatGPT, has inked a number of blockbuster deals this year, aiming to ease constraints on computing power. Nvidia Corp., whose chips handle the majority of AI work, said last month that it will invest as much as $100 billion in OpenAI to support new infrastructure — with a goal of at least 10 GW of capacity. And just last week, OpenAI announced a pact to deploy 6 GW of Advanced Micro Devices Inc. processors over multiple years.

As AI and cloud companies announce large projects every few days, it’s often not clear how the efforts are being financed. The interlocking deals also have boosted fears of a bubble in AI spending, particularly as many of these partnerships involve OpenAI, a fast-growing but unprofitable business.

While purchasing chips from others, OpenAI has also been working on designing its own semiconductors. They’re mainly intended to handle the inference stage of running AI models — the phase after the technology is trained.

There’s no investment or stock component to the Broadcom deal, OpenAI said, making it different than the agreements with Nvidia and AMD. An OpenAI spokesperson declined to comment on how the company will finance the chips, but the underlying idea is that more computing power will let the company sell more services.

A single gigawatt of AI computing capacity today costs roughly $35 billion for the chips alone, with 10 GW totaling upwards of $350 billion. But a chief reason OpenAI is working to develop its own chip is to bring down its costs, and it’s unclear what price Broadcom’s chips will command under the deal.
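As a rough illustration of that math, here is a back-of-the-envelope estimate using the per-gigawatt figure cited above; the actual pricing Broadcom will charge under the deal is not disclosed:

```latex
% Back-of-the-envelope estimate: chip cost for 10 GW at the cited ~$35 billion per GW.
% Actual deal pricing is undisclosed and custom silicon is intended to bring this down.
10\,\mathrm{GW} \times \$35\ \text{billion/GW} \approx \$350\ \text{billion (chips alone)}
```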

OpenAI might be trying to emulate Alphabet Inc.’s Google, which made its own chips using Broadcom’s technology and saw lower costs compared with other AI companies, such as Meta Platforms Inc., according to Bloomberg Intelligence analyst Mandeep Singh. Google’s success with Broadcom might have steered OpenAI to that chipmaker, rather than suppliers such as Marvell Technology Inc., Singh added.

In announcing the agreement, OpenAI CEO Sam Altman said that his company has been working with Broadcom for 18 months.

The startup is rethinking technology starting with the transistors and going all the way up to what happens when someone asks ChatGPT a question, he said on a podcast released by his company. “By being able to optimize across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models.”

When Tan referred to the agreement last month, he didn’t name the customer, though people familiar with the matter identified it as OpenAI. 

“If you do your own chips, you control your destiny,” Tan said in the podcast Monday.

Broadcom has increasingly been seen as a key beneficiary of AI spending, helping propel its share price this year. The stock had gained 40% this year through Friday's close, outpacing the 29% rise in the benchmark Philadelphia Stock Exchange Semiconductor Index. OpenAI, meanwhile, has garnered a $500 billion valuation, making it the world's biggest startup by that measure.

By tapping Broadcom’s networking technology, OpenAI is hedging its bets. Broadcom’s Ethernet-based options compete with Nvidia’s proprietary technology. OpenAI also will be designing its own gear as part of its work on custom hardware, the startup said. 

Broadcom won’t be providing the data center capacity itself. Instead, it will deploy server racks with custom hardware to facilities run by either OpenAI or its cloud-computing partners.

A single gigawatt is about the capacity of a conventional nuclear power plant. Still, 10 GW of computing power alone isn’t enough to support OpenAI’s vision of achieving artificial general intelligence, said OpenAI co-founder and President Greg Brockman.

“That is a drop in the bucket compared to where we need to go,” he said.

Getting to the level under discussion isn’t going to happen quickly, said Charlie Kawwas, president of Broadcom’s semiconductor solutions group. “Take railroads — it took about a century to roll it out as critical infrastructure. If you take the internet, it took about 30 years,” he said. “This is not going to take five years.”

