AI News
OpenAI expands its multi-cloud strategy to shore up compute supply
OpenAI is further shoring up its AI compute supply with a multi-year agreement with AWS. The move is part of its multi-cloud strategy, following earlier mega-deals with Microsoft and Oracle. The $38 billion AWS agreement, though the smallest of the three, gives OpenAI access to hundreds of thousands of NVIDIA GPUs and tens of millions of CPUs to support frontier-model training and large-scale inference workloads such as ChatGPT. The investment underscores how scarce high-performance GPUs have become and why enterprises must make long-term capital commitments to AI infrastructure. OpenAI's multi-cloud approach also offers a lesson for other companies: single-cloud sourcing is no longer the obvious choice for AI workloads, and AI budgets have become a core part of corporate capital planning.

🚀 **A strategic expansion of compute supply**: OpenAI is securing its vast AI compute needs through a multi-year, $38 billion agreement with AWS. The partnership is a key plank of its broader multi-cloud strategy, intended to spread risk and guarantee continued access to high-performance computing resources. OpenAI had previously struck even larger deals with Microsoft and Oracle, reportedly worth $250 billion and $300 billion respectively, underlining the scale and long horizon of its investment in AI infrastructure.

🧠 **The foundation for frontier models and inference**: The new agreement gives OpenAI access to hundreds of thousands of NVIDIA GPUs, including the latest GB200 and GB300 series, along with tens of millions of CPUs. This computing power is used not only to train next-generation AI models but also to run the large-scale inference behind applications such as ChatGPT. OpenAI co-founder and CEO Sam Altman stressed that "scaling frontier AI requires massive, reliable compute", which is precisely what this investment is meant to deliver.

💡 **A new pattern for enterprise AI infrastructure**: OpenAI's enormous outlay shows that high-performance GPUs have become a scarce resource requiring large, long-term capital commitments. Industry leaders are concluding that, for most enterprises, "building" AI infrastructure in-house is no longer viable. The alternative is managed platforms from cloud providers, such as Amazon Bedrock and Google Vertex AI, where the provider absorbs the infrastructure risk. At the same time, multi-cloud strategies for AI workloads are gaining ground as a way to avoid dependence on a single vendor and ensure business continuity.

OpenAI is on a spending spree to secure its AI compute supply chain, signing a new deal with AWS as part of its multi-cloud strategy.

The company recently ended its exclusive cloud-computing partnership with Microsoft. It has since committed a reported $250 billion back to Microsoft, $300 billion to Oracle, and now $38 billion to Amazon Web Services (AWS) in a new multi-year pact. The AWS deal, while the smallest of the three, is part of OpenAI’s diversification plan.

For industry leaders, OpenAI’s actions show that access to high-performance GPUs is no longer an on-demand commodity. It is now a scarce resource requiring massive long-term capital commitment.

The AWS agreement provides OpenAI with access to hundreds of thousands of NVIDIA GPUs, including the new GB200s and GB300s, and the ability to tap tens of millions of CPUs.

This mighty infrastructure is not just for training tomorrow’s models; it’s needed to run the massive inference workloads of today’s ChatGPT. As OpenAI co-founder and CEO Sam Altman stated, “scaling frontier AI requires massive, reliable compute”.

This spending spree is forcing a competitive response from the hyperscalers. While AWS remains the industry’s largest cloud provider, Microsoft and Google have recently posted faster cloud-revenue growth, often by capturing new AI customers. For AWS, this deal is a clear play to secure a cornerstone AI workload and prove its large-scale AI capabilities, which it claims include running clusters of over 500,000 chips.

AWS is not just providing standard servers. It is building a sophisticated, purpose-built architecture for OpenAI, using EC2 UltraServers to link the GPUs for the low-latency networking that large-scale training demands.

“The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads,” said Matt Garman, CEO of AWS.

But “immediate” is relative. The full capacity from OpenAI’s latest cloud AI deal will not be fully deployed until the end of 2026, with options to expand further into 2027. This timeline offers a dose of realism for any executive planning an AI rollout: the hardware supply chain is complex and operates on multi-year schedules.

What, then, should enterprise leaders take from this?

First, the “build vs. buy” debate for AI infrastructure is all but over. OpenAI is spending hundreds of billions to build on top of rented hardware. Few, if any, other companies can or should follow suit. This pushes the rest of the market firmly toward managed platforms like Amazon Bedrock, Google Vertex AI, or IBM watsonx, where the hyperscalers absorb this infrastructure risk.

Second, the days of single-cloud sourcing for AI workloads may be numbered. OpenAI’s pivot to a multi-provider model is a textbook case of mitigating concentration risk. For a CIO, relying on one vendor for the compute that runs a core business process is becoming a gamble.

Finally, AI budgeting has left the realm of departmental IT and entered the world of corporate capital planning. These are no longer variable operational expenses. Securing AI compute is now a long-term financial commitment, much like building a new factory or data centre.



AI News is powered by TechForge Media.

The post OpenAI spreads $600B cloud AI bet across AWS, Oracle, Microsoft appeared first on AI News.
