NVIDIA Blog · Sept. 25
NVIDIA Drives AI Innovation


Next week's Hot Chips conference will focus on AI inference and system architecture. NVIDIA will showcase its latest technologies, including the ConnectX-8 SuperNIC, the Blackwell architecture and CPO switches, to advance data-center-scale AI. These technologies accelerate AI inference through low-latency, high-bandwidth communication and efficient optical transport, supporting computing from individual servers to super-factories. NVIDIA also emphasizes collaboration with the open-source community, optimizing LLMs and distributed inference through frameworks such as TensorRT-LLM and Dynamo.

🔹 The NVIDIA ConnectX-8 SuperNIC delivers market-leading AI inference performance through low-latency, high-bandwidth multi-GPU communication, supports data-center-scale computing, and works alongside NVLink, NVLink Switch and NVLink Fusion to provide ultra-low-latency, high-bandwidth data exchange.

🔸 The NVIDIA Blackwell architecture, including the GeForce RTX 5090 GPU, boosts gaming performance with DLSS 4 technology and supports neural rendering to enhance realism in computer graphics and simulation, while the NVFP4 low-precision numerical format improves LLM inference efficiency.

🌐 NVIDIA Spectrum-XGS Ethernet technology uses CPO switches and optical transport to build efficient, high-performance AI factories, and links multiple distributed data centers into AI super-factories, advancing giga-scale intelligence.

🚀 NVIDIA collaborates with the open-source community, optimizing LLMs and distributed inference through frameworks such as TensorRT-LLM and Dynamo, supporting popular frameworks like FlashInfer and PyTorch, and offering NIM microservices to simplify the deployment and management of models such as OpenAI's open models.

AI reasoning, inference and networking will be top of mind for attendees of next week’s Hot Chips conference.

A key forum for processor and system architects from industry and academia, Hot Chips — running Aug. 24-26 at Stanford University — showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.

At the conference, NVIDIA will join industry leaders including Google and Microsoft in a “tutorial” session — taking place on Sunday, Aug. 24 — that discusses designing rack-scale architecture for data centers.

In addition, NVIDIA experts will present at four sessions and one tutorial.

It’s all part of how NVIDIA’s latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.

NVIDIA Networking Fosters AI Innovation at Scale

AI reasoning — when artificial intelligence systems can analyze and solve complex problems through multiple AI inference passes — requires rack-scale performance to deliver optimal user experiences efficiently.

In data centers powering today’s AI workloads, networking acts as the central nervous system, connecting all the components — servers, storage devices and other hardware — into a single, cohesive, powerful computing unit.

NVIDIA ConnectX-8 SuperNIC

Burstein’s Hot Chips session will dive into how NVIDIA networking technologies — particularly NVIDIA ConnectX-8 SuperNICs — enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.

As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity — linking GPUs and compute elements within and across servers for ultra low-latency, high-bandwidth data exchange.

NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center. Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.

Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.

At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale, and will be covered in detail by Shainer in his talk.

NVIDIA GB200 NVL72 — an exascale computer in a single rack — features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.

An NVIDIA rack-scale system.

Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.

NVIDIA Blackwell and CUDA Bring AI to Millions of Developers

The NVIDIA GeForce RTX 5090 GPU — also powered by Blackwell and to be covered in Blackstein’s talk — doubles performance in today’s games with NVIDIA DLSS 4 technology.

NVIDIA GeForce RTX 5090 GPU

It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles, helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences with low energy consumption and improves the lifelike simulation of characters and effects.

NVIDIA CUDA, the world’s most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.

Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX– and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark powered by NVIDIA GB10 — discussed in Skende’s session — coming soon.

From Algorithms to AI Supercomputers — Optimized for LLMs

NVIDIA DGX Spark

Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops, and accelerate workloads across industries.

As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format to enable efficient agentic AI inference, particularly of large language models (LLMs). Learn more about NVFP4 in this NVIDIA Technical Blog.
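To make the idea of a block-scaled low-precision format concrete, here is a minimal Python sketch of 4-bit float quantization in the spirit of NVFP4. It assumes an E2M1 element format with one shared scale per block, as described in NVIDIA's technical materials; it is a numerical illustration, not NVIDIA's implementation, and the block size and scaling rule are simplifying assumptions.

```python
# Illustrative sketch of block-scaled 4-bit float quantization (NVFP4-style).
# Assumptions: E2M1 elements, one shared scale per block. Not NVIDIA's code.

# Representable magnitudes of a 4-bit E2M1 float (sign stored separately):
E2M1_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Pick a shared scale so the block's largest magnitude maps to the
    largest representable level (6.0), then snap every value to the nearest
    representable E2M1 level. Returns the dequantized values and the scale."""
    amax = max(abs(x) for x in block) or 1.0  # avoid div-by-zero on all-zero block
    scale = amax / 6.0
    out = []
    for x in block:
        mag = min(E2M1_LEVELS, key=lambda lv: abs(abs(x) / scale - lv))
        out.append(scale * mag * (1 if x >= 0 else -1))
    return out, scale

values = [0.02, -0.7, 1.3, 5.9, -3.2, 0.0, 2.4, -0.1]
deq, scale = quantize_block(values)
max_err = max(abs(a - b) for a, b in zip(values, deq))
print(f"scale={scale:.4f}  dequantized={deq}  max_err={max_err:.3f}")
```

Because the scale is shared per small block rather than per tensor, outliers in one block don't destroy precision everywhere else, which is the key property that makes such formats viable for LLM inference.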

Open-Source Collaborations Propel Inference Innovation

NVIDIA contributes to several open-source libraries and frameworks that accelerate and optimize AI workloads for LLMs and distributed inference. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, CUTLASS, the NVIDIA Collective Communication Library and NIXL — which are integrated into millions of workflows.

Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.

Plus, NVIDIA NIM microservices are available for popular open models like OpenAI’s gpt-oss and Llama 4, making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.
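Because NIM microservices expose an OpenAI-compatible HTTP API, a self-hosted model can be addressed like any OpenAI-style endpoint. The sketch below builds (but does not send) such a request using only the standard library; the localhost URL and model identifier are placeholders for whatever a given deployment serves, not guaranteed endpoints.

```python
# Hedged sketch: constructing an OpenAI-compatible chat completion request
# for a self-hosted NIM endpoint. URL and model name are placeholders.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM port
payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model identifier
    "messages": [{"role": "user", "content": "Summarize NVFP4 in one line."}],
    "max_tokens": 64,
}

def build_request(url, body):
    """Build (but do not send) an OpenAI-style JSON POST request."""
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_request(NIM_URL, payload)
print(req.full_url, req.get_header("Content-type"))
# Sending it would be: urllib.request.urlopen(req), against a running NIM.
```

The point of the OpenAI-compatible surface is portability: code written against a hosted API can be pointed at self-managed infrastructure by changing only the base URL.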

Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.

 
