NVIDIA Developer · September 29
NVIDIA Newton: A New Engine for Robot Learning and Simulation

 

NVIDIA Newton is an open-source, extensible physics engine designed to accelerate robot learning and development. Built on NVIDIA Warp and OpenUSD, it gives robots greater precision, speed, and extensibility for handling complex tasks. Newton integrates seamlessly with robot learning frameworks such as MuJoCo Playground and Isaac Lab, and provides a unified Newton Solver API that supports multiple physics engines, including MuJoCo Warp. The engine is GPU-accelerated, significantly improving simulation performance, and aims to bridge the sim-to-real gap. This article walks through training a quadruped locomotion policy with Newton in Isaac Lab and setting up a multiphysics simulation, such as an industrial manipulator folding clothes, demonstrating Newton's strength in handling interactions between rigid bodies and deformable objects.

🚀 **Core value of the Newton physics engine**: Newton is an open-source, extensible physics engine developed jointly by NVIDIA, Google DeepMind, and Disney Research to advance robot learning and development. Built on NVIDIA Warp and OpenUSD, it enables robots to learn and execute complex tasks with greater precision, speed, and flexibility, and helps close the sim-to-real gap between simulation and reality.

💡 **Powerful simulation capabilities and integration**: Through a unified Solver API, Newton can easily integrate multiple physics engines (such as MuJoCo Warp and the Disney Research Kamino solver), and its tensor-based API is compatible with PyTorch and NumPy, enabling efficient integration with robot learning frameworks such as Isaac Lab. It also provides the Newton Visualizer for monitoring training progress without a performance penalty.

⚙️ **Multiphysics simulation and flexibility**: Newton offers a standalone Python interface for multiphysics simulation, coupling rigid bodies (such as a robot arm) and deformable objects (such as clothing) within a single framework. By assigning specialized solvers to different components (Featherstone for the robot, VBD for cloth) and controlling the simulation loop directly, users gain fine-grained control over how physical systems interact, supporting more realistic robot design and task optimization.

🌐 **Open ecosystem and broad adoption**: Newton is released under the Apache 2.0 license, encouraging community contribution and extension. Institutions and companies including the ETH Zurich Robotic Systems Lab, Peking University, and Style3D have already integrated it into their research and development pipelines, spanning earthmoving, quadruped locomotion over rough terrain, tactile sensing, and high-fidelity garment simulation, demonstrating its potential to advance robotics.

Physics plays a crucial role in robotic simulation, providing the foundation for accurate virtual representations of robot behavior and interactions within realistic environments. With these simulators, researchers and engineers can train, develop, test, and validate robotic control algorithms and prototype designs in a safe, accelerated, and cost-effective manner. 

However, simulation often fails to match reality, a problem known as the sim-to-real gap. Robotics developers need a unified, scalable, and customizable solution to model real-world physics, including support for different types of solvers. 

This post walks you through how to train a quadruped robot to move from one point to another and how to set up a multiphysics simulation with an industrial manipulator to fold clothes. This tutorial uses Newton within NVIDIA Isaac Lab.

What is Newton? 

Newton is an open source, extensible physics engine being developed by NVIDIA, Google DeepMind, and Disney Research, and managed by the Linux Foundation, to advance robot learning and development.

Built on NVIDIA Warp and OpenUSD, Newton enables robots to learn how to handle complex tasks with greater precision, speed, and extensibility. Newton is compatible with robot learning frameworks such as MuJoCo Playground and Isaac Lab. The Newton Solver API provides an interface for different physics engines, including MuJoCo Warp, to operate on the tensor-based data model, allowing easy integration with training environments in Isaac Lab.

Figure 1. Newton is a standalone Python package that provides GPU-accelerated interfaces for describing the physical model and state of robotic systems

At the core of Newton are the solver modules for numerical integration and constraint solving. Solvers may be constraint- or force-based, use direct or iterative methods, and may use maximal or reduced coordinate representations. 
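To make the coordinate-representation distinction concrete, here is a toy pendulum integrated both ways in plain NumPy. This is purely illustrative and far simpler than Newton's solvers; none of it is Newton code.

```python
import numpy as np

# Toy pendulum in reduced vs. maximal coordinates (illustrative only).
g, L, dt, steps = 9.81, 1.0, 1e-4, 10_000

# Reduced coordinates: one generalized angle; the rod-length constraint is
# satisfied by construction.
theta, omega = np.pi / 4, 0.0
for _ in range(steps):
    omega -= (g / L) * np.sin(theta) * dt
    theta += omega * dt

# Maximal coordinates: 2D position and velocity, with the rod-length
# constraint re-enforced by projection after each unconstrained step.
x = np.array([L * np.sin(np.pi / 4), -L * np.cos(np.pi / 4)])
v = np.zeros(2)
for _ in range(steps):
    v += np.array([0.0, -g]) * dt      # unconstrained gravity step
    x += v * dt
    x *= L / np.linalg.norm(x)         # project position back onto the circle
    v -= x * (v @ x) / (x @ x)         # drop the radial velocity component

# Both representations should agree on the angle after 1 simulated second
theta_maximal = np.arctan2(x[0], -x[1])
```

Reduced coordinates satisfy constraints exactly but require specialized articulated-body algorithms; maximal coordinates generalize easily but need explicit constraint solving, which is why a common data model across both styles of solver is valuable.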

The use of a common interface and a shared data model means that whether you run MuJoCo Warp, the Disney Research Kamino solver, or a custom solver, you interact with Newton consistently. This modular approach also lets you reuse collision handling, inverse kinematics, state management, and time-stepping logic without rewriting application code.

For training, Newton provides a tensor-based API that exposes physics states as PyTorch- and NumPy-compatible arrays, enabling efficient batching and seamless integration with robot learning frameworks such as Isaac Lab. Through the Newton Selection API, training scripts can query joint states, apply actions, and feed results back into learning algorithms—all through a single, consistent interface.
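As a rough sketch of what a tensor-based, batched state layout enables, consider the NumPy fragment below. The names `num_envs`, `joint_q`, `joint_qd`, and `apply_actions` are assumptions for illustration, not the actual Newton API.

```python
import numpy as np

# Hypothetical batched physics state: one row per parallel environment.
num_envs, num_joints = 4096, 12

joint_q = np.zeros((num_envs, num_joints), dtype=np.float32)   # positions
joint_qd = np.zeros((num_envs, num_joints), dtype=np.float32)  # velocities

def apply_actions(q, qd, actions, dt):
    """Toy semi-implicit Euler update, treating actions as accelerations."""
    qd = qd + actions * dt
    q = q + qd * dt
    return q, qd

# One batched update advances every environment at once; arrays in this
# layout can be wrapped as PyTorch tensors without copying.
rng = np.random.default_rng(0)
actions = rng.standard_normal((num_envs, num_joints)).astype(np.float32)
joint_q, joint_qd = apply_actions(joint_q, joint_qd, actions, dt=1.0 / 60.0)
```

The point of this layout is that a learning framework never loops over environments in Python; every query and update is a single vectorized operation over the batch dimension.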

MuJoCo Warp, developed by Google DeepMind, is fully integrated as a Newton solver and also powers MJX and Playground in the DeepMind stack. This enables models and benchmarks to move seamlessly across Newton, Isaac Lab, and MuJoCo environments with minimal friction. 

Finally, Newton and its associated solvers are released under the Apache 2.0 license, ensuring the community can adopt, extend, and contribute.

What are the highlights of the Newton Beta release?

Highlights of the Newton Beta release include: 

- MuJoCo Warp, the main Newton solver, is up to 152x faster for locomotion and up to 313x faster for manipulation than MJX on a GeForce RTX 4090. The NVIDIA RTX PRO 6000 Blackwell Series adds up to 44% more speed for MuJoCo Warp and 75% for MJX.
- Used as the next-generation Isaac Lab backend, Newton Beta achieves up to 65% faster in-hand dexterous manipulation with MuJoCo Warp versus PhysX.
- Extended performance and stability of the Vertex Block Descent (VBD) solver for thin deformables such as clothing, as well as the implicit Material Point Method (MPM) solver for granular materials.

How to train a locomotion policy for a quadruped using Newton in Isaac Lab

The new Newton physics engine integration in Isaac Lab unlocks a faster, more robust workflow for robotics research and development. 

This section showcases an end-to-end example of training a quadruped locomotion policy, validating its performance across simulators, and preparing it for real-world deployment. We’ll use the ANYmal robot as our case study to demonstrate this powerful train, validate, and deploy process.

Step 1: Train a locomotion policy with Newton

The first step is to set up the repository and train a policy from scratch using one of the Reinforcement Learning scripts in Isaac Lab. This example trains the ANYmal-D robot to walk on flat rigid terrain using the rsl_rl framework. GPU parallelization enables training across thousands of simultaneous environments for rapid policy convergence.

To start training in headless mode for maximum performance, run the following command:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless

With the Newton Beta release, you can now use the new lightweight Newton Visualizer to monitor training progress without the performance overhead of the full Omniverse GUI. Simply add the --newton_visualizer flag:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless --newton_visualizer

After training, you’ll have a policy checkpoint (.pt file) ready for the next stage.

Figure 2. Time-lapse of RL training visualized using the Newton Visualizer

Step 2: Validate the policy with Sim2Sim transfer

Sim2Sim transfer is a critical sanity check to ensure a policy is not overfit to a single physics engine’s specific characteristics. A policy that can successfully transfer between simulators, like PhysX and Newton, has a much higher chance of working on a physical robot.

A key challenge is that different physics engines may parse a robot’s USD and order its joints differently. We solve this by remapping the policy’s observations and actions using a simple YAML mapping file.
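The remapping itself reduces to an index permutation. Here is a minimal sketch using hypothetical joint orderings; the real names and orderings come from the YAML mapping file shipped with Isaac Lab.

```python
import numpy as np

# Hypothetical joint orderings for illustration only.
newton_joint_order = ["LF_HAA", "LF_HFE", "LF_KFE", "RF_HAA", "RF_HFE", "RF_KFE"]
physx_joint_order = ["LF_HAA", "RF_HAA", "LF_HFE", "RF_HFE", "LF_KFE", "RF_KFE"]

# Entry i answers: which Newton-ordered index holds PhysX joint i?
newton_to_physx = np.array(
    [newton_joint_order.index(name) for name in physx_joint_order]
)

def remap_actions(actions_newton):
    """Reorder a (num_envs, num_joints) action batch from Newton to PhysX order."""
    return actions_newton[:, newton_to_physx]

actions = np.arange(6, dtype=np.float32).reshape(1, 6)  # Newton-ordered batch
actions_physx = remap_actions(actions)                  # now PhysX-ordered
```

Observations flow through the inverse permutation; because the remap is a pure reindexing, it adds negligible overhead to the policy step.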

To run a policy trained in Newton with PhysX-based Isaac Lab, use the provided transfer script:

./isaaclab.sh -p scripts/newton_sim2sim/rsl_rl_transfer.py \
    --task=Isaac-Velocity-Flat-Anymal-D-v0 \
    --num_envs=32 \
    --checkpoint <PATH_TO_POLICY_CHECKPOINT> \
    --policy_transfer_file scripts/sim2sim_transfer/config/newton_to_physx_anymal_d.yaml

This transfer script is available through the isaac-sim/IsaacLab GitHub repo.

Video 1. An ANYmal-D policy trained with the Newton backend is successfully deployed in PhysX-based Isaac Lab, showing Sim2Sim transfer in action 

Step 3: Prepare for Sim2Real deployment

The final step of the workflow is to transfer the policy trained in simulation to a physical robot.

For this example, a policy was trained for the ANYmal-D robot entirely within the standard Isaac Lab environment using the Newton backend. The training process was intentionally limited to observations that would be available from the physical robot's sensors, such as data from the IMU and joint encoders (that is, no privileged information was used during training).

With the help of NVIDIA partners at ETH Zurich Robotic Systems Lab (RSL), this policy was then deployed directly to their physical ANYmal robot. The resulting hardware test showed the robot successfully executing a walking gait, demonstrating a direct pathway from training in Isaac Lab to testing on a real-world system (Video 2).

Video 2. A physical ANYmal robot executing a walking gait commanded by a policy trained entirely in the experimental version of Isaac Lab with the Newton backend and transferred directly to hardware

This complete train, validate, and deploy process demonstrates how Newton enables the path from simulation to real-world robotics success.

Multiphysics with the Newton standalone engine

Multiphysics simulation captures coupled interactions between rigid bodies (robot hands, for example) and deformable objects (cloth, for example) within a single framework. This enables more realistic evaluation and data-driven optimization of robot design, control, and task performance.

While Newton works with Isaac Lab, developers can use it directly from Python in standalone mode to experiment with complex physical systems. 

This walkthrough showcases a key feature of Newton: Simulating mixed systems with different physical properties. We’ll explore an example of a rigid robot arm manipulating a deformable cloth, highlighting how the Newton API enables you to easily combine multiple physics solvers in a single, real-time simulation.

Step 1: Launch the interactive demo

Newton comes with a suite of examples that are easy to run. The Franka robot arm and cloth demo can be launched with a single command from the root of the Newton repository.

First, ensure your environment is set up:

# Set up the uv environment for running Newton examples
uv sync --extra examples

Now, run the cloth manipulation example:

# Launch the Franka arm and cloth demo
uv run -m newton.examples cloth_franka

This opens an interactive viewer where you can watch the GPU-accelerated simulation in real time. The Franka-cloth demo features a GPU-based VBD Cloth solver. It runs at around 30 FPS on an RTX 4090, while guaranteeing penetration-free contact throughout the simulation. 

Compared to other GPU-based simulators that also enforce penetration-free dynamics—such as GPU-IPC (GPU-based Incremental Potential Contact solver)—this example achieves over 300x higher performance, making it one of the fastest fully penetration-free cloth manipulation demos currently available.

Video 3. The Newton standalone engine runs the cloth manipulation demo, combining rigid body and deformable physics. This visualization is rendered in NVIDIA Omniverse Kit

Step 2: Understanding the multiphysics coupling

This demo is a great example of multiphysics, where systems with different dynamical behaviors interact. This is achieved by assigning a specialized solver to each component. Looking at the example_cloth_franka.py file, you can see how the solvers are initialized:

# Initialize a Featherstone solver for the robot
self.robot_solver = SolverFeatherstone(self.model, ...)

# Initialize a Vertex-Block Descent (VBD) solver for the cloth
self.cloth_solver = SolverVBD(self.model, ...)

You can switch out the robot solver simply by changing SolverFeatherstone to another solver that supports rigid body simulation, such as SolverMuJoCo.

The magic happens in the simulation loop, where these solvers are coordinated. This example uses one-way coupling: the rigid body affects the deformable, but not the other way around. That is acceptable in the cloth manipulation use case, where the effect of the cloth on the robot's dynamics can be neglected. The simulation loop logic is straightforward:

1. Update the robot: The robot_solver advances the Franka arm's state. The arm acts as a kinematic object.
2. Detect collisions: The engine checks for collisions between the newly positioned robot and the cloth particles.
3. Update the cloth: The cloth_solver simulates the cloth's movement, reacting to the collisions from the robot.
# A simplified view of the simulation loop in example_cloth_franka.py
def simulate(self):
    for _step in range(self.sim_substeps):
        # 1. Step the robot solver forward
        self.robot_solver.step(self.state_0, self.state_1, ...)

        # 2. Check for contacts between the robot and the cloth
        self.contacts = self.model.collide(self.state_0, ...)

        # 3. Step the cloth solver, passing in robot contact information
        self.cloth_solver.step(self.state_0, self.state_1, ..., self.contacts, ...)

This explicit, user-controlled loop demonstrates the power of the Newton API, giving researchers fine-grained control over how different physical systems are coupled.
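The same coordination pattern can be mimicked end to end with two toy stand-in "solvers." None of the classes or methods below are Newton APIs; they only reproduce the loop structure: advance the kinematic rigid body, then resolve one-way contact against a set of particles.

```python
import numpy as np

class KinematicSphere:
    """Prescribed-motion rigid body (stands in for the robot arm)."""
    def __init__(self):
        self.center = np.array([0.0, 1.0, 0.0])
        self.radius = 0.3

    def step(self, dt):
        self.center[1] -= 0.5 * dt  # descend toward the particle sheet

class ParticleSheet:
    """Particles pushed out of the sphere on contact (stands in for cloth)."""
    def __init__(self, n=64, seed=0):
        rng = np.random.default_rng(seed)
        self.x = rng.uniform(-0.5, 0.5, size=(n, 3))
        self.x[:, 1] = 0.0  # flat sheet at y = 0

    def step(self, sphere_center, sphere_radius):
        d = self.x - sphere_center
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        inside = (dist < sphere_radius).ravel()
        # One-way coupling: project penetrating particles onto the surface;
        # the sphere never feels a reaction force.
        self.x[inside] = sphere_center + d[inside] / dist[inside] * sphere_radius

rigid, cloth, dt = KinematicSphere(), ParticleSheet(), 1.0 / 60.0
for _ in range(120):                        # ~2 simulated seconds
    rigid.step(dt)                          # 1. advance the rigid body
    cloth.step(rigid.center, rigid.radius)  # 2. detect and resolve contact

# After the loop, no particle remains inside the sphere
gap = np.linalg.norm(cloth.x - rigid.center, axis=1)
```

Because the loop is explicit user code, swapping in a different contact model, substep count, or coupling direction is a local change rather than an engine modification.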

The team plans to extend Newton with deeper, more integrated coupling. This includes exploring two-way coupling for scenarios where the dynamic effects each system exerts on the other are considerable, such as a robot locomoting on deformable soil or mud, where the terrain also exerts forces back on the rigid body. The team also envisions implicit coupling for select solver combinations to manage the exchange of forces between systems more automatically.

How is the ecosystem adopting Newton? 

The Newton open ecosystem is rapidly expanding, with leading universities and companies integrating specialized solvers and workflows. From tactile sensing to cloth simulation and from dexterous manipulation to rough terrain locomotion, these collaborations highlight how Newton provides a common foundation for advancing robotic learning and bridging the sim-to-real gap.

The ETH Zurich Robotic Systems Lab (RSL) has been actively leveraging Newton for multiphysics simulation in earthmoving applications, particularly for heavy equipment automation. They use the Newton Implicit Material Point Method (MPM) solver to capture granular interactions such as soil, gravel, and stones colliding with rigid machinery. 

In parallel, ETH has applied Warp more broadly in robotics and graphics research, including differentiable simulation for deployable locomotion control, trajectory optimization with Gaussian splats (FOCI), and large-scale 3D garment modeling through the GarmentCodeData dataset.

Video 4. Newton is used to capture interaction of heavy machinery with a pile of granular material. Demo credit: Maximilian Krause, Lorenzo Terenzi and Lennart Werner from ETH Zurich

Lightwheel is actively contributing to Newton through SimReady asset development and solver optimization, particularly on deformables such as soil and cables in multiphysics scenarios. The demonstration below shows the Implicit MPM solver applied across a large environment to model ANYmal quadruped locomotion over non-rigid terrain composed of multiple materials.

Video 5. The ANYmal quadruped interacts with non-rigid terrain composed of multiple materials such as sand and gravel

Peking University (PKU) is extending Newton into tactile domains by integrating their IPC-based solver, Taccel, to simulate vision-based tactile sensing for robotic manipulators. By leveraging the Newton GPU-accelerated, differentiable architecture, PKU researchers can model fine-grained contact interactions that are critical for tactile and deformable manipulation.

Video 6. Taccel simulation of Tac-Man manipulation closely aligns with real-world execution, with only a small sim-real gap

Style3D is bringing its deep expertise in cloth and soft-body simulation to Newton, enabling high-fidelity modeling of garments and deformable objects with complex interactions. A simplified version of the Style3D solver has already been integrated into Newton, with plans to expose APIs that allow advanced users to run full-scale simulations involving millions of vertices.

Video 7. High-fidelity modeling of garments and deformable objects with complex interactions using Newton

Technical University of Munich (TUM) is leveraging Newton to run trained dexterous manipulation policies, validated on real robots, back in simulation, marking an important first step toward closing the loop between sim and real. Training policies with 4,000 parallel environments in MuJoCo Warp is also already working. The next milestone is to transfer policies to hardware, before extending the framework to fine manipulation using a spatially resolved tactile skin.

Read more on how the TUM AIDX Lab leveraged Warp to accelerate their robotics research on learning tactile in-hand manipulation agents. Learn more about how AIDX Lab is using Newton to advance their robot learning research.

Video 8. Newton is used to run trained dexterous manipulation policies, validated on real robots, back in simulation

Get started with Newton 

The Newton physics engine delivers the simulation fidelity robotics researchers need, with a modular, extensible, and simulator‑agnostic design that makes it straightforward to couple diverse solvers for robot learning. 

As an open source, community‑driven project, developers can use, distribute, and extend Newton—adding custom solvers and contributing back to the ecosystem.

Learn more about the research being showcased at CoRL and Humanoids, happening September 27–October 2 in Seoul, Korea.

Also, join the 2025 BEHAVIOR Challenge, a robotics benchmark for testing reasoning, locomotion, and manipulation, featuring 50 household tasks and 10,000 tele-operated demonstrations.

Stay up to date by subscribing to our newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. Explore NVIDIA documentation and YouTube channels, and join the NVIDIA Developer Robotics forum. To start your robotics journey, enroll in our free NVIDIA Robotics Fundamentals courses today.
