NVIDIA Developer | September 29, 23:10
NVIDIA Isaac Lab 2.3 Update: Enhanced Humanoid Robot Capabilities and Optimized Data Collection

The NVIDIA Isaac Lab 2.3 early developer preview significantly improves humanoid robot capabilities, including advanced whole-body control, enhanced imitation learning, and better locomotion. The update also expands teleoperation by supporting more devices (such as Meta Quest VR and Manus gloves) to accelerate the creation of demonstration datasets, and it introduces a motion planner-based workflow for generating data in manipulation tasks. The release adds new reinforcement learning and imitation learning samples, improves the teleoperation experience for manipulation tasks, including support for the Unitree G1 robot, and introduces the SkillGen workflow for conveniently generating adaptive, collision-free manipulation demonstrations. Isaac Lab 2.3 also provides end-to-end navigation for mobile robots and a loco-manipulation workflow for synthesizing demonstrations that combine locomotion and manipulation. Finally, NVIDIA Isaac Lab – Arena, an open source policy evaluation framework, aims to simplify large-scale simulation-based evaluation.

🤖 **Enhanced humanoid capabilities and motion control:** Isaac Lab 2.3 brings advanced whole-body control that significantly improves humanoid coordination and flexibility, including better locomotion so robots move more smoothly and efficiently on complex tasks. Upper-body control is also improved: an updated Pink IK controller keeps bimanual robot arms in a more natural posture, reduces unnecessary elbow flare, and increases the robots' reachable space so they can perform a wider range of manipulation.

🎮 **Expanded teleoperation and data collection:** To accelerate the creation of demonstration datasets, Isaac Lab 2.3 expands teleoperation support to more devices, such as Meta Quest VR and Manus gloves. This makes data collection more convenient and efficient, yielding richer, more representative demonstrations as the foundation for training stronger robot policies. A motion planner-based workflow has also been introduced for generating data in manipulation tasks.

💡 **Improved simulation and training for manipulation tasks:** The new version brings many improvements for manipulation, including better imitation learning: the SkillGen workflow generates adaptive, collision-free manipulation demonstrations, overcoming limitations of earlier approaches. Benchmark environments have been extended to support suction grippers, enabling manipulation with both suction and traditional grippers. Reinforcement learning support is also enhanced with dictionary observation spaces and techniques such as ADR and PBT to better scale RL training.

🌐 **End-to-end navigation and loco-manipulation data generation:** Isaac Lab 2.3 extends mobile robot capabilities with an end-to-end navigation workflow that supports NVIDIA COMPASS, enabling navigation across robot types and environments. A new loco-manipulation workflow synthesizes demonstration data that combines locomotion and manipulation; by integrating navigation with a whole-body controller, it can generate demonstrations of complex tasks, such as picking and placing objects while moving, providing important support for training robots to execute complex sequences.

📈 **Open source policy evaluation framework:** To address the challenge of evaluating robot skills, NVIDIA and Lightwheel are co-developing NVIDIA Isaac Lab – Arena, an open source policy evaluation framework. By simplifying task definitions, providing extensible evaluation libraries, and supporting large-scale, GPU-accelerated parallel evaluation, the framework lets developers run large-scale simulation experiments more easily and efficiently, accelerating robotics research and development.

Training robot policies from real-world demonstrations is costly, slow, and prone to overfitting, limiting generalization across tasks and environments. A sim-first approach streamlines development, lowers risk and cost, and enables safer, more adaptable deployment. 

The latest version of Isaac Lab 2.3, in early developer preview, improves humanoid robot capabilities with advanced whole-body control, enhanced imitation learning, and better locomotion. The update also expands teleoperation for data collection by supporting more devices, like Meta Quest VR and Manus gloves, to accelerate the creation of demonstration datasets. Additionally, it includes a motion planner-based workflow for generating data in manipulation tasks.

New reinforcement and imitation learning samples and examples

Isaac Lab 2.3 offers new features that support dexterous manipulation tasks, including a dictionary observation space for perception and proprioception, and Automatic Domain Randomization (ADR) and Population Based Training (PBT) techniques to enable better scaling for RL training. These new features build on the environments implemented in DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training and in Visuomotor Policies to Grasp Anything with Dexterous Hands.
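As a rough illustration of what a dictionary observation space looks like in practice, the sketch below uses the standard gymnasium API to group proprioception and perception under named keys. The key names and shapes are hypothetical and are not taken from the Isaac Lab Dexsuite environments.

import numpy as np
from gymnasium import spaces

# Dictionary observation space grouping proprioception and perception.
# Shapes are illustrative placeholders, not the Dexsuite environment layout.
observation_space = spaces.Dict(
    {
        # Joint positions and velocities of a hand-arm system
        "proprioception": spaces.Box(low=-np.inf, high=np.inf, shape=(46,), dtype=np.float32),
        # Flattened object pose estimate from a perception pipeline
        "perception": spaces.Box(low=-np.inf, high=np.inf, shape=(7,), dtype=np.float32),
    }
)

# A policy consuming this space receives a dict of arrays each step, e.g.
# obs["proprioception"] and obs["perception"], which can be encoded by
# separate network branches before being fused.
sample = observation_space.sample()
print({key: value.shape for key, value in sample.items()})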

To launch training for the dexterous environment, use the following script:

./isaaclab.sh -p -m torch.distributed.run --nnodes=1 --nproc_per_node=4 scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Dexsuite-Kuka-Allegro-Reorient-v0 --num_envs 40960 --headless --distributed

Expanding on prior releases, Isaac Lab 2.3 introduces new benchmarking environments with suction grippers, enabling manipulation across both suction and traditional gripper setups. The previous version included a surface gripper sample in the direct workflow. This update adds CPU-based surface gripper support to the manager-based workflow for imitation learning. 

To record demonstrations with this sample, use the following command: 

./isaaclab.sh -p scripts/tools/record_demos.py --task Isaac-Stack-Cube-UR10-Long-Suction-IK-Rel-v0 --teleop_device keyboard --device cpu

For more details, see the tutorial on interacting with a surface gripper.

Improved teleoperation for dexterous manipulation

Teleoperation in robotics is the remote control of a real or simulated robot by a human operator with an input device over a communication link, enabling remote manipulation and locomotion control. 

Isaac Lab 2.3 includes teleoperation support for the Unitree G1 robot, with dexterous retargeting for both the Unitree three-finger hand and the Inspire five-finger hand.

Dexterous retargeting is the process of translating human hand configurations to robot hand joint positions for manipulation tasks. This allows efficient, human‑to‑robot skill transfer, improves performance on contact‑rich in‑hand tasks, and yields rich demonstrations to train robust manipulation policies. 

The dexterous retargeting workflow takes advantage of the retargeter teleoperation framework built into Isaac Lab, which enables per-task teleoperation device configuration. 
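As intuition for what retargeting does, the toy sketch below maps tracked human finger flexion angles to robot hand joint targets with a per-finger linear map and joint-limit clamping. The joint names, scales, and limits are invented for illustration; this is not the retargeter implementation shipped in Isaac Lab.

from dataclasses import dataclass

import numpy as np


@dataclass
class FingerJointMap:
    robot_joint: str   # target joint name on the robot hand (hypothetical)
    scale: float       # human-to-robot angle scaling
    offset: float      # calibration offset in radians
    limits: tuple      # (lower, upper) joint limits in radians


# Hypothetical mapping for a three-finger hand
JOINT_MAPS = {
    "index_flex": FingerJointMap("right_index_joint", 1.2, 0.0, (0.0, 1.6)),
    "middle_flex": FingerJointMap("right_middle_joint", 1.2, 0.0, (0.0, 1.6)),
    "thumb_flex": FingerJointMap("right_thumb_joint", 0.9, 0.1, (0.0, 1.2)),
}


def retarget(human_angles: dict) -> dict:
    """Convert tracked human flexion angles (radians) to robot joint targets."""
    targets = {}
    for name, angle in human_angles.items():
        m = JOINT_MAPS[name]
        value = m.scale * angle + m.offset
        targets[m.robot_joint] = float(np.clip(value, *m.limits))
    return targets


print(retarget({"index_flex": 0.8, "middle_flex": 0.5, "thumb_flex": 1.4}))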

Additional improvements have also been made to upper body control across all bimanual robots, like the Fourier GR1T2 and the Unitree G1. This has been done by improving the Pink IK (Inverse Kinematics) controller to keep bimanual robot arms in a more natural posture, reducing unnecessary elbow flare. New environments that allow the robot to rotate its torso are included in this release, increasing the robots' reachable space. Additional tuning improves speed and reduces error between the end effector and its goal.
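The effect of posture regularization can be illustrated with a toy differential IK loop: a weighted posture term pulls the joints toward a preferred configuration while the solver still tracks the end-effector target, which is the same idea behind reducing elbow flare. The sketch below uses a planar three-link arm with made-up numbers and is not the Pink IK controller.

import numpy as np

LINKS = np.array([0.3, 0.3, 0.2])          # link lengths (m), illustrative
Q_PREFERRED = np.array([0.3, 0.6, 0.2])    # preferred "natural" posture (rad)


def fk(q):
    """Planar forward kinematics: end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)), np.sum(LINKS * np.sin(angles))])


def jacobian(q):
    """2x3 positional Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(LINKS[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(LINKS[i:] * np.cos(angles[i:]))
    return J


def ik_step(q, target, posture_weight=0.1, damping=1e-2, gain=0.5):
    """One damped least-squares step tracking `target` with a posture bias."""
    J = jacobian(q)
    err = target - fk(q)
    # Stack the tracking error with a weighted posture error so the solver
    # trades off end-effector accuracy against staying near Q_PREFERRED.
    A = np.vstack([J, posture_weight * np.eye(3)])
    b = np.concatenate([err, posture_weight * (Q_PREFERRED - q)])
    dq = np.linalg.solve(A.T @ A + damping * np.eye(3), A.T @ b)
    return q + gain * dq


q = np.zeros(3)
target = np.array([0.5, 0.3])
for _ in range(200):
    q = ik_step(q, target)
print("end effector:", fk(q), "joints:", q)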

Video 1. A standing environment manipulation task with G1 in Isaac Lab
Figure 1. Reachable space before improvements to the IK controller

The Isaac Lab 2.3 release additionally includes UI enhancements for more intuitive usage. UI elements have been added to alert teleoperators of inverse kinematic (IK) controller errors, like at-limit joints and no-solve states. A pop-up has also been added to inform teleoperators when demonstration collection has concluded. 

Introducing collision-free motion planning for manipulation data generation

SkillGen is a workflow for generating adaptive, collision-free manipulation demonstrations. It combines human-provided subtask segments with GPU-accelerated motion planning to enable learning real-world contact-rich manipulation tasks from a handful of human demonstrations. 

Developers can use SkillGen within Isaac Lab Mimic to generate demonstrations in this latest version of Isaac Lab. SkillGen enables multiphase planning (approach, contact, retreat), supports dynamic object attachment and detachment with appropriate collision sphere management, and synchronizes the world state to respect kinematics and obstacles during skill stitching. Manual subtask “start” and “end” annotations separate contact-rich skills from motion planning segments, ensuring consistent trajectory synthesis for downstream users and reproducible results.
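Conceptually, the subtask annotations split a recorded demonstration into contact-rich segments that are kept as demonstrated and free-space gaps that a motion planner regenerates. The sketch below illustrates that split with a toy data layout and a straight-line stand-in for the planner; it is not the Isaac Lab Mimic or SkillGen implementation.

from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class SubtaskAnnotation:
    name: str
    start: int   # index of the first frame of the contact-rich skill
    end: int     # index one past the last frame of the skill


def stitch_demo(
    states: np.ndarray,                      # (T, D) recorded robot states
    annotations: List[SubtaskAnnotation],
    plan_motion: Callable[[np.ndarray, np.ndarray], np.ndarray],
) -> np.ndarray:
    """Keep annotated skill segments, replace the gaps with planned motion."""
    segments = []
    cursor = 0
    for ann in annotations:
        if ann.start > cursor:
            # Free-space gap: ask the planner for a collision-free connection.
            segments.append(plan_motion(states[cursor], states[ann.start]))
        # Contact-rich skill: replay the demonstrated frames unchanged.
        segments.append(states[ann.start:ann.end])
        cursor = ann.end
    return np.concatenate(segments, axis=0)


# Toy usage with straight-line interpolation standing in for a real planner.
linear_plan = lambda a, b: np.linspace(a, b, num=20)
demo = np.random.rand(100, 7)
annotations = [SubtaskAnnotation("grasp", 10, 40), SubtaskAnnotation("place", 60, 90)]
print(stitch_demo(demo, annotations, linear_plan).shape)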

In previous releases, Isaac Lab Mimic used the MimicGen implementation for data generation. SkillGen has improved on limitations in MimicGen, and the Isaac Lab 2.3 release now enables you to use either SkillGen or MimicGen inside Isaac Lab Mimic.

To run the pipeline using a pre-annotated dataset for two stacking tasks, use the following commands. You can also download the dataset.

Use the following command for launching the vanilla cube stacking task:

./isaaclab.sh -p scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
  --device cpu \
  --num_envs 1 \
  --generation_num_trials 10 \
  --input_file ./datasets/annotated_dataset_skillgen.hdf5 \
  --output_file ./datasets/generated_dataset_small_skillgen_cube_stack.hdf5 \
  --task Isaac-Stack-Cube-Franka-IK-Rel-Skillgen-v0 \
  --use_skillgen

Use the following command for launching the cube stacking in a bin task:

./isaaclab.sh -p scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
  --device cpu \
  --num_envs 1 \
  --generation_num_trials 10 \
  --input_file ./datasets/annotated_dataset_skillgen.hdf5 \
  --output_file ./datasets/generated_dataset_small_skillgen_bin_cube_stack.hdf5 \
  --task Isaac-Stack-Cube-Bin-Franka-IK-Rel-Mimic-v0 \
  --use_skillgen
Figure 3. Data generation with perturbations for the adaptive bin stacking task using SkillGen in Isaac Lab

For information about prerequisites and installation, see SkillGen for Automated Demonstration Generation. For policy training and inference, refer to the Imitation Learning workflow in Isaac Lab. For details about commands, see the SkillGen documentation.

End-to-end navigation for mobile robots

Beyond manipulation, humanoids and mobile robots must navigate complex and dynamic spaces safely. Developers can now use the mobility workflow in Isaac Lab to post-train NVIDIA COMPASS, a vision-based mobility pipeline enabling navigation across robot types and environments. The workflow involves synthetic data generation (SDG) in Isaac Sim, mobility model training, and deployment with NVIDIA Jetson Orin or NVIDIA Thor. Cosmos Transfer improves synthetic data to reduce the sim-to-real gap.

By combining NVIDIA Isaac CUDA-accelerated libraries, a robot can localize itself using cuVSLAM, build a map using cuVGL, and understand the scene to generate actions with COMPASS, enabling it to navigate changing environments and obstacles in real time. COMPASS also gives developers the means to generate synthetic data for training advanced Vision Language Action (VLA) foundation models like GR00T N. ADATA, UCR, and Foxlink are integrating COMPASS into their workflows.

Loco-manipulation synthetic data generation for humanoids

Loco-manipulation is the coordinated execution of locomotion and manipulation—robots move their bodies (walk or roll) while simultaneously acting on objects (grasping, pushing, pulling), treated as one coupled whole-body task.

This workflow synthesizes robot task demonstrations that couple manipulation and locomotion by integrating navigation with a whole-body controller (WBC). This enables robots to execute complex sequences, such as picking up an object from a table, traversing a space, and placing the object elsewhere.

The system augments demonstrations by randomizing tabletop pick and place locations, destinations, and ground obstacles. The process restructures data collection into pick and place segments separated by locomotion, enabling large-scale loco-manipulation datasets from manipulation-only human demonstrations to train humanoid robots for combined tasks. 
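The sketch below illustrates that restructuring: a manipulation-only demonstration is split into pick and place segments and bridged by a synthesized navigation segment whose start and goal base poses are randomized. The data shapes and the straight-line navigation stand-in are illustrative only, not the Isaac Lab workflow.

import numpy as np

rng = np.random.default_rng(0)


def randomize_base_pose(center, radius=0.5):
    """Sample a 2D base pose (x, y, yaw) near a nominal tabletop location."""
    offset = rng.uniform(-radius, radius, size=2)
    yaw = rng.uniform(-np.pi, np.pi)
    return np.array([center[0] + offset[0], center[1] + offset[1], yaw])


def synthesize_navigation(start, goal, steps=50):
    """Stand-in for a navigation policy: linear interpolation of base poses."""
    return np.linspace(start, goal, num=steps)


def augment(pick_segment, place_segment, pick_table_xy, place_table_xy):
    """Compose pick + navigate + place into one loco-manipulation episode."""
    start_pose = randomize_base_pose(np.asarray(pick_table_xy))
    goal_pose = randomize_base_pose(np.asarray(place_table_xy))
    nav_segment = synthesize_navigation(start_pose, goal_pose)
    return {"pick": pick_segment, "navigate": nav_segment, "place": place_segment}


episode = augment(
    pick_segment=np.random.rand(70, 29),   # e.g. 70 frames of arm/hand states
    place_segment=np.random.rand(60, 29),
    pick_table_xy=(1.0, 0.0),
    place_table_xy=(4.0, 2.0),
)
print({name: segment.shape for name, segment in episode.items()})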

An example of how to run this augmentation is shown below. Download the sample input dataset.

./isaaclab.sh -p scripts/imitation_learning/disjoint_navigation/generate_navigation.py \
  --device cpu \
  --kit_args="--enable isaacsim.replicator.mobility_gen" \
  --task="Isaac-G1-Disjoint-Navigation" \
  --dataset ./datasets/generated_dataset_g1_locomanip.hdf5 \
  --num_runs 1 \
  --lift_step 70 \
  --navigate_step 120 \
  --enable_pinocchio \
  --output_file ./datasets/generated_dataset_g1_navigation.hdf5

The interface is flexible, allowing users to switch to different embodiments, such as humanoids and mobile manipulators, with the controller of their choice. 

Figure 4. Loco-manipulation SDG for augmenting navigation and manipulation trajectories

Policy evaluation framework 

Evaluating learned robot skills—such as manipulating objects or traversing a space—does not scale when limited to real hardware. Simulation offers a scalable way to evaluate these skills against a multitude of scenarios, tasks and environments. 

However, from sampling simulation-ready assets, to setting up and diversifying environments, to orchestrating and analyzing large-scale evaluations, users need to hand-curate several components on top of Isaac Lab to achieve desired results. This leads to fragmented setups with limited scalability, high overhead, and a significant entry barrier. 

To address this problem, NVIDIA and Lightwheel are co-developing NVIDIA Isaac Lab – Arena, an open source policy evaluation framework for scalable simulation-based experimentation. Using the framework APIs, developers can set up and execute complex, large-scale evaluations without building bespoke infrastructure. This means they can focus on policy iteration while contributing evaluation methods to the community, accelerating robotics research and development.

This framework provides simplified, customizable task definitions and extensible libraries for metrics, evaluation and diversification. It features parallelized, GPU-accelerated evaluations using Isaac Lab and interoperates with data generation, training, and deployment frameworks for a seamless workflow. 
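As a rough picture of what parallelized evaluation involves, the generic PyTorch sketch below rolls a policy out across a batch of simulated environments and aggregates success metrics. The batched `env` interface and the `success` flag in `info` are assumptions for illustration; this is not the Isaac Lab – Arena API, which has not yet been released.

import torch


@torch.no_grad()
def evaluate_policy(policy, env, num_steps=500):
    """Roll out `policy` across all parallel envs and report success metrics."""
    obs = env.reset()                                   # (num_envs, obs_dim)
    succeeded = torch.zeros(env.num_envs, dtype=torch.bool, device=obs.device)
    steps_to_success = torch.full((env.num_envs,), num_steps, device=obs.device)

    for step in range(num_steps):
        actions = policy(obs)                           # (num_envs, action_dim)
        obs, reward, done, info = env.step(actions)
        newly_succeeded = info["success"] & ~succeeded  # assumed success flag
        steps_to_success[newly_succeeded] = step
        succeeded |= info["success"]

    return {
        "success_rate": succeeded.float().mean().item(),
        "mean_steps_to_success": steps_to_success[succeeded].float().mean().item()
        if succeeded.any()
        else float("nan"),
    }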

Built on this foundation is a library of sample tasks for manipulation, locomotion and loco-manipulation. NVIDIA is also collaborating with policy developers and benchmark authors, as well as simulation solution providers like Lightwheel, to enable their evaluations on this framework, while contributing evaluation methods back to the community.

Figure 5. Isaac Lab – Arena, Policy Evaluation Framework, and Sample Tasks enable scalable and accessible evaluations

For large‑scale evaluation, workloads can be orchestrated with NVIDIA OSMO, a cloud‑native platform that schedules and scales robotics and autonomous‑machine pipelines across on‑prem and cloud compute. Isaac Lab – Arena will be available soon. 

Infrastructure support

Isaac Lab 2.3 is supported on NVIDIA RTX PRO Blackwell Servers, and on NVIDIA DGX Spark, powered by the NVIDIA GB10 Grace Blackwell Superchip. Both RTX PRO and DGX Spark provide an excellent platform for researchers to experiment, prototype, and run every robot development workload across training, SDG, robot learning, and simulation.

Note that teleoperation with XR/AVP and imitation learning in Isaac Lab Mimic are not supported in Isaac Lab 2.3 on DGX Spark. Developers are expected to have precollected data for humanoid environments, while Franka environments support standard devices like the keyboard and SpaceMouse. 

Ecosystem adoption

Leading robotics developers Agility Robotics, Boston Dynamics, Booster Robotics, Dexmate, Figure AI, Hexagon, Lightwheel, General Robotics, maxon, and Skild AI are tapping NVIDIA libraries and open models to advance robot development. 

Get started with Isaac Lab 2.3

Isaac Lab 2.3 accelerates robot learning by enhancing humanoid control, expanding teleoperation for easier data collection, and automating the generation of complex manipulation and locomotion data. 

To get started with the early developer release of Isaac Lab 2.3, visit the GitHub repo and documentation.

Learn more about the research being showcased at CoRL and Humanoids, happening September 27–October 2 in Seoul, Korea.

Also, join the 2025 BEHAVIOR Challenge, a robotics benchmark for testing reasoning, locomotion, and manipulation, featuring 50 household tasks and 10,000 teleoperated demonstrations.

Stay up to date by subscribing to our newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. Explore NVIDIA documentation and YouTube channels, and join the NVIDIA Developer Robotics forum. To start your robotics journey, enroll in free NVIDIA Robotics Fundamentals courses.

Get started with NVIDIA Isaac libraries and AI models for developing physical AI systems.
