NVIDIA Developer, September 3
Rapidly Build Realistic Scene Simulations

This article describes how to use NVIDIA Omniverse NuRec and 3DGUT to reconstruct photorealistic 3D scenes from simple sensor data and deploy them instantly in NVIDIA Isaac Sim or the CARLA simulator. It walks through the steps from capturing real-world data and training a reconstruction model to loading the results into a simulator, offering an efficient solution for robot training and autonomous vehicle development.

📸 With Omniverse NuRec and 3DGUT, photorealistic 3D scenes can be quickly reconstructed from simple sensor data, providing an efficient solution for robot training and autonomous vehicle development.

🌐 The approach supports importing reconstructed scenes directly into NVIDIA Isaac Sim or the CARLA simulator, enabling interactive simulation of real environments and improving sim-to-real transfer efficiency.

🔍 The article details five steps, from capturing real-world data and training a reconstruction model to loading the results into a simulator, including sparse reconstruction with COLMAP and dense reconstruction with 3DGUT.

🚗 For autonomous vehicle development, integrating Omniverse NuRec with the CARLA simulator enables replay of real-world driving scenarios for more comprehensive testing and validation.

🌟 In addition, NVIDIA Cosmos Transfer can further enhance reconstructed scenes by generating diverse environments, lighting conditions, and weather scenarios, enriching the simulation environment and improving the rigor and realism of testing.

Turning real-world environments into interactive simulation no longer requires days or weeks of work. With NVIDIA Omniverse NuRec and 3DGUT (3D Gaussian with Unscented Transforms), you can reconstruct photorealistic 3D scenes from simple sensor data and deploy them in NVIDIA Isaac Sim or CARLA Simulator—instantly.

This post walks you through how to capture real-world data, train a reconstruction, and load the results into Isaac Sim.

Video 1. NVIDIA Omniverse NuRec neural reconstruction libraries bring the real world into simulation, using multi-sensor data to achieve photorealistic environments essential for testing and validating robotics systems

How to create an interactive simulation from photos

Neural reconstruction enables efficient robot training in realistic simulations, improving sim-to-real transfer. The following steps streamline neural reconstruction and rendering into a recipe that works across different environments. 

Figure 1. Sample scene created with Omniverse NuRec for robot training in simulation

Step 1: Capture the real-world scene

Capture approximately 100 photos from all angles, with good lighting and overlap between images to help with feature matching (example specs: f/8, 1/100 s or faster, 18 mm focal length or similar).
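Before moving on to COLMAP, it can help to sanity-check the capture set against the ~100-photo guideline. A minimal sketch using only the standard library; the helper name, directory layout, and extension list are assumptions for illustration, not part of the original workflow:

```python
from pathlib import Path

def check_capture_set(image_dir, min_images=100):
    """Count images in a capture directory and flag an undersized set.

    min_images=100 mirrors the ~100-photo guideline above; the
    extension list is an assumption for illustration.
    """
    exts = {".jpg", ".jpeg", ".png"}
    images = [p for p in Path(image_dir).iterdir()
              if p.suffix.lower() in exts]
    return len(images), len(images) >= min_images
```

For example, `check_capture_set("./images/")` returns the image count and whether it meets the threshold, so a too-small set is caught before spending time on reconstruction.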

Step 2: Generate sparse reconstruction with COLMAP

To generate a sparse point cloud and camera parameters, use COLMAP, a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline. You can achieve this through its GUI using automatic reconstruction, or by executing commands for feature extraction, feature matching, and sparse reconstruction. For compatibility with 3DGUT, select either the pinhole or simple pinhole camera model.

# Feature detection & extraction
$ colmap feature_extractor \
    --database_path ./colmap/database.db \
    --image_path    ./images/ \
    --ImageReader.single_camera 1 \
    --ImageReader.camera_model PINHOLE \
    --SiftExtraction.max_image_size 2000 \
    --SiftExtraction.estimate_affine_shape 1 \
    --SiftExtraction.domain_size_pooling 1

# Feature matching
$ colmap exhaustive_matcher \
    --database_path ./colmap/database.db \
    --SiftMatching.use_gpu 1

# Global SfM
$ colmap mapper \
    --database_path ./colmap/database.db \
    --image_path    ./images/ \
    --output_path   ./colmap/sparse

# Visualize for verification
$ colmap gui \
    --import_path   ./colmap/sparse/0 \
    --database_path ./colmap/database.db \
    --image_path    ./images/
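After the mapper finishes, you can verify that every camera in the sparse model uses a 3DGUT-compatible model by reading COLMAP's text-format cameras.txt (if you only have the binary model, convert it first with `colmap model_converter --output_type TXT`). A minimal sketch; the file path and helper names are assumptions:

```python
def read_camera_models(cameras_txt):
    """Parse COLMAP's cameras.txt, where each non-comment line is:
    CAMERA_ID MODEL WIDTH HEIGHT PARAMS[...]"""
    models = {}
    with open(cameras_txt) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cam_id, model = line.split()[:2]
            models[int(cam_id)] = model
    return models

def all_pinhole(models):
    """True when every camera uses a model 3DGUT accepts."""
    return all(m in {"PINHOLE", "SIMPLE_PINHOLE"} for m in models.values())
```

For example, `all_pinhole(read_camera_models("./colmap/sparse/0/cameras.txt"))` should be `True` before proceeding to training.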

Step 3: Train with 3DGUT for dense reconstruction

Use the COLMAP outputs to train with 3DGUT, using the apps/colmap_3dgut_mcmc.yaml config.

$ conda activate 3dgrut
$ python train.py --config-name apps/colmap_3dgut_mcmc.yaml \
    path=/path/to/colmap/ \
    out_dir=/path/to/out/ \
    experiment_name=3dgut_mcmc \
    export_usdz.enabled=true \
    export_usdz.apply_normalizing_transform=true
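If you launch many training runs, the key=value overrides in the command above can be assembled programmatically rather than edited by hand. A minimal sketch; the helper name is ours, and it only builds the argument list (e.g., for `subprocess.run`), it does not invoke 3DGUT itself:

```python
def build_train_cmd(colmap_path, out_dir, experiment_name="3dgut_mcmc"):
    """Assemble the train.py invocation with the key=value
    overrides shown above, ready for subprocess.run()."""
    return [
        "python", "train.py",
        "--config-name", "apps/colmap_3dgut_mcmc.yaml",
        f"path={colmap_path}",
        f"out_dir={out_dir}",
        f"experiment_name={experiment_name}",
        "export_usdz.enabled=true",
        "export_usdz.apply_normalizing_transform=true",
    ]
```

Building the command as a list (instead of one shell string) avoids quoting issues when paths contain spaces.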

Step 4: Export to USD and normalize

Once training completes, export your reconstructed scene as a USD file using these essential flags:

export_usdz.enabled=true
export_usdz.apply_normalizing_transform=true

Check out a tutorial for an example script that can be run directly from the Script Editor or as a Standalone Application.

This creates a USD asset that integrates seamlessly with the Isaac Sim simulation ecosystem.
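A .usdz file is a zip-based package whose payload is a USD layer plus its referenced assets, so you can quickly sanity-check an export with the standard library before importing it into Isaac Sim. A minimal sketch; the file name is an assumption:

```python
import zipfile

def list_usd_layers(usdz_path):
    """List the USD layer files inside a .usdz package.
    USDZ is a zip-based package format; layers end in .usd/.usda/.usdc."""
    with zipfile.ZipFile(usdz_path) as z:
        return [n for n in z.namelist()
                if n.lower().endswith((".usd", ".usda", ".usdc"))]
```

For example, `list_usd_layers("scene.usdz")` returning at least one layer confirms the export produced a readable package.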

Step 5: Deploy the reconstructed scene

The USD assets generated through this pipeline can be loaded or referenced directly into Isaac Sim, just like any other USD asset. Simply use File > Import or drag-and-drop the USD file into the stage from the content browser. 

After loading the USD asset, you can create a ground plane inside Isaac Sim for mobility simulation, as explained in Video 2.

Video 2. Step-by-step tutorial on how to add a ground plane mesh and physics to the rendered scene while importing into Isaac Sim

Reconstructed scenes are also available on the NVIDIA Physical AI Dataset for quick import and immediate experimentation.

How to replay autonomous vehicle scenes in CARLA 

For autonomous vehicle (AV) development, Omniverse NuRec libraries integrated with the open source CARLA AV simulator open powerful possibilities. This is an experimental new feature and works with sample scenes that have already been reconstructed and are available in the NVIDIA Physical AI Dataset.

Figure 2. Sample scene for Omniverse NuRec rendering workflow in CARLA

Step 1: Run CARLA and set up scripts

Select a scene from the Physical AI Dataset, then navigate to your CARLA directory and run the following script:

./PythonAPI/examples/nvidia/install_nurec.sh

Step 2: Replay the scene

Next, replay the Omniverse NuRec scenario using the following:

source carla/bin/activate
cd PythonAPI/examples/nvidia/
python example_replay_recording.py --usdz-filename /path/to/scenario.usdz

Step 3: Capture data

You can also capture data within the simulation for further testing. To capture images for dataset generation, run:

source carla/bin/activate
cd PythonAPI/examples/nvidia/
python example_save_images.py --usdz-filename /path/to/scenario.usdz --output-dir ./captured_images
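Once example_save_images.py has written frames to ./captured_images, a simple index of the output can feed downstream dataset tooling. A hypothetical post-processing sketch (not part of the CARLA example scripts); the manifest format is an assumption:

```python
import json
from pathlib import Path

def write_manifest(image_dir, manifest_path="manifest.json"):
    """Index captured frames into a JSON manifest: one entry per
    image with its file name and size in bytes."""
    image_dir = Path(image_dir)
    entries = [{"file": p.name, "bytes": p.stat().st_size}
               for p in sorted(image_dir.glob("*.png"))]
    Path(manifest_path).write_text(json.dumps(entries, indent=2))
    return entries
```

For example, `write_manifest("./captured_images")` produces a manifest.json listing every captured frame, which makes it easy to spot truncated or missing outputs.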

This integration enables you to replay real-world drives in a controllable simulation environment, complete with all the actors and dynamics of the original scene. 

How to enhance reconstructed scenes further

Want to take your reconstructed scenes even further? NVIDIA Cosmos Transfer, a multi-controlnet world foundation model, amplifies robotics and AV simulation by enabling precise, controllable video generation. Use Cosmos Transfer to synthesize diverse environments, lighting conditions, and weather scenarios. You can also dynamically add and edit objects using multimodal controls like segmentation, depth maps, HD maps, and more.

Figure 3. Cosmos Transfer amplifies scene diversity by adding new weather, lighting, and terrain conditions

This approach streamlines scenario-rich dataset creation, reduces manual effort, and ensures rigorous, photorealistic validation. With Cosmos Transfer-1 distilled to reduce its 70 diffusion steps, you can generate photorealistic, controllable video in under 30 seconds. Building on these performance improvements, Cosmos Transfer-2 is coming soon to further accelerate synthetic data generation (SDG) for AV development.

Why Gaussian-based rendering accelerates simulation workflows

3D Gaussians represent a transformative leap in how the real world is reconstructed and simulated for robotics and autonomous vehicles. By streamlining the path from data capture to photorealistic, interactive environments, Omniverse NuRec libraries leverage Gaussian-based rendering to dramatically accelerate simulation workflows for scalable, robust testing.

The combination of COLMAP's proven structure-from-motion pipeline with 3DGUT's advanced rendering capabilities creates a robust foundation that handles complex real-world scenarios—from challenging lighting conditions to intricate camera distortions—that would stump traditional reconstruction methods.

Get started rendering real-world scenes in interactive simulation

Whether you’re a researcher aiming to push the boundaries of sim-to-real transfer, or an engineer seeking efficient, high-fidelity scene generation, these advances empower you to rapidly iterate and confidently deploy solutions grounded in real-world complexity.

Ready to get started? 

The future of physical AI simulation is here, and it’s more accessible than ever. Start building richer, more realistic digital twins for tomorrow’s intelligent machines.

Watch the NVIDIA Research special address at SIGGRAPH.

Stay up to date by subscribing to NVIDIA news and following NVIDIA Omniverse on Discord and YouTube.

Get started with developer starter kits to quickly develop and enhance your own applications and services.
