GenSim2:
Scaling Robotic Data Generation
with Multi-modal and Reasoning LLMs

Pu Hua1*, Minghuan Liu2,3 *, Annabella Macaluso2 *,
Yunfeng Lin3, Weinan Zhang3, Huazhe Xu1, Lirui Wang4†
Tsinghua University1, UCSD2, Shanghai Jiao Tong University3, MIT CSAIL4
* equal contribution. † project lead.

Conference on Robot Learning, 2024


TL;DR: GenSim2 uses multimodal LLMs to generate a large number of articulated, 6-DoF robotic tasks in simulation for pre-training generalist 3D multitask policies. The framework "amplifies" limited real-world tasks and trajectories with foundation models.


Abstract

Robotic simulation today remains challenging to scale up due to the human effort required to create diverse simulation tasks and scenes. Simulation-trained policies also face scalability issues, as many sim-to-real methods focus on a single task. To address these challenges, this work proposes GenSim2, a scalable framework that leverages coding multi-modal LLMs for complex and realistic simulation task creation, including long-horizon tasks with articulated objects. To automatically generate demonstration data for these tasks at scale, we propose planning and RL solvers that generalize within object categories. The pipeline can generate data for up to 100 articulated tasks with 200 objects and reduces the required human effort by over 50%. To utilize such data, we propose a simple yet effective multi-task language-conditioned policy architecture, dubbed the Proprioceptive Point-cloud Transformer (PPT), that learns from the generated demonstrations and exhibits strong zero-shot sim-to-real transfer.

Combining the proposed pipeline and policy architecture, we show a promising use of GenSim2: the generated data can be used for zero-shot sim-to-real transfer or co-trained with real-world data, which improves policy performance by 20% compared with training exclusively on limited real data.
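
As a rough illustration of the co-training recipe above, the sketch below (PyTorch) mixes a small real-demonstration set with a much larger generated simulation set by re-weighting samples so each batch draws from both sources. All dataset sizes, shapes, and names are illustrative stand-ins, not the released GenSim2 code.

import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, WeightedRandomSampler

# Stand-ins for the two demo sources: many generated sim demos, few real ones
# (sizes and feature shapes are illustrative only).
sim_data  = TensorDataset(torch.randn(5000, 128))  # generated simulation demos
real_data = TensorDataset(torch.randn(100, 128))   # limited real-world demos

dataset = ConcatDataset([sim_data, real_data])

# Up-weight the scarce real samples so each co-training batch still sees them.
weights = torch.cat([
    torch.full((len(sim_data),),  0.5 / len(sim_data)),   # sim half of each batch
    torch.full((len(real_data),), 0.5 / len(real_data)),  # real half of each batch
])
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
loader  = DataLoader(dataset, batch_size=64, sampler=sampler)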

Generated Task Library

Primitive Tasks



Long-horizon Tasks



Real-Robot Experiments

Real Only



Sim+Real




Compared to using only 10 real-world trajectories, incorporating generated simulation data improves the generalization of real-world policies across multiple tasks. The tasks shown here are executed by a single multi-task policy.


Task Generation Ablation Experiment

Real World Experiment

GenSim2 Framework



The GenSim2 framework consists of (1) task proposal, (2) solver creation, (3) multi-task training, and (4) generalization and sim-to-real transfer.
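
For readers who prefer pseudocode, here is a schematic sketch of how the four stages chain together. Every function, name, and threshold below is a hypothetical stand-in for the corresponding GenSim2 stage, not the actual API.

# Schematic only: all functions are hypothetical stand-ins for GenSim2 stages.

def propose_tasks(seed_tasks, n):
    """(1) Task proposal: a multimodal LLM proposes n new articulated tasks."""
    return [f"{t}_variant" for t in seed_tasks][:n]

def create_solver(task):
    """(2) Solver creation: pair keypoint constraints with a motion planner."""
    return {"task": task, "success_rate": 0.8}   # rate measured by sim rollouts

def collect_demos(solver, num_demos=50):
    """Roll out a verified solver to produce demonstration trajectories."""
    return [{"task": solver["task"], "traj": i} for i in range(num_demos)]

def train_multitask_policy(demos):
    """(3) Multi-task training of one language-conditioned policy."""
    return {"trained_on": len(demos)}

demos = []
for task in propose_tasks(["open_box", "turn_faucet"], n=2):
    solver = create_solver(task)
    if solver["success_rate"] > 0.5:             # keep only verified solvers
        demos += collect_demos(solver)
policy = train_multitask_policy(demos)
# (4) Generalization: deploy zero-shot or co-train with real-world data.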

GenSim2 Solver Generation Pipeline



The multi-modal task-solver generation pipeline combines GPT-4 with optimization configurations to produce manipulation task solutions at scale.
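
A minimal sketch of the LLM half of this pipeline: prompt GPT-4 for a candidate solver, then gate it on a simulated success rate before keeping its demonstrations. The prompt wording, threshold, and verify_in_sim() placeholder are our own illustrative assumptions, not the paper's exact prompts or tooling.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_solver_code(task_description: str) -> str:
    """Ask the LLM for keypoint constraints + a motion sequence for one task."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write keypoint constraints and motion plans "
                        "for a simulated robot arm."},
            {"role": "user", "content": f"Write a solver for: {task_description}"},
        ],
    )
    return response.choices[0].message.content

def verify_in_sim(solver_code: str, num_rollouts: int = 10) -> float:
    """Hypothetical placeholder: execute the solver in SAPIEN rollouts and
    return its empirical success rate."""
    raise NotImplementedError

code = generate_solver_code("open the box by lifting its lid")
# Keep the solver (and collect its demos) only if it clears a success threshold:
# if verify_in_sim(code) > 0.5: ...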

Proprioceptive Point-cloud Transformer


The proposed Proprioceptive Point-cloud Transformer (PPT) policy architecture maps language, point-cloud, and proprioception inputs into a shared latent space for action prediction.
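
The sketch below (PyTorch) illustrates the core idea at a glance: each modality is projected to tokens of a shared width, fused by one transformer trunk, and decoded into actions. All dimensions, tokenizers, and the pooling scheme are simplified stand-ins for the actual PPT configuration.

import torch
import torch.nn as nn

class PPTSketch(nn.Module):
    def __init__(self, d_model=256, action_dim=7):
        super().__init__()
        self.pc_proj   = nn.Linear(3, d_model)     # per-point tokens from (x, y, z)
        self.prop_proj = nn.Linear(9, d_model)     # proprioception -> one token
        self.lang_proj = nn.Linear(512, d_model)   # language embedding -> one token
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, points, proprio, lang_emb):
        tokens = torch.cat([
            self.pc_proj(points),                  # (B, N, d)
            self.prop_proj(proprio).unsqueeze(1),  # (B, 1, d)
            self.lang_proj(lang_emb).unsqueeze(1), # (B, 1, d)
        ], dim=1)
        fused = self.trunk(tokens)                 # shared latent space
        return self.action_head(fused.mean(dim=1))  # pooled tokens -> action

policy = PPTSketch()
action = policy(torch.randn(2, 1024, 3), torch.randn(2, 9), torch.randn(2, 512))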

Planner Overview



We demonstrate how the keypoint planner solves the OpenBox task. Constraints are first defined to ensure the gripper contacts the box lid; solving them yields an actuation pose, from which specific motions are assigned to complete the task of opening the box.
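
To make this recipe concrete, here is a toy numerical sketch: a contact constraint between a gripper keypoint and the lid keypoint fixes the actuation pose, and a circular arc about the hinge supplies the opening motion. All geometry, names, and the assumed hinge axis are illustrative, not the paper's planner.

import numpy as np

def contact_constraint(gripper_tip: np.ndarray, lid_keypoint: np.ndarray) -> float:
    """Constraint residual: zero when the gripper tip touches the lid keypoint."""
    return float(np.linalg.norm(gripper_tip - lid_keypoint))

def solve_actuation_pose(lid_keypoint, approach_dir=np.array([0.0, 0.0, -1.0])):
    """Back off 3 cm against the approach direction from the lid keypoint
    (position only, for brevity)."""
    return lid_keypoint - 0.03 * approach_dir

def open_box_motion(actuation_pos, hinge_pos, steps=20):
    """Sweep the lid about its hinge: waypoints on a circular arc, assuming
    the hinge axis is the y-axis."""
    radius = actuation_pos - hinge_pos
    angles = np.linspace(0.0, np.pi / 2, steps)  # open the lid by 90 degrees
    return [hinge_pos + np.array([
        radius[0] * np.cos(a) + radius[2] * np.sin(a),
        radius[1],
        -radius[0] * np.sin(a) + radius[2] * np.cos(a),
    ]) for a in angles]

lid   = np.array([0.5, 0.0, 0.20])
hinge = np.array([0.4, 0.0, 0.15])
pose  = solve_actuation_pose(lid)
waypoints = open_box_motion(pose, hinge)
assert contact_constraint(pose, lid) < 0.05  # actuation pose is near the lid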

BibTeX

      
@inproceedings{hua2024gensim2,
  author    = {Hua, Pu and Liu, Minghuan and Macaluso, Annabella and Lin, Yunfeng and Zhang, Weinan and Xu, Huazhe and Wang, Lirui},
  title     = {GenSim2: Scaling Robot Data Generation with Multi-modal and Reasoning LLMs},
  booktitle = {Conference on Robot Learning},
  year      = {2024}
}


Acknowledgement


We would like to thank Professor Xiaolong Wang for his kind support and discussions of this project. We thank Yuzhe Qin and Fanbo Xiang for their generous help with SAPIEN development. We thank Mazeyu Ji for his help with real-world experiments. Many ideas are inspired by GenSim.