# Underwater Robot Simulation, Data Collection, and Evaluation Framework
U0env is a simulation and evaluation framework for underwater tasks, built on the Stonefish physics engine and ROS Noetic. It supports multi-task data collection, LeRobot dataset conversion, and closed-loop evaluation with Vision-Language-Action (VLA) models.
## Features

- Realistic underwater simulation based on the Stonefish physics engine
- BlueROV2 + Alpha5 manipulator + Robotiq gripper integration
- 20+ underwater manipulation tasks (grasping, navigation, scanning, etc.)
- Automated data collection with batch processing and parallel execution
- One-click conversion to the LeRobot dataset format
- Closed-loop evaluation with configurable metrics
- MoveIt-based motion planning support
## Repository Structure

```
u0env/
├── cpp_env/                 # C++ simulation environment (Stonefish)
├── ros_ws/                  # ROS workspace
│   └── src/
│       ├── stonefish_ros/       # Stonefish ROS interface (submodule)
│       ├── stonefish_bluerov2/  # BlueROV2 simulation package
│       ├── description_alpha/   # Alpha5 manipulator descriptions
│       └── description_robotiq/ # Robotiq gripper descriptions
├── tools/
│   ├── dataprocess/         # Data processing & conversion scripts
│   ├── datavisualize/       # Visualization utilities
│   └── evalmetrics/         # Evaluation metrics
├── Stonefish/               # Stonefish physics engine (submodule)
├── batch_run.sh             # Unified batch experiment script
├── build_all.sh             # Build script
└── exp_setting.csv          # Task configuration
```
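Task configurations live in `exp_setting.csv`, one row per task. A minimal sketch of reading it with the standard library; only the `task_code` column is implied by `batch_run.sh --task`, while `episodes` and `scene` are invented here for illustration:

```python
import csv
import io

# Hypothetical sketch: exp_setting.csv is read as one row per task, keyed by
# a task_code column (implied by `batch_run.sh --task`). The other columns
# (episodes, scene) are invented for illustration.
sample = io.StringIO(
    "task_code,episodes,scene\n"
    "pick_pipe0_shallow,50,shallow\n"
    "scan_ship_ancient,30,wreck\n"
)

tasks = list(csv.DictReader(sample))
codes = [row["task_code"] for row in tasks]
print(codes)  # ['pick_pipe0_shallow', 'scan_ship_ancient']
```

In practice you would point `csv.DictReader` at the real file instead of the inline sample.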
## Environment Setup

This project depends on Stonefish and its ROS interface, stonefish_ros. Follow the steps below to set up and compile the environment.

After cloning this repository, initialize and update the Stonefish, stonefish_ros, and lerobot submodules:

```shell
git submodule update --init Stonefish
git submodule update --init ros_ws/src/stonefish_ros
git submodule update --init tools/lerobot
```

Install Miniconda or Miniforge, then set up ROS Noetic via the robostack-noetic channel:
```shell
# To avoid accidental violation of the Anaconda ToS, remove the defaults channel:
conda config --remove channels defaults

# Optional: use a conda-forge mirror
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --set channel_priority strict

# Create a ros-noetic desktop environment
conda create -n ros_env -c conda-forge -c robostack-noetic ros-noetic-desktop

# Activate the environment
conda activate ros_env

# Add the robostack channel to the environment
conda config --env --add channels robostack-noetic

# Install tools for local development
conda install -c conda-forge ros-dev-tools
```

Verify the installation:

```shell
# First terminal
conda activate ros_env
roscore

# Second terminal
conda activate ros_env
rviz
```

Install stonefish, stonefish_ros, and the project dependencies:
```shell
conda activate ros_env
conda install -c conda-forge glm sdl2 freetype
conda install -c robostack-noetic -c conda-forge \
    ros-noetic-ros-control \
    ros-noetic-perception \
    ros-noetic-moveit
pip install trimesh fast-simplification pandas matplotlib
```

## Alpha5 Description Package

The Reach Alpha5 manipulator description files are derived from the proprietary alpha_description package. You need to download it first and then run the generation script.
First, obtain the relevant license from Reach Robotics, then download the alpha repository containing the alpha_description package, and place it at:

```
ros_ws/src/description_alpha/origin/alpha_description/
```

The directory should contain: `config/`, `meshes/`, `xacro/`, `CMakeLists.txt`, `package.xml`, `LICENSE`.
```shell
conda activate ros_env
python tools/dataprocess/pkg_install.py
```

This will generate:

- `ros_ws/src/description_alpha/config/` - Controller configurations
- `ros_ws/src/description_alpha/meshes/` - STL mesh files
- `ros_ws/src/description_alpha/xacro/` - Xacro description files
- `ros_ws/src/stonefish_bluerov2/data/alpha5/` - OBJ mesh files for simulation
After running the script above, generate the URDF file manually:

```shell
conda activate ros_env
export ROS_PACKAGE_PATH=<project_root>/ros_ws/src:$ROS_PACKAGE_PATH
mkdir -p ros_ws/src/description_alpha/urdf
xacro ros_ws/src/description_alpha/xacro/alpha.config.xacro \
    use_fake_hardware:=true use_sim:=false \
    > ros_ws/src/description_alpha/urdf/alpha_robotiq_hand-e.urdf
```

Note: replace `<project_root>` with the actual path to the u0env project root.
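Since a URDF is plain XML, the generated file can be sanity-checked without ROS. A minimal sketch, using a small inline URDF in place of the generated `alpha_robotiq_hand-e.urdf`:

```python
import xml.etree.ElementTree as ET

# Quick structural check for a URDF: parse it and list the joints.
# A tiny inline robot stands in for the generated file; in practice,
# read ros_ws/src/description_alpha/urdf/alpha_robotiq_hand-e.urdf instead.
urdf = """<robot name="alpha">
  <link name="base_link"/>
  <link name="link1"/>
  <joint name="joint1" type="revolute">
    <parent link="base_link"/>
    <child link="link1"/>
  </joint>
</robot>"""

root = ET.fromstring(urdf)  # raises ParseError if the XML is malformed
joints = [j.get("name") for j in root.findall("joint")]
print(root.get("name"), joints)  # alpha ['joint1']
```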
## Build

Change to the project root directory and run:

```shell
./build_all.sh
```

## Running the Simulation

Run the simulation environment in a terminal with graphical interface support (e.g., MobaXterm):

```shell
cd build/mysim_build
./FishSim
```

In a second terminal, launch the ROS side:

```shell
conda activate ros_env
cd ros_ws
source devel/setup.bash
roslaunch stonefish_bluerov2 cpp_env.launch
```

## Data Collection

Task configurations are defined in exp_setting.csv. Run data collection with the unified batch script:
```shell
conda activate ros_env
cd ros_ws
source devel/setup.bash
cd ..

# Collect the training set for all tasks
./batch_run.sh --mode collect --split train

# Collect the test set for all tasks
./batch_run.sh --mode collect --split test

# Collect a specific task by its task_code in exp_setting.csv, e.g., pick_pipe0_shallow
./batch_run.sh --mode collect --split train --task pick_pipe0_shallow

# Debug mode (only 1 episode per task)
./batch_run.sh --mode collect --split train --debug

# Run with 4 parallel workers (recommended: fewer than 5); applies to all modes (collect/eval)
./batch_run.sh --mode collect --split train --parallel 4
```

For more options, run `./batch_run.sh --help`.
## LeRobot Dataset Conversion

Create a separate conda environment and install lerobot:

```shell
conda deactivate
cd tools/lerobot/
conda create -y -n lerobot python=3.10
conda activate lerobot
pip install -e .
pip install scipy
```

Convert the collected data to the LeRobot dataset format:

```shell
python tools/dataprocess/data_process.py --src /path/to/dataset/origin --dst /path/to/dataset/lerobot
```

For more details, run:

```shell
python tools/dataprocess/data_process.py --help
```

## Closed-Loop Evaluation

First, start the VLA model inference service; refer to section "7. Inference" in the u0model documentation. Parallel evaluation requires launching a matching number of VLA model instances; in the default non-parallel mode, a single instance is sufficient.
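Before a long evaluation run, it can help to sanity-check the dataset produced by the conversion step. This sketch assumes a LeRobot-style layout with a `meta/info.json` index, which may not match what `data_process.py` actually writes under `--dst`; adjust the paths accordingly:

```python
import json
import pathlib
import tempfile

# Hypothetical sanity check on a converted dataset before evaluation.
# The meta/info.json path assumes a LeRobot-style layout; adjust it to
# whatever data_process.py actually emits.
def check_dataset(root: pathlib.Path) -> dict:
    info_path = root / "meta" / "info.json"
    if not info_path.is_file():
        raise FileNotFoundError(f"missing {info_path}")
    return json.loads(info_path.read_text())

# Demo against a throwaway directory standing in for --dst:
with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "meta").mkdir()
    (root / "meta" / "info.json").write_text(json.dumps({"total_episodes": 2}))
    info = check_dataset(root)
print(info)  # {'total_episodes': 2}
```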
Run the evaluation:

```shell
conda activate ros_env
cd ros_ws
source devel/setup.bash
cd ..

# Evaluate all tasks
./batch_run.sh --mode eval

# Evaluate a specific task with 4 parallel episodes
./batch_run.sh --mode eval --task pick_pipe0_shallow --parallel 4

# Debug mode (only 1 episode per task)
./batch_run.sh --mode eval --debug
```

For more options, run `./batch_run.sh --help`.
### Compute Metrics

```shell
# Compute metrics for all tasks
python tools/evalmetrics/calculate_metrics.py --eval-dir ./dataset/eval --no-visualize

# Compute metrics with visualization
python tools/evalmetrics/calculate_metrics.py --eval-dir ./dataset/eval

# Compute metrics for specific tasks
python tools/evalmetrics/calculate_metrics.py --eval-dir ./dataset/eval --task-codes pick_pipe0_shallow scan_ship_ancient
```

Results are saved to `<eval-dir>/metrics/` as CSV and JSON.
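The kind of aggregation `calculate_metrics.py` performs can be illustrated with a toy reduction from per-episode records to per-task success rates; the `task_code`/`success` fields here are illustrative, not the script's actual schema:

```python
import json

# Toy aggregation: reduce per-episode records to a per-task success rate.
# The task_code/success fields are invented for illustration.
episodes = [
    {"task_code": "pick_pipe0_shallow", "success": True},
    {"task_code": "pick_pipe0_shallow", "success": False},
    {"task_code": "scan_ship_ancient", "success": True},
]

rates = {}
for code in {e["task_code"] for e in episodes}:
    runs = [e for e in episodes if e["task_code"] == code]
    rates[code] = sum(e["success"] for e in runs) / len(runs)

print(json.dumps(rates, sort_keys=True))
# {"pick_pipe0_shallow": 0.5, "scan_ship_ancient": 1.0}
```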
## License

This project is licensed under the Apache License 2.0.

Note: the alpha_description package (including mesh files, URDF/xacro descriptions, and configuration files) is proprietary software owned by Reach Robotics and is not covered by this project's Apache 2.0 license. Users must independently obtain the appropriate license from Reach Robotics before downloading or using those materials. Generated derivative files (e.g., converted OBJ meshes) are also subject to the original license terms.

This is an independent research project and is not affiliated with, endorsed by, or maintained by Reach Robotics. All trademarks and product names mentioned in this project (including but not limited to Alpha5 and BlueROV2) are the property of their respective owners. Refer to the Alpha repository for software and documentation.
## Citation

If you use this work in your research, please cite:

```bibtex
@misc{gu2025usimu0visionlanguageactiondataset,
      title={USIM and U0: A Vision-Language-Action Dataset and Model for General Underwater Robots},
      author={Junwen Gu and Zhiheng Wu and Pengxuan Si and Shuang Qiu and Yukai Feng and Luoyang Sun and Laien Luo and Lianyi Yu and Jian Wang and Zhengxing Wu},
      year={2025},
      eprint={2510.07869},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2510.07869},
}
```

## Acknowledgments

- Stonefish: underwater physics simulation engine
- stonefish_ros: ROS interface for Stonefish
- LeRobot: Hugging Face LeRobot framework
- RoboStack: conda-based ROS distribution