---
license: apache-2.0
task_categories:
- robotics
language:
- en
tags:
- robotics
- embodied-ai
- dynamic-manipulation
- vision-language-action
- manipulation
- trajectory
pretty_name: DOMINO
size_categories:
- 100K<n<1M
---
<h1 align="center"> Towards Generalizable Robotic Manipulation in Dynamic Environments </h1>
<div align="center">
<a href="https://arxiv.org/abs/2603.15620"><img src="https://img.shields.io/badge/arXiv-Paper-b31b1b?logo=Arxiv"></a>
<a href="https://h-embodvis.github.io/DOMINO/"><img src="https://img.shields.io/badge/Homepage-project-orange.svg?logo=googlehome"></a>
<a href="https://github.com/H-EmbodVis/DOMINO/"><img src="https://img.shields.io/badge/GitHub-Repository-green?logo=github"></a>
<a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache%202.0-blue?style=flat-square"></a>
<h5 align="center"><em>Heng Fang<sup>1</sup>, Shangru Li<sup>1</sup>, Shuhan Wang<sup>1</sup>, Xuanyang Xi<sup>2</sup>, Dingkang Liang<sup>1</sup>, Xiang Bai<sup>1</sup> </em></h5>
<sup>1</sup> Huazhong University of Science and Technology, <sup>2</sup> Huawei Technologies Co. Ltd
</div>
## Overview
Dynamic manipulation requires robots to continuously adapt to moving objects and unpredictable environmental changes. Existing Vision-Language-Action (VLA) models rely on static single-frame observations, failing to capture essential spatiotemporal dynamics. We introduce **DOMINO**, a comprehensive benchmark for this underexplored frontier, and **PUMA**, a predictive architecture that couples historical motion cues with future state anticipation to achieve highly reactive embodied intelligence.
<details>
<summary>Abstract</summary>
Vision-Language-Action (VLA) models excel in static manipulation but struggle in dynamic environments with moving targets. This performance gap primarily stems from a scarcity of dynamic manipulation datasets and the reliance of mainstream VLAs on single-frame observations, restricting their spatiotemporal reasoning capabilities. To address this, we introduce DOMINO, a large-scale dataset and benchmark for generalizable dynamic manipulation, featuring 35 tasks with hierarchical complexities, over 110K expert trajectories, and a multi-dimensional evaluation suite. Through comprehensive experiments, we systematically evaluate existing VLAs on dynamic tasks, explore effective training strategies for dynamic awareness, and validate the generalizability of dynamic data. Furthermore, we propose PUMA, a dynamics-aware VLA architecture. By integrating scene-centric historical optical flow and specialized world queries to implicitly forecast object-centric future states, PUMA couples history-aware perception with short-horizon prediction. Results demonstrate that PUMA achieves state-of-the-art performance, yielding a 6.3% absolute improvement in success rate over baselines. Moreover, we show that training on dynamic data fosters robust spatiotemporal representations that transfer to static tasks.
</details>
### Key Idea
* Current VLA models struggle with dynamic manipulation tasks due to a scarcity of dynamic datasets and a reliance on single-frame observations.
* We introduce DOMINO, a large-scale benchmark for dynamic manipulation comprising 35 tasks and over 110K expert trajectories.
* We propose PUMA, a dynamics-aware VLA architecture that integrates historical optical flow and world queries to forecast future object states.
* Training on dynamic data fosters robust spatiotemporal representations, demonstrating enhanced generalization capabilities.
## Dataset Summary
**DOMINO** is a large-scale, comprehensive dataset and benchmark designed for generalizable robotic manipulation in dynamic environments. It addresses the critical scarcity of dynamic manipulation data, which limits the spatiotemporal reasoning capabilities of existing Vision-Language-Action (VLA) models.
- **Total Trajectories:** 117,000 expert demonstrations.
- **Tasks:** 35 distinct dynamic manipulation tasks with hierarchical complexities.
- **Robot Platforms:** Multi-embodiment coverage including `franka-panda`, `ur5-wsg`, `aloha-agilex`, `ARX-X5`, and `piper`.
- **Environment Settings:** Both `clean` and `randomized` scene configurations.
- **Difficulty Levels:** Progressively harder dynamic scenarios labeled from `level1` to `level3`.
## Dataset Structure
The dataset is organized hierarchically by task name. Inside each task folder, the trajectories are packaged into `.zip` files grouped by robot type, environmental condition, difficulty level, and trajectory count.
A typical file path looks like this:
```
dataset/<task_name>/<robot_name>_<condition>_<level>_<trajectory_count>.zip
```
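The filename convention above encodes the archive's metadata, so it can be unpacked programmatically. A minimal sketch, using only the Python standard library; the regular expression and the helper name are ours, not part of any official dataset tooling:

```python
import re
from pathlib import Path

# Matches <robot_name>_<condition>_<level>_<trajectory_count>.zip
# as documented above. Robot names use hyphens (e.g. franka-panda),
# so the fields are separated unambiguously by underscores.
ARCHIVE_RE = re.compile(
    r"(?P<robot>[^_]+)_(?P<condition>clean|randomized)_"
    r"(?P<level>level[123])_(?P<count>\d+)\.zip$"
)

def parse_archive_name(path):
    """Return a dict of metadata fields extracted from an archive path."""
    m = ARCHIVE_RE.match(Path(path).name)
    if m is None:
        raise ValueError(f"unexpected archive name: {path}")
    info = m.groupdict()
    info["count"] = int(info["count"])
    return info
```

This makes it easy to filter downloaded archives by robot, condition, or difficulty level before unzipping.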
## Usage
You can download the full dataset using either the Hugging Face CLI or the Python SDK.
**Option 1: Hugging Face CLI (Recommended)**
```bash
pip install -U huggingface_hub
huggingface-cli download --repo-type dataset H-EmbodVis/DOMINO --local-dir DOMINO-dataset
```
**Option 2: Python SDK**
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="H-EmbodVis/DOMINO",
    repo_type="dataset",
    local_dir="DOMINO-dataset",
)
```
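Because each task lives in its own folder, `snapshot_download` can also fetch a subset of the dataset via its `allow_patterns` argument. A minimal sketch; the task and robot names below are placeholders, so check the repository file listing for the actual folder names:

```python
# Requires: pip install -U huggingface_hub
def patterns_for(task, robot="*"):
    """Build allow_patterns globs matching one task's archives.

    `task` and `robot` are placeholders here; substitute real names
    from the repository file listing.
    """
    return [f"{task}/{robot}_*.zip"]

# Fetch only the matching archives instead of the full dataset:
# from huggingface_hub import snapshot_download
# snapshot_download(
#     repo_id="H-EmbodVis/DOMINO",
#     repo_type="dataset",
#     local_dir="DOMINO-dataset",
#     allow_patterns=patterns_for("some_task", "franka-panda"),
# )
```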
## Citation
If you find this work useful, please consider citing:
```bibtex
@article{fang2026towards,
  title={Towards Generalizable Robotic Manipulation in Dynamic Environments},
  author={Fang, Heng and Li, Shangru and Wang, Shuhan and Xi, Xuanyang and Liang, Dingkang and Bai, Xiang},
  journal={arXiv preprint arXiv:2603.15620},
  year={2026}
}
``` |