DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation
Project Page | Paper | Code
TL;DR: DOM is a large-scale dynamic manipulation dataset with 200K episodes, 2,800+ scenes, and 206 objects for training and evaluating VLA models.
The Dynamic Object Manipulation (DOM) benchmark is designed to address the challenges of rapid perception and temporal anticipation in robotics. It includes 200K episodes spanning 2,800+ scenes and 206 objects.
If you find this dataset or the DynamicVLA framework useful for your research, please cite:
@article{xie2026dynamicvla,
title = {DynamicVLA: A Vision-Language-Action Model for
Dynamic Object Manipulation},
author = {Xie, Haozhe and
Wen, Beichen and
Zheng, Jiarui and
Chen, Zhaoxi and
Hong, Fangzhou and
Diao, Haiwen and
Liu, Ziwei},
journal = {arXiv},
volume = {2601.22153},
year = {2026}
}