ATLAS: An Annotation Tool for Long-horizon Robotic Action Segmentation
Sergej Stanovcic, Daniel Sliwowski, Dongheui Lee
TLDR
ATLAS is a new annotation tool for long-horizon robotic action segmentation, offering multi-modal data visualization and reducing annotation time and error.
Key contributions
- Introduces ATLAS, a specialized tool for long-horizon robotic action segmentation.
- Provides time-synchronized visualization of multi-modal robotic data, including video and proprioceptive signals.
- Natively supports common robotics dataset formats like ROS bags and RLDS, with a modular extension layer.
- Reduces per-action annotation time by at least 6% versus ELAN; time-series visualization improves temporal alignment with expert annotations by over 2.8% and decreases boundary error fivefold.
Why it matters
Annotating robotic actions is crucial, but existing tools target vision-only data and lack native support for robot-specific time-series signals. ATLAS provides a dedicated, efficient solution for multi-modal robotic data, significantly streamlining dataset creation. This accelerates research in action segmentation and manipulation policy learning.
Original Abstract
Annotating long-horizon robotic demonstrations with precise temporal action boundaries is crucial for training and evaluating action segmentation and manipulation policy learning methods. Existing annotation tools, however, are often limited: they are designed primarily for vision-only data, do not natively support synchronized visualization of robot-specific time-series signals (e.g., gripper state or force/torque), or require substantial effort to adapt to different dataset formats. In this paper, we introduce ATLAS, an annotation tool tailored for long-horizon robotic action segmentation. ATLAS provides time-synchronized visualization of multi-modal robotic data, including multi-view video and proprioceptive signals, and supports annotation of action boundaries, action labels, and task outcomes. The tool natively handles widely used robotics dataset formats such as ROS bags and the Reinforcement Learning Dataset (RLDS) format, and provides direct support for specific datasets such as REASSEMBLE. ATLAS can be easily extended to new formats via a modular dataset abstraction layer. Its keyboard-centric interface minimizes annotation effort and improves efficiency. In experiments on a contact-rich assembly task, ATLAS reduced the average per-action annotation time by at least 6% compared to ELAN, while the inclusion of time-series data improved temporal alignment with expert annotations by more than 2.8% and decreased boundary error fivefold compared to vision-only annotation tools.
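The modular dataset abstraction layer described in the abstract could plausibly look like the sketch below: format-specific readers (ROS bags, RLDS, etc.) implement one common interface that hands the UI time-synchronized multi-modal samples. All class and method names here are assumptions for illustration, not the tool's actual API.

```python
# Hypothetical sketch of a modular dataset abstraction layer in the spirit of
# the one ATLAS describes; names are assumptions, not the tool's actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Sample:
    """One time-synchronized slice of a demonstration."""
    timestamp: float                                          # seconds since episode start
    frames: Dict[str, bytes] = field(default_factory=dict)    # camera name -> encoded image
    signals: Dict[str, float] = field(default_factory=dict)   # e.g. gripper state, F/T norm


class DatasetReader(ABC):
    """Format-specific readers (ROS bag, RLDS, ...) implement this interface
    so the annotation UI stays format-agnostic."""

    @abstractmethod
    def duration(self) -> float:
        """Total episode length in seconds."""

    @abstractmethod
    def sample_at(self, t: float) -> Sample:
        """Return the multi-modal sample nearest to time t."""


class InMemoryReader(DatasetReader):
    """Toy reader over a pre-loaded list of samples, for illustration only."""

    def __init__(self, samples: List[Sample]):
        self._samples = sorted(samples, key=lambda s: s.timestamp)

    def duration(self) -> float:
        return self._samples[-1].timestamp if self._samples else 0.0

    def sample_at(self, t: float) -> Sample:
        # Nearest-neighbor lookup in time keeps video frames and
        # proprioceptive signals aligned under one clock.
        return min(self._samples, key=lambda s: abs(s.timestamp - t))
```

Under this design, adding support for a new format would only require a new `DatasetReader` subclass, leaving the annotation interface untouched.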