LARY: A Latent Action Representation Yielding Benchmark for Generalizable Vision-to-Action Alignment
Dujun Nie, Fengjiao Chen, Qi Lv, Jun Kuang, Xiaoyu Li, et al.
TLDR
LARY introduces a benchmark and dataset for evaluating latent action representations, showing that general visual foundation models outperform specialized embodied models and that latent visual spaces align more closely with physical action space than pixel-based spaces.
Key contributions
- Introduces LARY, a benchmark for evaluating latent action representations in vision-to-action alignment.
- Curates a large dataset: 1M+ videos, 151 actions, 620K image pairs, and 595K motion trajectories.
- Shows that general visual foundation models, trained without any action supervision, outperform specialized embodied latent action models.
- Reveals that latent-based visual space is fundamentally better aligned with physical action space than pixel-based space.
Why it matters
This paper addresses the critical challenge of evaluating latent action representations for VLA models. Its findings suggest that general visual foundation models inherently encode action-relevant knowledge, pointing toward semantic-level abstraction, rather than pixel-level reconstruction, as the more effective pathway from vision to action. This could significantly advance generalizable robotic control.
Original Abstract
While the shortage of explicit action data limits Vision-Language-Action (VLA) models, human action videos offer a scalable yet unlabeled data source. A critical challenge in utilizing large-scale human video datasets lies in transforming visual signals into ontology-independent representations, known as latent actions. However, the capacity of latent action representation to derive robust control from visual observations has yet to be rigorously evaluated. We introduce the Latent Action Representation Yielding (LARY) Benchmark, a unified framework for evaluating latent action representations on both high-level semantic actions (what to do) and low-level robotic control (how to do). The comprehensively curated dataset encompasses over one million videos (1,000 hours) spanning 151 action categories, alongside 620K image pairs and 595K motion trajectories across diverse embodiments and environments. Our experiments reveal two crucial insights: (i) General visual foundation models, trained without any action supervision, consistently outperform specialized embodied latent action models. (ii) Latent-based visual space is fundamentally better aligned to physical action space than pixel-based space. These results suggest that general visual representations inherently encode action-relevant knowledge for physical control, and that semantic-level abstraction serves as a fundamentally more effective pathway from vision to action than pixel-level reconstruction.
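As a rough illustration of what evaluating a latent action representation on both levels might look like (the paper's exact protocol is not described here), the sketch below fits simple linear probes on frozen visual features: a classifier for high-level action categories and a regressor for low-level control targets. The synthetic data, dimensions, and probe choices are illustrative assumptions, not the LARY implementation.

```python
import numpy as np

# Illustrative sketch only: scores a frozen visual representation with
# linear probes, the way latent-action benchmarks are often evaluated.
# All arrays below are synthetic stand-ins, not LARY data.

rng = np.random.default_rng(0)

# Stand-in for frozen encoder features of observation (pairs of) frames.
n_samples, feat_dim, n_actions, ctrl_dim = 2000, 256, 151, 7
features = rng.normal(size=(n_samples, feat_dim))

# Stand-in labels: high-level action class and low-level control target.
action_labels = rng.integers(0, n_actions, size=n_samples)
control_targets = rng.normal(size=(n_samples, ctrl_dim))

split = int(0.8 * n_samples)
X_tr, X_te = features[:split], features[split:]

def ridge_fit(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X^T X + lam * I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# (i) High-level probe: one-vs-all linear classifier over action categories.
Y_onehot = np.eye(n_actions)[action_labels]
W_cls = ridge_fit(X_tr, Y_onehot[:split])
pred = (X_te @ W_cls).argmax(axis=1)
acc = (pred == action_labels[split:]).mean()

# (ii) Low-level probe: linear regression onto control/trajectory targets.
W_reg = ridge_fit(X_tr, control_targets[:split])
mse = ((X_te @ W_reg - control_targets[split:]) ** 2).mean()

print(f"action-probe accuracy: {acc:.3f}  control-probe MSE: {mse:.3f}")
```

With real features, the synthetic arrays would be replaced by frozen embeddings from each model under comparison; keeping the probes fixed means differences in accuracy and MSE reflect the representations themselves rather than the evaluation head.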