ViVa: A Video-Generative Value Model for Robot Reinforcement Learning
Jindi Lv, Hao Li, Jie Li, Yifei Nie, Fankun Kong, et al.
TLDR
ViVa is a video-generative value model that improves robot reinforcement learning by using a pretrained video generator to estimate future dynamics and task value.
Key contributions
- ViVa repurposes a pretrained video generator to estimate value and predict future robot proprioception.
- It grounds value estimation in anticipated embodiment dynamics by leveraging spatiotemporal priors from video.
- Integrated into RECAP, ViVa significantly improves performance on real-world robot manipulation tasks such as box assembly.
- The model provides more reliable value signals and generalizes effectively to novel objects.
Why it matters
This paper addresses a key challenge in robot reinforcement learning: reliably estimating value in long-horizon tasks. By leveraging video generation, ViVa offers a novel approach to integrating temporal dynamics into value functions, which could lead to more robust and generalizable robotic systems.
Original Abstract
Vision-language-action (VLA) models have advanced robot manipulation through large-scale pretraining, but real-world deployment remains challenging due to partial observability and delayed feedback. Reinforcement learning addresses this via value functions, which assess task progress and guide policy improvement. However, existing value models built on vision-language models (VLMs) struggle to capture temporal dynamics, undermining reliable value estimation in long-horizon tasks. In this paper, we propose ViVa, a video-generative value model that repurposes a pretrained video generator for value estimation. Taking the current observation and robot proprioception as input, ViVa jointly predicts future proprioception and a scalar value for the current state. By leveraging the spatiotemporal priors of a pretrained video generator, our approach grounds value estimation in anticipated embodiment dynamics, moving beyond static snapshots to intrinsically couple value with foresight. Integrated into RECAP, ViVa delivers substantial improvements on real-world box assembly. Qualitative analysis across all three tasks confirms that ViVa produces more reliable value signals, accurately reflecting task progress. By leveraging spatiotemporal priors from video corpora, ViVa also generalizes to novel objects, highlighting the promise of video-generative models for value estimation.
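To make the interface concrete, here is a minimal PyTorch sketch of the input/output contract the abstract describes: current observation and robot proprioception in, predicted future proprioception plus a scalar value out. The module names, fusion scheme, and dimensions below are illustrative assumptions; the paper does not specify its architecture at this level of detail.

```python
import torch
import torch.nn as nn

class ViVaSketch(nn.Module):
    """Minimal sketch of a video-generative value model's interface.

    All layer choices and sizes are assumptions for illustration,
    not the authors' published architecture.
    """

    def __init__(self, video_backbone: nn.Module, latent_dim: int = 512, proprio_dim: int = 14):
        super().__init__()
        # Pretrained video generator reused as a spatiotemporal encoder/predictor.
        self.video_backbone = video_backbone
        # Project robot proprioception into the backbone's latent space.
        self.proprio_proj = nn.Linear(proprio_dim, latent_dim)
        # Head decoding predicted future proprioception.
        self.future_proprio_head = nn.Linear(latent_dim, proprio_dim)
        # Head mapping the anticipated dynamics to a scalar value.
        self.value_head = nn.Linear(latent_dim, 1)

    def forward(self, observation: torch.Tensor, proprio: torch.Tensor):
        # observation: (B, T, C, H, W) current visual context
        # proprio: (B, proprio_dim) current joint/pose state
        obs_latent = self.video_backbone(observation)       # (B, latent_dim)
        fused = obs_latent + self.proprio_proj(proprio)     # simple additive fusion (assumption)
        future_proprio = self.future_proprio_head(fused)    # predicted future proprioception
        value = self.value_head(fused).squeeze(-1)          # scalar value for the current state
        return future_proprio, value

# Usage with a stand-in backbone (the real ViVa uses a pretrained video generator):
backbone = nn.Sequential(nn.Flatten(start_dim=1), nn.LazyLinear(512))
model = ViVaSketch(backbone)
future_proprio, value = model(torch.randn(2, 4, 3, 64, 64), torch.randn(2, 14))
```

The key idea this sketch tries to capture is that both heads read from the same backbone representation, so the value estimate is coupled to the model's foresight about embodiment dynamics rather than to a static snapshot.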