ArXiv TLDR

FrameSkip: Learning from Fewer but More Informative Frames in VLA Training

arXiv: 2605.13757

Bin Yu, Shijie Lian, Xiaopeng Lin, Zhaolong Shen, Yuliang Wei + 6 more

cs.RO

TLDR

FrameSkip improves VLA policy training by selecting fewer, more informative frames from robot demonstrations, boosting success rates.

Key contributions

  • Addresses temporal supervision imbalance in VLA training by selecting critical, informative frames from demonstrations.
  • FrameSkip scores frames using action variation, visual-action coherence, and task-progress priors.
  • Improves the macro-average success rate across three benchmarks (76.15% vs. 66.50% for full-frame training) while retaining only 20% of unique frames.

Why it matters

Current VLA training spends most of its supervision on redundant, low-change frames, diluting the manipulation-critical transitions that matter most. FrameSkip offers a simple, effective fix at the data layer: it significantly boosts policy success rates while training on only a fifth of the unique frames, a step toward more efficient and scalable robot learning.

Original Abstract

Vision-Language-Action (VLA) policies are commonly trained from dense robot demonstration trajectories, often collected through teleoperation, by sampling every recorded frame as if it provided equally useful supervision. We argue that this convention creates a temporal supervision imbalance: long low-change segments dominate the training stream, while manipulation-critical transitions such as alignment, contact, grasping, and release appear only sparsely. We introduce FrameSkip, a data-layer frame selection framework that scores trajectory frames using action variation, visual-action coherence, task-progress priors, and gripper-transition preservation, then remaps training samples toward high-importance frames under a target retention ratio. Because FrameSkip operates only in the dataloader, it leaves the VLA architecture, action head, training objective, and inference procedure unchanged. Across RoboCasa-GR1, SimplerEnv, and LIBERO, FrameSkip improves the success-retention trade-off over full-frame training and simpler frame selection variants, achieving a macro-average success rate of 76.15% across the three benchmarks compared with 66.50% for full-frame training while using a compressed trajectory view that retains 20% of unique frames in the main setting.
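The selection mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the 0.5 gripper threshold, and the toy trajectory are all assumptions, and only the action-variation and gripper-transition terms are modeled (the visual-action coherence and task-progress priors require image features and are omitted).

```python
import numpy as np

def select_frames(actions, retention=0.2, gripper_dim=-1):
    """Score each frame and keep the top `retention` fraction.

    actions: (T, D) array of demonstration actions; the last
    dimension is assumed to hold the gripper command.
    """
    T = actions.shape[0]
    # Action variation: frames where the action changes sharply
    # (alignment, contact, grasp, release) score high.
    deltas = np.linalg.norm(np.diff(actions, axis=0), axis=1)
    scores = np.concatenate([[deltas[0]], deltas])  # pad first frame
    # Gripper-transition preservation: force-keep frames where the
    # gripper command jumps (threshold 0.5 is illustrative).
    grip = actions[:, gripper_dim]
    transitions = np.flatnonzero(np.abs(np.diff(grip)) > 0.5) + 1
    scores[transitions] = np.inf
    # Keep the highest-scoring fraction of frames, in temporal order.
    k = max(1, int(round(retention * T)))
    return np.sort(np.argsort(scores)[-k:])

# Toy demo: a mostly static 50-step trajectory with one grasp event.
acts = np.zeros((50, 7))
acts[24:, -1] = 1.0                       # gripper closes at step 24
acts[20:26, 0] = np.linspace(0, 0.3, 6)   # brief reach motion
idx = select_frames(acts, retention=0.2)  # 10 frames, including step 24
```

Because selection happens before batching, a scheme like this can live entirely in the dataloader, leaving the VLA architecture, action head, objective, and inference untouched, which is the property the paper emphasizes.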
