Privileged Foresight Distillation: Zero-Cost Future Correction for World Action Models
Pengcheng Fang, Hongli Chen, Xiaohao Cai
TLDR
Privileged Foresight Distillation (PFD) distills future-conditioned corrections from a privileged training-time teacher into current-only policies, improving manipulation performance at no added inference cost.
Key contributions
- Introduces Privileged Foresight Distillation (PFD) to transfer future-conditioned corrections.
- Formulates foresight as an action-denoising residual, distilled from a teacher into a current-only student adapter (see the training-loss sketch after this list).
- Achieves consistent performance gains on LIBERO and RoboTwin manipulation benchmarks.
- Maintains current-only inference with negligible latency, without generating future video.
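
The residual-distillation recipe in the second bullet can be made concrete. Below is a minimal PyTorch sketch, assuming a frozen shared denoising backbone with a hypothetical `backbone(noisy_action, timestep, video_tokens=...)` interface; the names, shapes, and freezing choice are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pfd_loss(backbone: nn.Module,
             adapter: nn.Module,
             obs_tokens: torch.Tensor,     # current-frame video tokens (B, T_c, D)
             future_tokens: torch.Tensor,  # privileged future video tokens (B, T_f, D)
             noisy_action: torch.Tensor,   # action sample at some diffusion noise level
             timestep: torch.Tensor) -> torch.Tensor:
    """Distill the future-conditioned denoising correction into a current-only adapter."""
    with torch.no_grad():
        # Teacher pass: the shared backbone attends to current AND future tokens.
        eps_teacher = backbone(
            noisy_action, timestep,
            video_tokens=torch.cat([obs_tokens, future_tokens], dim=1))
        # Student base pass: same backbone, attention restricted to current tokens.
        eps_current = backbone(noisy_action, timestep, video_tokens=obs_tokens)
        # Privileged foresight residual: what the true future adds to action denoising.
        residual_target = eps_teacher - eps_current

    # A small adapter learns to predict that residual from current-only inputs.
    residual_pred = adapter(noisy_action, timestep, obs_tokens)
    return F.mse_loss(residual_pred, residual_target)
```

At inference, the student would add the adapter's predicted residual to its current-only denoising output, so future video never needs to be generated.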
Why it matters
This paper reframes the role of future information in world action models, showing it acts as a distillable correction rather than a mere regularizer. It offers a practical way to exploit privileged future observations during training to improve current-only robot control, enhancing manipulation performance without increasing inference complexity.
Original Abstract
World action models jointly predict future video and action during training, raising an open question about what role the future-prediction branch actually plays. A recent finding shows that this branch can be removed at inference with little to no loss on common manipulation benchmarks, suggesting that future information may act merely as a regularizer on the shared visual backbone. We propose instead that joint training induces an action-conditioned correction that privileged future observations impose on action denoising, and that current-only policies capture this correction only partially. Making the account precise, we formulate privileged foresight as a residual in the action-denoising direction (the difference between what a model predicts given the true future and what it predicts given only the current frame), and introduce *Privileged Foresight Distillation (PFD)*, which transfers this residual from a training-time teacher into a small adapter on a current-only student. The teacher and student share the same backbone and differ only in the attention mask over video tokens; future video is never generated at inference. Controlled experiments verify that this gain reflects a genuine future-conditioned correction rather than a side effect of capacity or regularization. Empirically, PFD achieves consistent improvements on LIBERO and RoboTwin manipulation benchmarks while preserving the current-only inference interface at negligible added latency. This view reframes the role of future information in world action models: not as a target to predict, nor as a regularizer to absorb, but as a compressible correction to be distilled.
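
The abstract's key architectural point is that teacher and student share one backbone and differ only in the attention mask over video tokens. The sketch below illustrates one plausible realization, assuming tokens are concatenated as [current | future] along the sequence axis; the layout and the per-token boolean mask (rather than a full pairwise mask) are illustrative assumptions.

```python
import torch

def video_attention_mask(n_current: int, n_future: int, privileged: bool) -> torch.Tensor:
    """Boolean keep-mask (True = may attend) over the [current | future] token axis."""
    mask = torch.zeros(n_current + n_future, dtype=torch.bool)
    mask[:n_current] = True       # both roles always see current-frame tokens
    if privileged:
        mask[n_current:] = True   # the teacher additionally attends to future tokens
    return mask

# Teacher sees everything; the student's mask hides future tokens, so no future
# video ever needs to be generated at inference.
teacher_mask = video_attention_mask(n_current=256, n_future=256, privileged=True)
student_mask = video_attention_mask(n_current=256, n_future=256, privileged=False)
```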