Reconstruction or Semantics? What Makes a Latent Space Useful for Robotic World Models
Nilaksh, Saurav Jha, Artem Zholus, Sarath Chandar
TLDR
This paper evaluates reconstruction versus semantic latent spaces for robotic world models and finds that semantic spaces are better suited to policy-relevant tasks.
Key contributions
- Systematically compares six reconstruction and semantic encoders for robotic world models.
- Proposes three key evaluation axes: visual fidelity, planning/policy performance, and latent quality.
- Demonstrates that visual fidelity alone is insufficient for selecting effective world models.
- Shows that semantic encoders (e.g., V-JEPA 2.1) significantly outperform reconstruction encoders on policy-relevant tasks.
Why it matters
Robotic world models offer a practical proxy for testing robot control policies without costly real-world rollouts. This study provides a critical evaluation of the latent spaces underlying such models, guiding future development toward more effective, policy-relevant designs. It highlights the importance of semantic understanding over mere pixel reconstruction for practical robotics.
Original Abstract
World model-based policy evaluation is a practical proxy for testing real-world robot control by rolling out candidate actions in action-conditioned video diffusion models. As these models increasingly adopt latent diffusion modeling (LDM), choosing the right latent space becomes critical. While the status quo uses autoencoding latent spaces like VAEs that are primarily trained for pixel reconstruction, recent work suggests benefits from pretrained encoders with representation-aligned semantic latent spaces. We systematically evaluate these latent spaces for action-conditioned LDM by comparing six reconstruction and semantic encoders to train world model variants under a fixed protocol on the BridgeV2 dataset, and show effective world model training in high-dimensional representation spaces with and without dimension compression. We then propose three axes to assess robotic world model performance: visual fidelity, planning and downstream policy performance, and latent representation quality. Our results show visual fidelity alone is insufficient for world model selection. While reconstruction encoders like VAE and Cosmos achieve strong pixel-level scores, semantic encoders such as V-JEPA 2.1 (strongest overall on policy), Web-DINO, and SigLIP 2 generally excel across the other two axes at all model scales. Our study advocates semantic latent spaces as a stronger foundation for policy-relevant robotics diffusion world models.
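The three-axis evaluation described above can be sketched as a simple multi-criteria comparison. The sketch below is illustrative only: the encoder names are taken from the abstract, but every score and the equal axis weighting are hypothetical placeholders, not results or methodology from the paper.

```python
# Hedged sketch: ranking world-model encoders across the three evaluation
# axes (visual fidelity, planning/policy performance, latent quality).
# All numeric scores are invented for illustration.

def rank_encoders(scores, weights):
    """Return encoder names sorted by weighted score, best first."""
    def weighted(axes):
        return sum(weights[axis] * value for axis, value in axes.items())
    return sorted(scores, key=lambda name: weighted(scores[name]), reverse=True)

# Hypothetical per-axis scores in [0, 1] for two encoder families.
scores = {
    "VAE":    {"fidelity": 0.9, "policy": 0.5, "latent": 0.4},
    "V-JEPA": {"fidelity": 0.7, "policy": 0.9, "latent": 0.8},
}
# Equal weighting is a design choice here, not the paper's protocol.
weights = {"fidelity": 1 / 3, "policy": 1 / 3, "latent": 1 / 3}

print(rank_encoders(scores, weights))  # → ['V-JEPA', 'VAE']
```

The point the sketch makes is the paper's: an encoder that wins on fidelity alone (here, the VAE-like row) can still rank below a semantic encoder once policy performance and latent quality enter the comparison.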