Unified 4D World Action Modeling from Video Priors with Asynchronous Denoising
Jun Guo, Qiwei Li, Peiyan Li, Zilong Chen, Nan Sun + 5 more
TLDR
X-WAM is a unified 4D world model that combines real-time robotic action with high-fidelity 4D synthesis using video priors and asynchronous denoising.
Key contributions
- Introduces X-WAM, a unified 4D world model for real-time robotic action and high-fidelity 4D synthesis.
- Leverages pretrained video diffusion models to predict multi-view RGB-D videos for future world imagination.
- Proposes Asynchronous Noise Sampling (ANS) for efficient action decoding and high-fidelity video generation.
- Achieves 79.2% and 90.7% average success rates on the RoboCasa and RoboTwin 2.0 benchmarks, along with superior 4D reconstruction and generation.
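The "lightweight structural adaptation" in the second bullet, where the final blocks of the pretrained Diffusion Transformer are replicated into a dedicated depth-prediction branch, can be sketched structurally. This is a toy illustration, not the paper's implementation: `Block`, `add_depth_branch`, and the replication count are hypothetical names and values.

```python
import copy

class Block:
    """Stand-in for a pretrained Diffusion Transformer block (toy compute)."""
    def __init__(self, idx):
        self.idx = idx
    def __call__(self, x):
        return x + 1  # placeholder for real attention/MLP compute

def add_depth_branch(blocks, num_replicated=2):
    """Sketch of the structural adaptation: keep a shared trunk, and
    deep-copy the final few pretrained blocks into a parallel depth
    head, so depth prediction starts from the pretrained RGB weights.
    `num_replicated` is illustrative, not the paper's value."""
    trunk = blocks[:-num_replicated]
    rgb_head = blocks[-num_replicated:]
    depth_head = copy.deepcopy(rgb_head)  # initialized from pretrained weights
    return trunk, rgb_head, depth_head

def forward(x, trunk, rgb_head, depth_head):
    """Shared trunk, then branch into RGB and depth predictions."""
    for b in trunk:
        x = b(x)
    rgb, depth = x, x
    for b in rgb_head:
        rgb = b(rgb)
    for b in depth_head:
        depth = b(depth)
    return rgb, depth
```

The design point is that only the copied head is new capacity; the trunk and its video priors are reused for both modalities.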
Why it matters
This paper introduces X-WAM, a 4D world model that overcomes the limitations of prior pixel-space 2D world models by unifying real-time robotic action execution with high-fidelity 4D world synthesis. Its use of pretrained video priors and asynchronous denoising improves both action-decoding efficiency and visual quality, a step toward more capable and realistic robotic systems.
Original Abstract
We propose X-WAM, a Unified 4D World Model that unifies real-time robotic action execution and high-fidelity 4D world synthesis (video + 3D reconstruction) in a single framework, addressing the critical limitations of prior unified world models (e.g., UWM) that only model 2D pixel-space and fail to balance action efficiency and world modeling quality. To leverage the strong visual priors of pretrained video diffusion models, X-WAM imagines the future world by predicting multi-view RGB-D videos, and obtains spatial information efficiently through a lightweight structural adaptation: replicating the final few blocks of the pretrained Diffusion Transformer into a dedicated depth prediction branch for the reconstruction of future spatial information. Moreover, we propose Asynchronous Noise Sampling (ANS) to jointly optimize generation quality and action decoding efficiency. ANS applies a specialized asynchronous denoising schedule during inference, which rapidly decodes actions with fewer steps to enable efficient real-time execution, while dedicating the full sequence of steps to generate high-fidelity video. Rather than entirely decoupling the timesteps during training, ANS samples from their joint distribution to align with the inference distribution. Pretrained on over 5,800 hours of robotic data, X-WAM achieves 79.2% and 90.7% average success rate on RoboCasa and RoboTwin 2.0 benchmarks, while producing high-fidelity 4D reconstruction and generation surpassing existing methods in both visual and geometric metrics.
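The Asynchronous Noise Sampling idea in the abstract — a short denoising schedule for actions so they can be executed in real time, the full schedule for video, and training timesteps drawn jointly to match that inference pattern — can be sketched as follows. All step counts, function names, and the coupling between the two schedules are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def ans_inference_schedules(total_steps=50, action_steps=5):
    """Toy asynchronous schedules: video latents traverse the full
    noise-level range over `total_steps`, while action latents cover
    the same range in far fewer steps so they finish decoding early.
    Step counts are illustrative, not the paper's values."""
    video_ts = np.linspace(1.0, 0.0, total_steps + 1)
    action_ts = np.linspace(1.0, 0.0, action_steps + 1)
    return video_ts, action_ts

def sample_joint_timesteps(rng, total_steps=50, action_steps=5):
    """Training-time sketch: instead of sampling the video and action
    timesteps independently, draw them jointly so training matches the
    asynchronous inference schedule. Here the coupling (nearest action
    step to the sampled video step) is an assumed simplification."""
    video_ts, action_ts = ans_inference_schedules(total_steps, action_steps)
    i = rng.integers(0, total_steps + 1)      # pick a video noise level
    t_video = video_ts[i]
    j = int(np.abs(action_ts - t_video).argmin())  # nearest action level
    t_action = action_ts[j]
    return t_video, t_action
```

The key property being modeled is that the action latent reaches its clean state after only a few denoising steps, while the video latent keeps refining, so the two streams share one diffusion process but exit at different times.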