World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry
Yuejiang Liu, Fan Feng, Lingjing Kong, Weifeng Lu, Jinzhou Tang, et al.
TLDR
WAV improves world-model robustness by separately verifying state plausibility and action reachability, achieving 2x higher sample efficiency and 18% better downstream policy performance.
Key contributions
- Introduces World Action Verifier (WAV) to self-improve world models by identifying prediction errors.
- Decomposes state prediction into state plausibility and action reachability, verifying each separately.
- Leverages action-free data and low-dim action features via subgoal generation and sparse inverse models.
- Achieves 2x higher sample efficiency and 18% improved downstream policy performance across nine tasks in MiniGrid, RoboMimic, and ManiSkill.
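The "lower dimensionality of action-relevant features" asymmetry can be illustrated with a toy example. The sketch below is purely hypothetical (it is not the paper's model): the state has 8 dimensions, but actions only move the first 2, so a sparse inverse model can infer actions from that low-dimensional slice alone, a much easier problem than predicting the full future state.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy dynamics: an 8-dim state in which actions only shift
# the first 2 dims, so the "action-relevant" features are low-dimensional.
ACTION_DIMS = slice(0, 2)

def step(s, a):
    """Forward dynamics: the action perturbs a small subset of features."""
    s_next = s.copy()
    s_next[ACTION_DIMS] += a
    return s_next

def sparse_inverse(s, s_next):
    """Infer the action from the action-relevant slice alone."""
    return s_next[ACTION_DIMS] - s[ACTION_DIMS]

s = rng.normal(size=8)
a_true = rng.normal(size=2)
a_hat = sparse_inverse(s, step(s, a_true))
print(np.allclose(a_hat, a_true))  # True: 2-dim inference vs. 8-dim prediction
```

In the paper's setting the subset of relevant features is learned rather than known, but the payoff is the same: verifying reachability in a few dimensions is cheaper and more robust than forward-predicting every state feature.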
Why it matters
World models promise scalable policy evaluation, optimization, and planning, but their robustness, especially on the suboptimal actions that interaction data rarely covers, remains a major hurdle. WAV offers a self-improvement mechanism that verifies a model's own predictions in under-explored regions of the action space, yielding substantial gains in sample efficiency and downstream policy performance and making world models more practical for complex real-world applications.
Original Abstract
General-purpose world models promise scalable policy evaluation, optimization, and planning, yet achieving the required level of robustness remains challenging. Unlike policy learning, which primarily focuses on optimal actions, a world model must be reliable over a much broader range of suboptimal actions, which are often insufficiently covered by action-labeled interaction data. To address this challenge, we propose World Action Verifier (WAV), a framework that enables world models to identify their own prediction errors and self-improve. The key idea is to decompose action-conditioned state prediction into two factors -- state plausibility and action reachability -- and verify each separately. We show that these verification problems can be substantially easier than predicting future states due to two underlying asymmetries: the broader availability of action-free data and the lower dimensionality of action-relevant features. Leveraging these asymmetries, we augment a world model with (i) a diverse subgoal generator obtained from video corpora and (ii) a sparse inverse model that infers actions from a subset of state features. By enforcing cycle consistency among generated subgoals, inferred actions, and forward rollouts, WAV provides an effective verification mechanism in under-explored regimes, where existing methods typically fail. Across nine tasks spanning MiniGrid, RoboMimic, and ManiSkill, our method achieves 2x higher sample efficiency while improving downstream policy performance by 18%.
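The cycle-consistency check described in the abstract can be sketched with toy linear components. Everything below is a hypothetical stand-in, not the paper's implementation: `subgoal_generator` plays the role of the video-trained generator (state plausibility), `inverse_model` the sparse inverse model (action reachability), and `world_model` the forward model being verified. A prediction is trusted when the forward rollout of the inferred action closes the loop back to the generated subgoal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear dynamics: s' = A s + B a (hypothetical, for illustration only)
A = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 2))

def world_model(s, a):
    """Forward model under verification."""
    return A @ s + B @ a

def subgoal_generator(s):
    """Stand-in for a generator trained on action-free video: proposes a
    plausible, reachable next state."""
    return world_model(s, rng.normal(size=2))

def inverse_model(s, g):
    """Stand-in for the sparse inverse model: least-squares action that
    would reach subgoal g from state s."""
    return np.linalg.lstsq(B, g - A @ s, rcond=None)[0]

def cycle_consistency_error(s):
    g = subgoal_generator(s)      # (i) generate a subgoal
    a = inverse_model(s, g)       # (ii) infer the action reaching it
    s_pred = world_model(s, a)    # (iii) forward rollout of that action
    return float(np.linalg.norm(s_pred - g))  # small => prediction verified

err = cycle_consistency_error(rng.normal(size=4))
print(err < 1e-6)  # True: a consistent model closes the cycle
```

With a miscalibrated forward model (e.g. a perturbed `A` used only in step (iii)), the cycle error grows, which is the signal WAV uses to flag its own prediction errors and drive self-improvement.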