MolmoAct2: Action Reasoning Models for Real-world Deployment
Haoquan Fang, Jiafei Duan, Donovan Clay, Sam Wang, Shuo Liu + 24 more
TLDR
MolmoAct2 is a fully open action reasoning model for robots, pairing a specialized VLM backbone with new large-scale datasets and an efficient reasoning architecture for real-world deployment.
Key contributions
- Introduces MolmoER, a VLM backbone specialized for spatial and embodied reasoning, trained on a 3.3M-sample corpus with a specialize-then-rehearse recipe.
- Releases MolmoAct2-BimanualYAM, at 720 hours the largest open bimanual dataset to date, plus quality-filtered Franka (DROID) and SO100/101 subsets.
- Presents OpenFAST, an open-weight, open-data action tokenizer, and a redesigned architecture that grafts a flow-matching continuous-action expert onto a discrete-token VLM via per-layer KV-cache conditioning (see the sketch after this list).
- Proposes MolmoThink, an adaptive-depth reasoning variant that re-predicts depth tokens only for scene regions that change between timesteps, cutting grounding latency to a fraction of its predecessor's.
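The KV-cache grafting is the most architecture-specific of these contributions, so a minimal PyTorch sketch may help make it concrete. Everything below (the module names, dimensions, joint-attention layout, and flow-time embedding) is an illustrative assumption rather than the paper's implementation; the core idea is that a small flow-matching action expert runs its own transformer stack while each of its layers also attends to the key/value cache the VLM produced at the matching layer.

```python
# Minimal sketch of per-layer KV-cache conditioning (assumed mechanics;
# the summary does not give the paper's exact modules or dimensions).
import torch
import torch.nn as nn

class ConditionedExpertLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, vlm_k, vlm_v):
        # Attend jointly over the expert's own tokens and the VLM's cached
        # keys/values for this layer: the "per-layer KV-cache conditioning".
        h = self.norm1(x)
        k = torch.cat([vlm_k, h], dim=1)
        v = torch.cat([vlm_v, h], dim=1)
        x = x + self.attn(h, k, v, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

class FlowMatchingActionExpert(nn.Module):
    """Predicts the flow-matching velocity for a noisy action chunk."""
    def __init__(self, action_dim: int, d_model: int = 512,
                 n_heads: int = 8, n_layers: int = 6):
        super().__init__()
        self.in_proj = nn.Linear(action_dim + 1, d_model)  # +1 for flow time t
        self.layers = nn.ModuleList(
            [ConditionedExpertLayer(d_model, n_heads) for _ in range(n_layers)])
        self.out_proj = nn.Linear(d_model, action_dim)

    def forward(self, noisy_actions, t, vlm_kv_cache):
        # noisy_actions: (B, horizon, action_dim); t: (B,) flow time in [0, 1]
        # vlm_kv_cache: one (k, v) pair per expert layer, projected to d_model.
        B, H, _ = noisy_actions.shape
        t_feat = t.view(B, 1, 1).expand(B, H, 1)
        x = self.in_proj(torch.cat([noisy_actions, t_feat], dim=-1))
        for layer, (k, v) in zip(self.layers, vlm_kv_cache):
            x = layer(x, k, v)
        return self.out_proj(x)  # velocity field for the flow-matching ODE
```

The appeal of such a design is that the VLM's forward pass, already paid for when it generates discrete reasoning tokens, doubles as conditioning for the continuous-action head at no extra encoding cost.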
Why it matters
MolmoAct2 addresses critical limitations of current VLA models by providing a fully open, high-performing solution for robot control. Its innovations in specialized VLM backbones, large-scale datasets, and efficient reasoning architectures make advanced robotics more accessible and deployable, significantly advancing the frontier of generalist robot control.
Original Abstract
Vision-Language-Action (VLA) models aim to provide a single generalist controller for robots, but today's systems fall short on the criteria that matter for real-world deployment. Frontier models are closed, open-weight alternatives are tied to expensive hardware, reasoning-augmented policies pay prohibitive latency for their grounding, and fine-tuned success rates remain below the threshold for dependable use. We present MolmoAct2, a fully open action reasoning model built for practical deployment, advancing its predecessor along five axes. We introduce MolmoER, a VLM backbone specialized for spatial and embodied reasoning, trained on a 3.3M-sample corpus with a specialize-then-rehearse recipe. We release three new datasets spanning low-to-medium cost platforms, including MolmoAct2-BimanualYAM, 720 hours of teleoperated bimanual trajectories that constitute the largest open bimanual dataset to date, together with quality-filtered Franka (DROID) and SO100/101 subsets. We provide OpenFAST, an open-weight, open-data action tokenizer trained on millions of trajectories across five embodiments. We redesign the architecture to graft a flow-matching continuous-action expert onto a discrete-token VLM via per-layer KV-cache conditioning. Finally, we propose MolmoThink, an adaptive-depth reasoning variant that re-predicts depth tokens only for scene regions that change between timesteps, retaining geometric grounding at a fraction of prior latency. In the most extensive empirical study of any open VLA to date, spanning 7 simulation and real-world benchmarks, MolmoAct2 outperforms strong baselines including π0.5, while MolmoER surpasses GPT-5 and Gemini Robotics ER-1.5 across 13 embodied-reasoning benchmarks. We release model weights, training code, and complete training data. Project page: https://allenai.org/blog/molmoact2
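The abstract leaves OpenFAST's internals unspecified, but tokenizers in the FAST family are typically described as compressing an action chunk with a discrete cosine transform over the time axis, quantizing the coefficients, and then applying byte-pair encoding. A toy sketch of that general recipe follows; the quantization scale, id offset, and the omitted trained-BPE stage are illustrative assumptions, not OpenFAST's actual parameters.

```python
# Toy FAST-style action tokenizer: DCT over time, coarse quantization,
# then (in a real tokenizer) a trained BPE merge table over the integers.
import numpy as np
from scipy.fft import dct, idct

def actions_to_ids(chunk: np.ndarray, scale: float = 10.0) -> list[int]:
    """chunk: (horizon, action_dim), normalized to roughly [-1, 1]."""
    coeffs = dct(chunk, axis=0, norm="ortho")      # decorrelate over time
    q = np.round(coeffs * scale).astype(np.int64)  # coarse quantization
    flat = q.T.flatten()                           # per-dim, low frequencies first
    # A real tokenizer would now apply BPE to `flat`; here we just offset
    # the integers into a non-negative id range (toy vocabulary of 1024).
    return (flat + 512).clip(0, 1023).tolist()

def ids_to_actions(ids: list[int], horizon: int, action_dim: int,
                   scale: float = 10.0) -> np.ndarray:
    q = (np.asarray(ids, dtype=np.int64) - 512).reshape(action_dim, horizon).T
    return idct(q / scale, axis=0, norm="ortho")   # approximate reconstruction
```

Round-tripping a smooth action chunk through actions_to_ids and ids_to_actions should reconstruct it closely, since most of the DCT energy in smooth trajectories sits in the low-frequency coefficients.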
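MolmoThink's latency saving comes from re-predicting depth tokens only where the scene changed between timesteps. A minimal sketch of that gating, assuming a fixed patch grid, a pixel-difference threshold, and a hypothetical predict_depth_tokens hook standing in for the underlying depth head:

```python
# Minimal sketch of adaptive depth re-prediction: reuse the previous
# timestep's depth tokens for unchanged patches, recompute only the rest.
# Patch size, threshold, and predict_depth_tokens are assumptions.
import torch

def changed_patch_mask(prev_img, curr_img, patch=16, thresh=0.05):
    """Images: (C, H, W) in [0, 1]. Returns a bool mask over the patch grid."""
    diff = (curr_img - prev_img).abs().mean(dim=0)              # (H, W)
    grid = diff.unfold(0, patch, patch).unfold(1, patch, patch)
    return grid.mean(dim=(-1, -2)) > thresh                     # (H/p, W/p)

def adaptive_depth_tokens(prev_tokens, prev_img, curr_img, predict_depth_tokens):
    # prev_tokens: (n_patches, d) depth tokens cached from the last timestep.
    mask = changed_patch_mask(prev_img, curr_img).flatten()     # (n_patches,)
    tokens = prev_tokens.clone()
    if mask.any():
        # Only the changed patches pay the depth-prediction latency;
        # the hook is assumed to return (mask.sum(), d) new tokens.
        tokens[mask] = predict_depth_tokens(curr_img, mask)
    return tokens
```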