ArXiv TLDR

Goal2Skill: Long-Horizon Manipulation with Adaptive Planning and Reflection

2604.13942

Zhen Liu, Xinyu Ning, Zhe Hu, Xinxin Xie, Weize Li + 6 more

cs.RO

TLDR

Goal2Skill is a dual-system framework for long-horizon manipulation that pairs a high-level VLM planner with a low-level VLA visuomotor controller for robust task execution.

Key contributions

  • Proposes Goal2Skill, a dual-system framework for long-horizon embodied manipulation.
  • High-level VLM-based planner handles task decomposition, memory, verification, and error correction.
  • Low-level VLA-based executor performs sub-task execution using diffusion-based action generation.
  • Achieves a 32.4% average success rate on RMBench, versus 9.8% for the strongest baseline.
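The planner-executor loop described above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: `TaskMemory`, `plan`, `execute`, and the retry logic are all hypothetical stand-ins for the VLM-based planner (decomposition, memory, verification, correction) and the VLA-based executor.

```python
from dataclasses import dataclass, field

@dataclass
class TaskMemory:
    # Hypothetical structured task memory: records verified sub-task
    # outcomes so the planner can reason over past progress and failures.
    completed: list = field(default_factory=list)
    failures: list = field(default_factory=list)

def plan(goal, memory):
    # Stand-in for the VLM-based planner: decompose the goal into
    # sub-tasks and skip any already verified as complete.
    all_steps = [f"{goal}:step{i}" for i in range(3)]
    return [s for s in all_steps if s not in memory.completed]

def execute(sub_task, attempt):
    # Stand-in for the VLA-based executor (diffusion-based action
    # generation in the paper). Here it fails once on step1 to
    # exercise the error-driven correction path.
    return not (sub_task.endswith("step1") and attempt == 0)

def run(goal, max_retries=2):
    # Closed loop between planning and execution: verify each
    # sub-task outcome, log failures, and retry before giving up.
    memory = TaskMemory()
    for sub_task in plan(goal, memory):
        for attempt in range(max_retries + 1):
            if execute(sub_task, attempt):  # outcome verification
                memory.completed.append(sub_task)
                break
            memory.failures.append((sub_task, attempt))
        else:
            return False, memory  # recovery budget exhausted
    return True, memory
```

The separation mirrors the paper's design choice: the loop body never generates actions itself, so the planner can replan or retry without caring how the executor produces motor commands.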

Why it matters

This paper addresses brittleness in long-horizon manipulation by explicitly separating planning and execution. Its dual-system approach enables adaptive replanning and robust recovery, significantly improving success rates in complex, memory-dependent tasks.

Original Abstract

Recent vision-language-action (VLA) systems have demonstrated strong capabilities in embodied manipulation. However, most existing VLA policies rely on limited observation windows and end-to-end action prediction, which makes them brittle in long-horizon, memory-dependent tasks with partial observability, occlusions, and multi-stage dependencies. Such tasks require not only precise visuomotor control, but also persistent memory, adaptive task decomposition, and explicit recovery from execution failures. To address these limitations, we propose a dual-system framework for long-horizon embodied manipulation. Our framework explicitly separates high-level semantic reasoning from low-level motor execution. A high-level planner, implemented as a VLM-based agentic module, maintains structured task memory and performs goal decomposition, outcome verification, and error-driven correction. A low-level executor, instantiated as a VLA-based visuomotor controller, carries out each sub-task through diffusion-based action generation conditioned on geometry-preserving filtered observations. Together, the two systems form a closed loop between planning and execution, enabling memory-aware reasoning, adaptive replanning, and robust online recovery. Experiments on representative RMBench tasks show that the proposed framework substantially outperforms representative baselines, achieving a 32.4% average success rate compared with 9.8% for the strongest baseline. Ablation studies further confirm the importance of structured memory and closed-loop recovery for long-horizon manipulation.
