ArXiv TLDR

$M^2$-VLA: Boosting Vision-Language Models for Generalizable Manipulation via Layer Mixture and Meta-Skills

arXiv: 2604.24182

Siyao Xiao, Yuhong Zhang, Zhifang Liu, Zihan Gao, Jingye Zhang + 7 more

cs.RO

TLDR

$M^2$-VLA boosts VLM generalization for robotic manipulation with a Mixture of Layers (MoL) strategy and a Meta Skill Module (MSM), avoiding catastrophic forgetting.

Key contributions

  • Addresses VLA model limitations by preventing catastrophic forgetting and improving generalization.
  • Proposes $M^2$-VLA, leveraging generalized VLMs directly as powerful backbones for robotic manipulation.
  • Introduces Mixture of Layers (MoL) to selectively extract task-critical information from VLM features (see the first sketch after this list).
  • Develops a Meta Skill Module (MSM) for efficient trajectory learning under constrained model capacity (see the second sketch after this list).
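The digest describes MoL only at a high level. Below is a minimal sketch of one plausible reading: a learned softmax gate that mixes the per-layer hidden states of a frozen VLM into a single feature for the action head. The class name, shapes, and gating scheme are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a "Mixture of Layers" (MoL): learn a softmax weight per
# VLM layer, mix the layers' hidden states, and project the result.
# This is NOT the paper's code; all names and shapes are assumptions.
import torch
import torch.nn as nn

class MixtureOfLayers(nn.Module):
    def __init__(self, num_layers: int, hidden_dim: int):
        super().__init__()
        # One learnable logit per VLM layer; softmax turns them into mixing
        # weights, so training can emphasize task-critical layers.
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: one [batch, seq, hidden] tensor per layer, e.g. a
        # frozen VLM's outputs with output_hidden_states=True.
        stacked = torch.stack(hidden_states, dim=0)        # [L, B, S, H]
        weights = torch.softmax(self.layer_logits, dim=0)  # [L]
        mixed = torch.einsum("l,lbsh->bsh", weights, stacked)
        return self.proj(mixed)

# Toy usage with random features standing in for VLM hidden states.
feats = [torch.randn(2, 16, 64) for _ in range(12)]
mol = MixtureOfLayers(num_layers=12, hidden_dim=64)
print(mol(feats).shape)  # torch.Size([2, 16, 64])
```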
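The MSM is described only as integrating "strong inductive biases" for trajectory learning. One common way to do that, sketched below, is to predict mixing weights over a learned bank of skill basis trajectories, in the spirit of movement primitives. Again, every name, shape, and design choice here is a hypothetical stand-in, not the paper's method.

```python
# Hedged sketch of a "Meta Skill Module" reading: compose an action
# trajectory as a convex mixture of learned skill basis trajectories.
# This is NOT the paper's code; all names and shapes are assumptions.
import torch
import torch.nn as nn

class MetaSkillHead(nn.Module):
    def __init__(self, feat_dim: int, num_skills: int, horizon: int, action_dim: int):
        super().__init__()
        # Learned bank of skill basis trajectories: [K, T, A].
        self.skill_bank = nn.Parameter(
            torch.randn(num_skills, horizon, action_dim) * 0.01
        )
        self.gate = nn.Linear(feat_dim, num_skills)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: [batch, feat_dim] pooled feature from the VLM backbone.
        w = torch.softmax(self.gate(feat), dim=-1)               # [B, K]
        # Mixture over skill bases yields a full trajectory at once,
        # which is the "inductive bias" this sketch assumes.
        return torch.einsum("bk,kta->bta", w, self.skill_bank)   # [B, T, A]

head = MetaSkillHead(feat_dim=64, num_skills=8, horizon=16, action_dim=7)
print(head(torch.randn(2, 64)).shape)  # torch.Size([2, 16, 7])
```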

Why it matters

This paper tackles a central challenge for Vision-Language-Action models: adapting VLMs to precise robotic control without eroding their generalization through catastrophic forgetting. By introducing the Mixture of Layers and the Meta Skill Module, $M^2$-VLA offers a novel way to bridge the gap between high-level VLM understanding and the precise demands of robotic manipulation, advancing generalizable robotic manipulation.

Original Abstract

Current Vision-Language-Action (VLA) models predominantly rely on end-to-end fine-tuning. While effective, this paradigm compromises the inherent generalization capabilities of Vision-Language Models (VLMs) and incurs catastrophic forgetting. To address these limitations, we propose $M^2$-VLA, which demonstrates that a generalized VLM is able to serve as a powerful backbone for robotic manipulation directly. However, it remains a key challenge to bridge the gap between the high-level semantic understanding of VLMs and the precise requirements of robotic control. To overcome this, we introduce the Mixture of Layers (MoL) strategy that selectively extracts task-critical information from dense semantic features. Furthermore, to facilitate efficient trajectory learning under constrained model capacity, we propose a Meta Skill Module (MSM) that integrates strong inductive biases. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of our approach. Furthermore, generalization and ablation studies validate the architecture's zero-shot capabilities and confirm the contribution of each key component. Our code and pre-trained models will be made publicly available.
