UniDriveVLA: Unifying Understanding, Perception, and Action Planning for Autonomous Driving
Yongkang Li, Lijun Zhou, Sixu Yan, Bencheng Liao, Tianyi Yan + 9 more
TLDR
UniDriveVLA unifies autonomous driving tasks by decoupling spatial perception from semantic reasoning with a Mixture-of-Transformers of task experts, achieving state-of-the-art open-loop and closed-loop driving performance.
Key contributions
- Introduces UniDriveVLA, a unified Vision-Language-Action (VLA) model for autonomous driving built on a Mixture-of-Transformers.
- Resolves the perception-reasoning conflict by decoupling experts for driving understanding, scene perception, and action planning, coordinated through masked joint attention.
- Combines a sparse perception paradigm with a three-stage progressive training strategy to improve spatial perception while preserving semantic reasoning.
- Achieves state-of-the-art results in open-loop evaluation on nuScenes and closed-loop evaluation on Bench2Drive.
Why it matters
This paper addresses a critical limitation of Vision-Language-Action models for autonomous driving: coupling spatial perception and semantic reasoning in shared parameters forces a trade-off between the two. By decoupling them into separate experts, UniDriveVLA enables more robust and unified driving systems, an advance that could lead to safer and more capable self-driving vehicles.
Original Abstract
Vision-Language-Action (VLA) models have recently emerged in autonomous driving, with the promise of leveraging rich world knowledge to improve the cognitive capabilities of driving systems. However, adapting such models for driving tasks currently faces a critical dilemma between spatial perception and semantic reasoning. Consequently, existing VLA systems are forced into suboptimal compromises: directly adopting 2D Vision-Language Models yields limited spatial perception, whereas enhancing them with 3D spatial representations often impairs the native reasoning capacity of VLMs. We argue that this dilemma largely stems from the coupled optimization of spatial perception and semantic reasoning within shared model parameters. To overcome this, we propose UniDriveVLA, a Unified Driving Vision-Language-Action model based on Mixture-of-Transformers that addresses the perception-reasoning conflict via expert decoupling. Specifically, it comprises three experts for driving understanding, scene perception, and action planning, which are coordinated through masked joint attention. In addition, we combine a sparse perception paradigm with a three-stage progressive training strategy to improve spatial perception while maintaining semantic reasoning capability. Extensive experiments show that UniDriveVLA achieves state-of-the-art performance in open-loop evaluation on nuScenes and closed-loop evaluation on Bench2Drive. Moreover, it demonstrates strong performance across a broad range of perception, prediction, and understanding tasks, including 3D detection, online mapping, motion forecasting, and driving-oriented VQA, highlighting its broad applicability as a unified model for autonomous driving. Code and model have been released at https://github.com/xiaomi-research/unidrivevla
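To make the expert-decoupled design more concrete, below is a minimal PyTorch sketch of a single Mixture-of-Transformers block with masked joint attention, in the spirit of the architecture the abstract describes. The three expert names, the class and argument names, and the shape of the attention mask are all assumptions for illustration; the authors' released code at the link above is the authoritative implementation.

```python
# Hypothetical sketch of one Mixture-of-Transformers block with masked joint
# attention. Names and shapes are assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

EXPERTS = ("understanding", "perception", "planning")


class MoTBlock(nn.Module):
    """Each expert keeps its own QKV, output, and FFN parameters, while all
    tokens interact through a single masked joint attention."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.heads = heads
        # Decoupled, per-expert parameters.
        self.qkv = nn.ModuleDict({e: nn.Linear(dim, 3 * dim) for e in EXPERTS})
        self.proj = nn.ModuleDict({e: nn.Linear(dim, dim) for e in EXPERTS})
        self.ffn = nn.ModuleDict({
            e: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for e in EXPERTS
        })
        self.norm1 = nn.ModuleDict({e: nn.LayerNorm(dim) for e in EXPERTS})
        self.norm2 = nn.ModuleDict({e: nn.LayerNorm(dim) for e in EXPERTS})

    def forward(self, tokens: dict, attn_mask: torch.Tensor) -> dict:
        # tokens[e]: (B, N_e, D) token sequence for expert e.
        # attn_mask: (N_total, N_total) boolean mask, True where a query token
        # may attend to a key token; this is where the coordination policy
        # between understanding, perception, and planning tokens would live.
        lengths = [tokens[e].shape[1] for e in EXPERTS]
        qs, ks, vs = [], [], []
        for e in EXPERTS:
            q, k, v = self.qkv[e](self.norm1[e](tokens[e])).chunk(3, dim=-1)
            qs.append(q); ks.append(k); vs.append(v)
        q = torch.cat(qs, dim=1)
        k = torch.cat(ks, dim=1)
        v = torch.cat(vs, dim=1)
        B, N, D = q.shape
        # Shared (joint) attention over all experts' tokens.
        q, k, v = (t.view(B, N, self.heads, D // self.heads).transpose(1, 2)
                   for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
        out = out.transpose(1, 2).reshape(B, N, D)
        # Route attended tokens back to their own expert's output/FFN weights.
        new_tokens, start = {}, 0
        for e, n in zip(EXPERTS, lengths):
            h = tokens[e] + self.proj[e](out[:, start:start + n])
            new_tokens[e] = h + self.ffn[e](self.norm2[e](h))
            start += n
        return new_tokens
```

The property this sketch tries to illustrate is that every token is transformed only by its own expert's projection and feed-forward weights, while the masked joint attention is the sole channel through which the three experts exchange information.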