ArXiv TLDR

A Deployable Embodied Vision-Language Navigation System with Hierarchical Cognition and Context-Aware Exploration

arXiv:2604.21363

Kuan Xu, Ruimeng Liu, Yizhuo Yang, Denan Liang, Tongxing Jin + 3 more

cs.RO

TLDR

A deployable vision-language navigation system achieves efficient, robust navigation on real robots via hierarchical cognition and context-aware exploration.

Key contributions

  • Decouples the system into asynchronous perception, memory integration, and reasoning modules.
  • Constructs a cognitive memory graph for scene encoding and VLM-based high-level reasoning.
  • Formulates the exploration problem as a context-aware Weighted Traveling Repairman Problem (WTRP).
  • Achieves improved navigation success and efficiency on real-world robotic platforms.
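To make the "cognitive memory graph" contribution concrete, here is a minimal toy sketch of an incrementally built spatial-semantic graph that can be decomposed into local subgraphs for VLM-based reasoning. The class names, node payload, and hop-radius decomposition are illustrative assumptions, not the paper's actual data structure.

```python
from dataclasses import dataclass


@dataclass
class Viewpoint:
    # Hypothetical node payload: a position plus open-vocabulary semantic labels
    position: tuple
    labels: list


class CognitiveMemoryGraph:
    """Toy spatial-semantic graph, grown incrementally as the robot perceives
    the scene (layout is an assumption for illustration only)."""

    def __init__(self):
        self.nodes = {}   # node id -> Viewpoint
        self.edges = {}   # node id -> set of neighbor ids

    def add_viewpoint(self, vid, position, labels):
        # Perception module inserts a new viewpoint node
        self.nodes[vid] = Viewpoint(position, labels)
        self.edges.setdefault(vid, set())

    def connect(self, a, b):
        # Memory integration links spatially adjacent viewpoints
        self.edges[a].add(b)
        self.edges[b].add(a)

    def subgraph(self, seed, radius=1):
        # BFS out to `radius` hops: a compact local subgraph whose nodes and
        # labels could be serialized into a VLM reasoning prompt
        frontier, seen = {seed}, {seed}
        for _ in range(radius):
            frontier = {n for v in frontier for n in self.edges[v]} - seen
            seen |= frontier
        return seen
```

In this sketch, feeding only a seed node's local subgraph (rather than the whole map) to the VLM keeps prompts small, which matches the paper's stated goal of real-time reasoning on constrained hardware.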

Why it matters

This paper addresses the central trade-off in vision-language navigation (VLN) between strong reasoning capability and efficient deployment, enabling real-time, robust navigation on resource-constrained robotic platforms. By making sophisticated VLN deployable in real-world settings, it moves embodied AI a step closer to practical use.

Original Abstract

Bridging the gap between embodied intelligence and embedded deployment remains a key challenge in intelligent robotic systems, where perception, reasoning, and planning must operate under strict constraints on computation, memory, energy, and real-time execution. In vision-language navigation (VLN), existing approaches often face a fundamental trade-off between strong reasoning capabilities and efficient deployment on real-world platforms. In this paper, we present a deployable embodied VLN system that achieves both high efficiency and robust high-level reasoning on real-world robotic platforms. To achieve this, we decouple the system into three asynchronous modules: a real-time perception module for continuous environment sensing, a memory integration module for spatial-semantic aggregation, and a reasoning module for high-level decision making. We incrementally construct a cognitive memory graph to encode scene information, which is further decomposed into subgraphs to enable reasoning with a vision-language model (VLM). To further improve navigation efficiency and accuracy, we also leverage the cognitive memory graph to formulate the exploration problem as a context-aware Weighted Traveling Repairman Problem (WTRP), which minimizes the weighted waiting time of viewpoints. Extensive experiments in both simulation and real-world robotic platforms demonstrate improved navigation success and efficiency over existing VLN approaches, while maintaining real-time performance on resource-constrained hardware.
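The Weighted Traveling Repairman Problem mentioned in the abstract minimizes the weighted sum of "waiting times," i.e., each viewpoint's arrival time scaled by its weight. The following is a minimal brute-force sketch of that objective, assuming a precomputed travel-time matrix and hypothetical viewpoint weights; the paper's actual solver and weighting scheme are not shown here.

```python
from itertools import permutations


def weighted_waiting_time(order, travel, weights, start=0):
    """Sum of w_i * t_i, where t_i is the arrival (waiting) time at
    viewpoint i when viewpoints are visited in the given order."""
    total, t, cur = 0.0, 0.0, start
    for v in order:
        t += travel[cur][v]      # accumulate travel time along the route
        total += weights[v] * t  # this viewpoint waited t units before a visit
        cur = v
    return total


def solve_wtrp_bruteforce(travel, weights, start=0):
    """Exhaustive search over visit orders; only viable for a handful of
    candidate viewpoints, but it pins down the WTRP objective exactly."""
    viewpoints = [v for v in range(len(weights)) if v != start]
    return min(
        permutations(viewpoints),
        key=lambda order: weighted_waiting_time(order, travel, weights, start),
    )
```

The intuition: unlike a plain traveling-salesman objective (total route length), WTRP front-loads high-weight viewpoints, so promising exploration targets get visited earlier. For example, with travel times `[[0, 1, 2], [1, 0, 1], [2, 1, 0]]` and weights `[0, 3, 1]`, visiting the heavily weighted viewpoint 1 first gives cost 3·1 + 1·2 = 5, versus 1·2 + 3·3 = 11 the other way around.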
