ArXiv TLDR

LiDAR Teach, Radar Repeat: Robust Cross-Modal Navigation in Degenerate and Varying Environments

2605.02809

Renxiang Xiao, Yichen Chen, Yuanfan Zhang, Qianyi Shao, Yushuai Chen + 3 more

cs.RO

TLDR

LTR$^2$ is a novel LiDAR-Teach-and-Radar-Repeat system enabling robust, cross-modal navigation in diverse, challenging environments with centimeter-level accuracy.

Key contributions

  • Proposes LTR$^2$, a cross-modal LiDAR-Teach and Radar-Repeat system for robust navigation.
  • Introduces a Cross-Modal Registration (CMR) network for aligning 4D radar with 3D LiDAR data.
  • Develops an adaptive fine-tuning strategy for long-term adaptability to static environmental changes.
  • Achieves centimeter-level accuracy and strong robustness in long-term, large-scale deployments.

Why it matters

This paper addresses a critical challenge in autonomous navigation: robust operation in varying and degraded environments. By combining LiDAR's precision with radar's resilience, LTR$^2$ offers a practical and highly accurate solution. Its adaptability and proven performance in real-world conditions significantly advance long-term autonomy.

Original Abstract

Long-term autonomy requires robust navigation in environments subject to dynamic and static changes, as well as adverse weather conditions. Teach-and-Repeat (T&R) navigation offers a reliable and cost-effective solution by avoiding the need for consistent global mapping; however, existing T&R systems lack a systematic solution to tackle various environmental variations such as weather degradation, ephemeral dynamics, and structural changes. This work proposes LTR$^2$, the first cross-modal, cross-platform LiDAR-Teach-and-Radar-Repeat system that systematically addresses these challenges. LTR$^2$ leverages LiDAR during the teaching phase to capture precise structural information under normal conditions and utilizes 4D millimeter-wave radar during the repeating phase for robust operation under environmental degradations. To align sparse and noisy forward-looking 4D radar with dense and accurate omnidirectional 3D LiDAR data, we introduce a Cross-Modal Registration (CMR) network that jointly exploits Doppler-based motion priors and the physical laws governing LiDAR intensity and radar power density. Furthermore, we propose an adaptive fine-tuning strategy that incrementally updates the CMR network based on localization errors, enabling long-term adaptability to static environmental changes without ground-truth labels. We demonstrate that the proposed CMR network achieves state-of-the-art cross-modal registration performance on the open-access dataset. Then we validate LTR$^2$ across three robot platforms over a large-scale, long-term deployment (40+ km over 6 months), including challenging conditions such as nighttime smoke. Experimental results and ablation studies demonstrate centimeter-level accuracy and strong robustness against diverse environmental disturbances, significantly outperforming existing approaches.
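The abstract's adaptive fine-tuning idea — using localization errors as a label-free trigger for incremental CMR updates — can be sketched as a simple error-gated buffer. This is a minimal illustrative sketch, not the paper's implementation: the class name, thresholds, and batching policy are all assumptions, and the actual gradient step is left as a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveFinetuner:
    """Error-gated trigger for incrementally fine-tuning a registration model.

    Repeat-phase scans whose localization error exceeds `error_threshold`
    are treated as evidence of static environmental change and collected
    as self-supervised fine-tuning samples; once `batch_size` samples
    accumulate, an update step fires. All names and values are
    illustrative, not taken from the paper.
    """
    error_threshold: float = 0.10   # metres; hypothetical gate
    batch_size: int = 3             # samples per fine-tuning step
    buffer: list = field(default_factory=list)
    updates: int = 0                # fine-tuning steps performed so far

    def observe(self, scan_id: int, localization_error: float) -> bool:
        """Record one repeat-phase scan; return True if an update fired."""
        if localization_error > self.error_threshold:
            self.buffer.append(scan_id)
        if len(self.buffer) >= self.batch_size:
            # Placeholder for a real gradient step on the CMR network,
            # using the accumulated scans as pseudo-labelled pairs.
            self.updates += 1
            self.buffer.clear()
            return True
        return False
```

For example, a stream of errors `[0.05, 0.2, 0.3, 0.02, 0.4]` adds three high-error scans to the buffer and fires exactly one update on the last observation.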
