ArXiv TLDR

Graph-based Semantic Calibration Network for Unaligned UAV RGBT Image Semantic Segmentation and A Large-scale Benchmark

2604.26893

Fangqiang Fan, Zhicheng Zhao, Xiaoliang Ma, Chenglong Li, Jin Tang

cs.CV

TLDR

GSCNet tackles cross-modal spatial misalignment and fine-grained semantic confusion in unaligned UAV RGBT semantic segmentation, and the paper introduces URTF, a large-scale benchmark for the task.

Key contributions

  • Proposes GSCNet (Graph-based Semantic Calibration Network) to tackle cross-modal misalignment and semantic confusion in UAV RGBT segmentation.
  • The Feature Decoupling and Alignment Module (FDAM) separates each modality into shared structural and private perceptual components, then performs deformable alignment in the shared subspace for robust spatial correction of unaligned images.
  • The Semantic Graph Calibration Module (SGCM) encodes hierarchical and co-occurrence priors in a category graph and applies graph-attention reasoning to calibrate predictions for visually similar and rare categories.
  • Introduces URTF (Unaligned RGB-Thermal Fine-grained), to the authors' knowledge the largest fine-grained benchmark of its kind, with over 25,000 unaligned RGBT image pairs across 61 categories.

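The FDAM idea above, decoupling each modality and spatially correcting the shared component, can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the function names, the plain matrix-multiply projections, and the nearest-neighbour warp (a stand-in for learned deformable sampling) are all illustrative assumptions.

```python
import numpy as np

def decouple(feat, w_shared, w_private):
    """Project a modality feature map (C, H, W) into a shared-structural
    and a private-perceptual component via channel-wise projections
    (stand-ins for learned 1x1 convolutions)."""
    c, h, w = feat.shape
    flat = feat.reshape(c, -1)                        # (C, H*W)
    shared = (w_shared @ flat).reshape(c, h, w)
    private = (w_private @ flat).reshape(c, h, w)
    return shared, private

def deformable_align(src, offsets):
    """Warp `src` (C, H, W) by per-pixel offsets (2, H, W) in (dy, dx)
    order, using nearest-neighbour sampling with border clamping as a
    simplified proxy for deformable alignment."""
    c, h, w = src.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(ys + offsets[0].round().astype(int), 0, h - 1)
    xs = np.clip(xs + offsets[1].round().astype(int), 0, w - 1)
    return src[:, ys, xs]
```

In this picture, offsets would be predicted from the concatenated shared components of the RGB and thermal features, so alignment happens in the appearance-invariant subspace rather than on raw modality features.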
Why it matters

This paper is crucial for advancing UAV scene understanding in challenging conditions. By addressing both spatial misalignment and semantic confusion, GSCNet improves fine-grained RGBT segmentation. The new URTF benchmark provides a vital resource for future research and development in this critical area.

Original Abstract

Fine-grained RGBT image semantic segmentation is crucial for all-weather unmanned aerial vehicle (UAV) scene understanding. However, UAV RGBT semantic segmentation faces two coupled challenges: cross-modal spatial misalignment caused by sensor parallax and platform vibration, and severe semantic confusion among fine-grained ground objects under top-down aerial views. To address these issues, we propose a Graph-based Semantic Calibration Network (GSCNet) for unaligned UAV RGBT image semantic segmentation. Specifically, we design a Feature Decoupling and Alignment Module (FDAM) that decouples each modality into shared structural and private perceptual components and performs deformable alignment in the shared subspace, enabling robust spatial correction with reduced modality appearance interference. Moreover, we propose a Semantic Graph Calibration Module (SGCM) that explicitly encodes the hierarchical taxonomy and co-occurrence regularities among ground-object categories in UAV scenes into a structured category graph, and incorporates these priors into graph-attention reasoning to calibrate predictions of visually similar and rare categories. In addition, we construct the Unaligned RGB-Thermal Fine-grained (URTF) benchmark, which is, to the best of our knowledge, the largest and most fine-grained benchmark for unaligned UAV RGBT image semantic segmentation, containing over 25,000 image pairs across 61 categories with realistic cross-modal misalignment. Extensive experiments on URTF demonstrate that GSCNet significantly outperforms state-of-the-art methods, with notable gains on fine-grained categories. The dataset is available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
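The SGCM described in the abstract can be illustrated with a toy sketch in plain NumPy: per-pixel class logits are recalibrated by attention over a category graph whose edges encode hierarchical and co-occurrence priors. This is not the authors' code; the function names, the masked-softmax attention form, and the mixing weight `alpha` are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_calibrate(logits, adj, alpha=0.5):
    """Calibrate per-pixel class logits (N, K) with a category graph.

    `adj` (K, K) encodes prior affinity between categories (hierarchy,
    co-occurrence); it should include self-loops. Attention is computed
    only over graph neighbours (zero-affinity pairs are masked out), and
    the propagated evidence is blended with the original logits."""
    mask = np.where(adj > 0, 0.0, -np.inf)      # restrict to graph edges
    attn = softmax(adj + mask, axis=1)          # (K, K) row-normalised
    propagated = logits @ attn.T                # mix evidence of related classes
    return (1.0 - alpha) * logits + alpha * propagated
```

With an identity adjacency (no inter-category edges) the calibration is a no-op; adding an edge between two easily confused categories lets confident evidence for one reinforce or suppress the other.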
