ArXiv TLDR

Revisiting Change VQA in Remote Sensing with Structured and Native Multimodal Qwen Models

arXiv: 2604.18429

Yakoub Bazi, Mohamad M. Al Rahhal, Mansour Zuair, Faroun Mohamed

cs.CV, cs.AI

TLDR

This paper revisits Change VQA in remote sensing with Qwen models and finds that native multimodal architectures outperform structured vision-language pipelines.

Key contributions

  • Compared the structured Qwen3-VL pipeline with the native multimodal Qwen3.5 model on the CDVQA benchmark under a unified LoRA fine-tuning setting.
  • Demonstrated that recent VLMs improve substantially over earlier specialized baselines.
  • Showed native multimodal architectures are more effective for Change VQA than structured pipelines.
  • Highlighted that performance does not scale monotonically with model size.

Why it matters

The paper offers practical guidance on choosing VLM architectures for remote sensing Change VQA: a tightly integrated multimodal backbone matters more than raw model scale or explicit multi-depth visual conditioning.

Original Abstract

Change visual question answering (Change VQA) addresses the problem of answering natural-language questions about semantic changes between bi-temporal remote sensing (RS) images. Although vision-language models (VLMs) have recently been studied for temporal RS image understanding, Change VQA remains underexplored in the context of modern multimodal models. In this letter, we revisit the CDVQA benchmark using recent Qwen models under a unified low-rank adaptation (LoRA) setting. We compare Qwen3-VL, which follows a structured vision-language pipeline with multi-depth visual conditioning and a full-attention decoder, with Qwen3.5, a native multimodal model that combines a single-stage alignment with a hybrid decoder backbone. Experimental results on the official CDVQA test splits show that recent VLMs improve over earlier specialized baselines. They further show that performance does not scale monotonically with model size, and that native multimodal models are more effective than structured vision-language pipelines for this task. These findings indicate that tightly integrated multimodal backbones contribute more to performance than scale or explicit multi-depth visual conditioning for language-driven semantic change reasoning in RS imagery.
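To make the "unified LoRA setting" concrete, below is a minimal sketch of low-rank adaptation applied to a Qwen vision-language model with the Hugging Face transformers and peft libraries. The checkpoint name, LoRA rank, and target modules are illustrative assumptions (a Qwen2-VL checkpoint stands in for the Qwen3-VL / Qwen3.5 models studied in the letter), not the authors' exact configuration.

```python
# Minimal LoRA fine-tuning sketch for a Qwen VLM (illustrative only).
# Assumptions: Hugging Face transformers + peft; the checkpoint and all
# hyperparameters are placeholders, not the paper's settings.
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id)

# Attach low-rank adapters to the decoder's attention projections;
# the base weights (including the vision encoder) stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

In a Change VQA setup, each training example would pair the two co-registered bi-temporal images with the natural-language question in a single user turn of the chat template, and the adapter is trained to generate the answer text.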
