ArXiv TLDR

AnchorSeg: Language Grounded Query Banks for Reasoning Segmentation

arXiv: 2604.18562

Rui Qian, Chuanhang Deng, Qiang Huang, Jian Xiong, Mingxuan Li + 4 more

cs.CV

TLDR

AnchorSeg improves reasoning segmentation by using language-grounded query banks to explicitly decouple semantic reasoning from spatial localization.

Key contributions

  • Reformulates reasoning segmentation as structured conditional generation over image tokens.
  • Uses ordered query banks (latent reasoning tokens plus a segmentation anchor token) to decouple semantic reasoning from spatial grounding.
  • Models spatial conditioning via a factorized distribution, with anchor for localization and contextual queries for semantics.
  • Introduces Token-Mask Cycle Consistency (TMCC) for robust alignment between token and pixel-level supervision.
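The factorized spatial conditioning described above can be sketched in a few lines of NumPy: the anchor query yields a localization distribution over image tokens, while the reasoning queries contribute a per-token semantic score, and the two factors are multiplied. This is a minimal illustrative sketch under assumed shapes and scoring functions, not the paper's actual implementation; all variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d, n_img, n_reason = 16, 64, 4  # embed dim, image tokens, reasoning tokens

image_tokens = rng.normal(size=(n_img, d))     # visual features (stand-in)
reason_queries = rng.normal(size=(n_reason, d))  # latent reasoning tokens
anchor_query = rng.normal(size=(d,))             # segmentation anchor token

# Anchor factor: localization distribution over image tokens.
loc = softmax(image_tokens @ anchor_query / np.sqrt(d))          # (n_img,)

# Contextual factor: semantic modulation from the reasoning queries.
sem = sigmoid(image_tokens @ reason_queries.T / np.sqrt(d)).mean(axis=-1)  # (n_img,)

# Factorized distribution over image tokens: localization x semantics.
p_mask = loc * sem
p_mask = p_mask / p_mask.sum()
```

The point of the factorization is that `loc` and `sem` can be inspected and supervised separately, rather than both signals being entangled in one `<SEG>` embedding.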

Why it matters

Existing reasoning segmentation models compress semantic reasoning and spatial localization into a single segmentation token, making the two hard to disentangle. AnchorSeg instead decouples them explicitly through structured query banks, leading to more precise and interpretable results, and achieves state-of-the-art performance on ReasonSeg.

Original Abstract

Reasoning segmentation requires models to ground complex, implicit textual queries into precise pixel-level masks. Existing approaches rely on a single segmentation token $\texttt{<SEG>}$, whose hidden state implicitly encodes both semantic reasoning and spatial localization, limiting the model's ability to explicitly disentangle what to segment from where to segment. We introduce AnchorSeg, which reformulates reasoning segmentation as a structured conditional generation process over image tokens, conditioned on language grounded query banks. Instead of compressing all semantic reasoning and spatial localization into a single embedding, AnchorSeg constructs an ordered sequence of query banks: latent reasoning tokens that capture intermediate semantic states, and a segmentation anchor token that provides explicit spatial grounding. We model spatial conditioning as a factorized distribution over image tokens, where the anchor query determines localization signals while contextual queries provide semantic modulation. To bridge token-level predictions and pixel-level supervision, we propose Token--Mask Cycle Consistency (TMCC), a bidirectional training objective that enforces alignment across resolutions. By explicitly decoupling spatial grounding from semantic reasoning through structured language grounded query banks, AnchorSeg achieves state-of-the-art results on ReasonSeg test set (67.7\% gIoU and 68.1\% cIoU). All code and models are publicly available at https://github.com/rui-qian/AnchorSeg.
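The bidirectional TMCC objective mentioned in the abstract can be illustrated with a toy NumPy sketch: token-level predictions are upsampled and compared against the pixel mask, and the pixel mask is pooled down and compared against the token predictions. This is a hedged sketch assuming average pooling, nearest-neighbour upsampling, and a squared-error loss; the paper's exact operators and loss may differ, and names like `pool_to_tokens` are hypothetical.

```python
import numpy as np

k = 4  # each token covers a k x k pixel patch

def pool_to_tokens(mask, k):
    # Average-pool a (H, W) pixel map down to the (H/k, W/k) token grid.
    H, W = mask.shape
    return mask.reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def upsample_to_pixels(grid, k):
    # Nearest-neighbour upsample a token grid back to pixel resolution.
    return np.repeat(np.repeat(grid, k, axis=0), k, axis=1)

# Toy ground-truth pixel mask (32 x 32) containing a square object.
pixel_mask = np.zeros((32, 32))
pixel_mask[8:24, 8:24] = 1.0

# Toy token-level prediction (8 x 8): pooled ground truth plus noise.
rng = np.random.default_rng(0)
token_probs = np.clip(
    pool_to_tokens(pixel_mask, k) + 0.1 * rng.normal(size=(8, 8)), 0.0, 1.0
)

# Token -> pixel direction: upsample predictions, compare to the pixel mask.
loss_t2p = np.mean((upsample_to_pixels(token_probs, k) - pixel_mask) ** 2)

# Pixel -> token direction: pool the pixel mask, compare to token predictions.
loss_p2t = np.mean((token_probs - pool_to_tokens(pixel_mask, k)) ** 2)

tmcc_loss = loss_t2p + loss_p2t  # symmetric cycle-consistency objective
```

Enforcing both directions keeps the token grid and the pixel mask consistent across resolutions, so token-level reasoning cannot drift away from the pixel-level supervision.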

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.