FIRE-CIR: Fine-grained Reasoning for Composed Fashion Image Retrieval
François Gardères, Camille-Sovanneary Gauthier, Jean Ponce, Shizhe Chen
TL;DR
FIRE-CIR improves fashion image retrieval by using question-driven visual reasoning to interpret modifications, outperforming SOTA and enhancing interpretability.
Key contributions
- Introduces FIRE-CIR for fine-grained compositional reasoning in fashion image retrieval.
- Uses question-driven visual reasoning, generating attribute-focused questions to verify modifications.
- Automatically constructs a large-scale fashion VQA dataset for training the reasoning system.
- Re-ranks candidates using explicit reasoning, outperforming SOTA and providing interpretable insights.
Why it matters
Current CIR models struggle with fine-grained reasoning and interpretability in fashion. This paper introduces a novel question-driven approach that not only achieves SOTA performance but also provides valuable, attribute-level insights. This makes the system more trustworthy and useful for practical applications.
Original Abstract
Composed image retrieval (CIR) aims to retrieve a target image that depicts a reference image modified by a textual description. While recent vision-language models (VLMs) achieve promising CIR performance by embedding images and text into a shared space for retrieval, they often fail to reason about what to preserve and what to change. This limitation hinders interpretability and yields suboptimal results, particularly in fine-grained domains like fashion. In this paper, we introduce FIRE-CIR, a model that brings compositional reasoning and interpretability to fashion CIR. Instead of relying solely on embedding similarity, FIRE-CIR performs question-driven visual reasoning: it automatically generates attribute-focused visual questions derived from the modification text, and verifies the corresponding visual evidence in both reference and candidate images. To train such a reasoning system, we automatically construct a large-scale fashion-specific visual question answering dataset, containing questions requiring either single- or dual-image analysis. During retrieval, our model leverages this explicit reasoning to re-rank candidate results, filtering out images inconsistent with the intended modifications. Experimental results on the Fashion IQ benchmark show that FIRE-CIR outperforms state-of-the-art methods in retrieval accuracy. It also provides interpretable, attribute-level insights into retrieval decisions.
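The abstract describes a pipeline: derive attribute-focused questions from the modification text, answer them against candidate images, and re-rank the retriever's candidates by consistency. A minimal toy sketch of that idea is below; `generate_questions` and `answer_vqa` are hypothetical stand-ins (not the authors' API), and images are mocked as attribute sets rather than pixels.

```python
# Toy sketch of FIRE-CIR-style question-driven re-ranking.
# generate_questions / answer_vqa are hypothetical placeholders for the
# paper's learned components; "images" are mocked as attribute sets.

def generate_questions(modification_text):
    """Derive attribute-focused questions from the modification text.
    Toy version: one question per comma-separated attribute change."""
    return [f"Does the garment show: {attr.strip()}?"
            for attr in modification_text.split(",")]

def answer_vqa(image, question):
    """Placeholder VQA model: an 'image' is a set of attribute strings,
    and the answer is whether the queried attribute is present."""
    attr = question[len("Does the garment show: "):-1]
    return attr in image

def rerank(candidates, modification_text, top_k=3):
    """Re-rank candidates by how many attribute questions they satisfy.
    Python's sort is stable, so ties keep the original
    (embedding-similarity) ranking order."""
    questions = generate_questions(modification_text)
    def score(img):
        return sum(answer_vqa(img, q) for q in questions)
    return sorted(candidates, key=score, reverse=True)[:top_k]

# Candidates as returned by a hypothetical embedding retriever;
# re-ranking promotes the one consistent with the modification.
candidates = [
    frozenset({"red", "short sleeves"}),
    frozenset({"blue", "long sleeves"}),
    frozenset({"blue", "short sleeves"}),
]
result = rerank(candidates, "blue, long sleeves", top_k=1)
print(result)
```

The key design point the paper argues for is visible even in this toy form: the re-ranking score is a sum of interpretable per-attribute checks, so a failed retrieval can be traced to the specific question a candidate answered wrongly, rather than an opaque embedding distance.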