ArXiv TLDR

Seeing is Believing: Robust Vision-Guided Cross-Modal Prompt Learning under Label Noise

arXiv:2604.09532

Zibin Geng, Xuefeng Jiang, Jia Li, Zheng Li, Tian Wen + 4 more

cs.CV, cs.AI

TLDR

VisPrompt is a vision-guided cross-modal prompt learning framework that learns robust prompts for vision-language models (VLMs) even under noisy labels.

Key contributions

  • VisPrompt: a lightweight, robust vision-guided prompt learning framework for noisy labels.
  • Employs cross-modal attention to inject stable visual semantics into prompt representations (sketched in the code after this list).
  • Introduces conditional modulation to adaptively balance visual evidence and text priors.
  • Significantly improves robustness across seven benchmarks while keeping the VLM backbone frozen and adding only a small number of trainable parameters.
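
Below is a minimal PyTorch sketch of the cross-modal attention idea from the contributions above. It is an illustration, not the authors' code: the module name VisualPromptInjection, the feature dimensions, and the residual connection are assumptions; the grounded idea is only that prompt tokens act as queries over visual patch features so they can aggregate sample-specific visual evidence.

```python
import torch
import torch.nn as nn


class VisualPromptInjection(nn.Module):
    """Illustrative sketch (not the paper's code): prompt tokens attend
    over visual patch features so each prompt token aggregates
    sample-specific visual evidence."""

    def __init__(self, prompt_dim: int = 512, visual_dim: int = 768, n_heads: int = 8):
        super().__init__()
        # project visual patch features into the prompt embedding space
        self.visual_proj = nn.Linear(visual_dim, prompt_dim)
        # prompt tokens serve as queries; projected visual patches are keys/values
        self.cross_attn = nn.MultiheadAttention(prompt_dim, n_heads, batch_first=True)

    def forward(self, prompt_tokens: torch.Tensor, patch_features: torch.Tensor) -> torch.Tensor:
        # prompt_tokens: (B, n_ctx, prompt_dim); patch_features: (B, n_patches, visual_dim)
        visual = self.visual_proj(patch_features)
        injected, _ = self.cross_attn(query=prompt_tokens, key=visual, value=visual)
        # residual connection keeps the learned text-side prompt as the base
        return prompt_tokens + injected
```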

Why it matters

Prompt learning is efficient but vulnerable to label noise, limiting its real-world applicability. This paper offers a robust, parameter-efficient solution by leveraging visual information, making VLMs more practical for noisy datasets without extensive retraining.

Original Abstract

Prompt learning is a parameter-efficient approach for vision-language models, yet its robustness under label noise remains underexplored. Visual content contains richer and more reliable semantic information and remains more robust under label noise, whereas the prompt itself is highly susceptible to it. Motivated by this intuition, we propose VisPrompt, a lightweight and robust vision-guided prompt learning framework for noisy-label settings. Specifically, we exploit a cross-modal attention mechanism to reversely inject visual semantics into prompt representations. This enables the prompt tokens to selectively aggregate visual information relevant to the current sample, thereby improving robustness by anchoring prompt learning to stable instance-level visual evidence and reducing the influence of noisy supervision. To address the instability caused by injecting visual information in the same way for all samples, despite differences in the quality of their visual cues, we further introduce a lightweight conditional modulation mechanism that adaptively controls the strength of visual information injection, striking a more robust balance between text-side semantic priors and image-side instance evidence. The proposed framework effectively suppresses noise-induced disturbances, reduces instability in prompt updates, and alleviates memorization of mislabeled samples. VisPrompt significantly improves robustness while keeping the pretrained VLM backbone frozen and introducing only a small number of additional trainable parameters. Extensive experiments under synthetic and real-world label noise demonstrate that VisPrompt generally outperforms existing baselines on seven benchmark datasets and achieves stronger robustness. Our code is publicly available at https://github.com/gezbww/Vis_Prompt.
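
As a companion to the sketch above, here is a hypothetical version of the conditional modulation described in the abstract: a small gating network conditioned on a global image feature scales how strongly the injected visual signal modifies the prompt, so samples with weak visual cues fall back toward the text-side prior. The module name, layer sizes, and the sigmoid gate are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ConditionalModulation(nn.Module):
    """Hypothetical sketch of the adaptive gating idea: an MLP conditioned
    on the image's global feature predicts a per-sample gate that scales
    how much injected visual signal reaches the prompt tokens."""

    def __init__(self, visual_dim: int = 768, prompt_dim: int = 512):
        super().__init__()
        self.gate_mlp = nn.Sequential(
            nn.Linear(visual_dim, prompt_dim),
            nn.ReLU(),
            nn.Linear(prompt_dim, 1),
            nn.Sigmoid(),  # gate in (0, 1): near 0 keeps text priors, near 1 trusts visual evidence
        )

    def forward(self, prompt_tokens: torch.Tensor, injected: torch.Tensor,
                global_visual: torch.Tensor) -> torch.Tensor:
        # prompt_tokens / injected: (B, n_ctx, prompt_dim); global_visual: (B, visual_dim)
        gate = self.gate_mlp(global_visual).unsqueeze(1)  # (B, 1, 1), broadcast over prompt tokens
        return prompt_tokens + gate * injected
```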
