ArXiv TLDR

Unlocking Patch-Level Features for CLIP-Based Class-Incremental Learning

arXiv:2605.13835

Hao Sun, Zi-Jun Ding, Da-Wei Zhou

cs.CV

TLDR

This paper introduces SPA (Semantic-guided Patch-level Alignment), which unlocks CLIP's patch-level features and aligns them with class-wise semantic descriptions, achieving state-of-the-art class-incremental learning.

Key contributions

  • Proposes SPA to leverage CLIP's rich patch-level semantic information for Class-Incremental Learning (CIL).
  • Uses GPT-5 to generate class-wise semantic descriptions that guide the selection of discriminative patch features.
  • Applies optimal transport to align the selected patch tokens with semantic tokens from the descriptions (see the sketch after this list).
  • Introduces task-specific projectors for adapting to downstream tasks, and samples pseudo-features from stored class-wise Gaussian statistics to mitigate catastrophic forgetting.
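
To make the optimal-transport step concrete, here is a minimal sketch (illustrative only, not the authors' code) of entropic OT between patch tokens and semantic tokens, solved with Sinkhorn iterations. The cosine cost, uniform marginals, shapes, and the hyperparameters `eps` and `n_iters` are all assumptions; the paper does not specify them here.

```python
import torch
import torch.nn.functional as F

def sinkhorn_alignment(patch_tokens, semantic_tokens, eps=0.1, n_iters=50):
    """Entropic OT plan matching P patch tokens to S semantic tokens.

    patch_tokens:    (P, d) patch embeddings from CLIP's vision encoder
    semantic_tokens: (S, d) text embeddings of phrases in a class description
    """
    # Cosine cost: transporting mass between agreeing tokens is cheap.
    p = F.normalize(patch_tokens, dim=-1)
    s = F.normalize(semantic_tokens, dim=-1)
    cost = 1.0 - p @ s.T                          # (P, S)

    # Uniform marginals: each patch / each phrase carries equal mass.
    a = torch.full((cost.size(0),), 1.0 / cost.size(0))
    b = torch.full((cost.size(1),), 1.0 / cost.size(1))

    K = torch.exp(-cost / eps)                    # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                      # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]            # transport plan, (P, S)

# Usage: soft-match 196 ViT patches to 8 description phrases.
plan = sinkhorn_alignment(torch.randn(196, 512), torch.randn(8, 512))
print(round(plan.sum().item(), 4))  # ≈ 1.0: a joint distribution over pairs
```

The resulting plan can then weight a patch-to-phrase alignment loss, pulling each patch embedding toward the description phrases it is matched with.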

Why it matters

Current CLIP-based CIL methods align only global image and text embeddings, overlooking the patch-level semantic information in CLIP's encoders. SPA addresses this by exploiting local features, achieving state-of-the-art continual-learning performance and advancing our understanding of how to better leverage pre-trained vision-language models for CIL.

Original Abstract

Class-Incremental Learning (CIL) enables models to continuously integrate new knowledge while mitigating catastrophic forgetting. Driven by the remarkable generalization of CLIP, leveraging pre-trained vision-language models has become a dominant paradigm in CIL. However, current work primarily focuses on aligning global image embeddings (i.e., [CLS] token) with their corresponding text prompts (i.e., [EOS] token). Despite their good performance, we find that they discard the rich patch-level semantic information inherent in CLIP's encoders. For instance, when recognizing a rabbit, local patches may encode its distinctive cues, such as long ears and a fluffy tail, which can provide complementary evidence for recognition. Based on the above observation, we propose SPA (Semantic-guided Patch-level Alignment) for CLIP-based CIL, which aims to awaken long-neglected local representations within CLIP. Specifically, for each class, we first construct representative and diverse visual samples and feed them to GPT-5 as visual guidance to generate class-wise semantic descriptions. These descriptions are used to guide the selection of discriminative patch-level visual features. Building upon these selected patches, we further employ optimal transport to align selected patch tokens with semantic tokens from class-wise descriptions, yielding a structured cross-modal alignment that improves recognition. Furthermore, we introduce task-specific projectors for effective adaptation to downstream incremental tasks, and sample pseudo-features from stored class-wise Gaussian statistics to calibrate old-class representations, thereby mitigating catastrophic forgetting. Extensive experiments demonstrate that SPA achieves state-of-the-art performance.
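
As a rough illustration of the pseudo-feature step in the abstract, the sketch below stores class-wise Gaussian statistics of features and samples pseudo-features to calibrate old-class representations. This is an assumed design, not the paper's implementation; the class names, feature dimension, and diagonal jitter are illustrative.

```python
import torch

class GaussianFeatureMemory:
    """Per-class feature statistics for rehearsal without storing raw images."""

    def __init__(self):
        self.stats = {}  # class_id -> (mean, covariance)

    def update(self, class_id, feats):
        """feats: (N, d) features of one class from the frozen encoder."""
        mean = feats.mean(dim=0)
        centered = feats - mean
        cov = centered.T @ centered / max(feats.size(0) - 1, 1)
        cov = cov + 1e-4 * torch.eye(cov.size(0))  # jitter keeps cov positive definite
        self.stats[class_id] = (mean, cov)

    def sample(self, class_id, n):
        """Draw n pseudo-features for an old class."""
        mean, cov = self.stats[class_id]
        dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
        return dist.sample((n,))                   # (n, d)

# Usage: after finishing a task, memorize its classes; while training the next
# task, mix sampled old-class pseudo-features into the classifier's batches.
memory = GaussianFeatureMemory()
memory.update(class_id=0, feats=torch.randn(500, 64))
pseudo = memory.sample(class_id=0, n=32)           # (32, 64)
```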

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.