ArXiv TLDR

Ramen: Robust Test-Time Adaptation of Vision-Language Models with Active Sample Selection

arXiv: 2604.21728

Wenxuan Bao, Yanjun Zhao, Xiyuan Yang, Jingrui He

cs.CV, cs.LG

TLDR

Ramen is a framework for robust test-time adaptation of vision-language models, handling mixed-domain shifts via active sample selection.

Key contributions

  • Handles mixed-domain shifts in vision-language models during test-time adaptation.
  • Employs active sample selection using domain consistency and prediction balance criteria.
  • Utilizes an embedding-gradient cache for efficient sample retrieval and model updates.
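The embedding-gradient cache can be sketched roughly as follows. This is a minimal illustration under assumed details, not the authors' implementation: the class name `EmbeddingGradientCache`, the FIFO eviction policy, the capacity, and plain gradient averaging are all hypothetical choices; the paper only specifies that embeddings drive retrieval and stored gradients are aggregated for updates with no extra forward or backward passes.

```python
import numpy as np

class EmbeddingGradientCache:
    """Sketch of an embedding-gradient cache: stores each past test
    sample's embedding and per-sample gradient, retrieves the most
    similar entries, and aggregates their cached gradients."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self.embeddings = []  # unit-normalized feature vectors
        self.gradients = []   # per-sample gradients w.r.t. adapted params

    def add(self, embedding, gradient):
        # Hypothetical FIFO eviction once the cache is full.
        if len(self.embeddings) >= self.capacity:
            self.embeddings.pop(0)
            self.gradients.pop(0)
        self.embeddings.append(embedding / np.linalg.norm(embedding))
        self.gradients.append(gradient)

    def retrieve(self, query, k=8):
        """Indices of the k cached samples most similar to `query`
        (cosine similarity on unit-normalized embeddings)."""
        q = query / np.linalg.norm(query)
        sims = np.array([e @ q for e in self.embeddings])
        return np.argsort(-sims)[:k]

    def aggregated_gradient(self, indices):
        """Average stored gradients of the selected samples; reusing
        cached gradients avoids extra forward/backward passes."""
        return np.mean([self.gradients[i] for i in indices], axis=0)
```

The key efficiency point is in `aggregated_gradient`: because each sample's gradient was computed once when it first arrived, later model updates only average cached vectors.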

Why it matters

Existing test-time adaptation methods assume a single test domain and degrade in real-world mixed-domain scenarios. Ramen offers a robust and efficient solution, significantly improving the reliability and applicability of vision-language models under such shifts.

Original Abstract

Pretrained vision-language models such as CLIP exhibit strong zero-shot generalization but remain sensitive to distribution shifts. Test-time adaptation adapts models during inference without access to source data or target labels, offering a practical way to handle such shifts. However, existing methods typically assume that test samples come from a single, consistent domain, while in practice, test data often include samples from mixed domains with distinct characteristics. Consequently, their performance degrades under mixed-domain settings. To address this, we present Ramen, a framework for robust test-time adaptation through active sample selection. For each incoming test sample, Ramen retrieves a customized batch of relevant samples from previously seen data based on two criteria: domain consistency, which ensures that adaptation focuses on data from similar domains, and prediction balance, which mitigates adaptation bias caused by skewed predictions. To improve efficiency, Ramen employs an embedding-gradient cache that stores the embeddings and sample-level gradients of past test images. The stored embeddings are used to retrieve relevant samples, and the corresponding gradients are aggregated for model updates, eliminating the need for any additional forward or backward passes. Our theoretical analysis provides insight into why the proposed adaptation mechanism is effective under mixed-domain shifts. Experiments on multiple image corruption and domain-shift benchmarks demonstrate that Ramen achieves strong and consistent performance, offering robust and efficient adaptation in complex mixed-domain scenarios. Our code is available at https://github.com/baowenxuan/Ramen .
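The two retrieval criteria from the abstract, domain consistency and prediction balance, can be sketched as a combined scoring rule. This is an illustrative sketch only: the function `select_batch`, the linear mixing weight `alpha`, and the specific frequency-based balance term are assumptions, not the paper's exact formulation.

```python
import numpy as np

def select_batch(query_emb, cache_embs, cache_preds, k=8, alpha=0.5):
    """Rank cached samples by (a) domain consistency: cosine similarity
    to the incoming sample's embedding, and (b) prediction balance:
    down-weighting samples whose predicted class is already
    overrepresented in the cache. Return the top-k indices."""
    q = query_emb / np.linalg.norm(query_emb)
    E = cache_embs / np.linalg.norm(cache_embs, axis=1, keepdims=True)
    consistency = E @ q  # higher = likely from a similar domain

    # Prediction balance: samples with rarer predicted classes score higher,
    # mitigating adaptation bias from skewed predictions.
    counts = np.bincount(cache_preds, minlength=cache_preds.max() + 1)
    freq = counts[cache_preds] / len(cache_preds)
    balance = 1.0 - freq

    score = alpha * consistency + (1 - alpha) * balance
    return np.argsort(-score)[:k]
```

Scoring by consistency alone would bias updates toward whichever class dominates recent predictions; the balance term counteracts that skew, which is the motivation the abstract gives for using both criteria.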
