CLIP-Inspector: Model-Level Backdoor Detection for Prompt-Tuned CLIP via OOD Trigger Inversion
Akshit Jindal, Saket Anand, Chetan Arora, Vikram Goyal
TLDR
CLIP-Inspector detects and repairs backdoors in prompt-tuned vision-language models by inverting triggers using out-of-distribution data.
Key contributions
- Detects model-level backdoors in prompt-tuned CLIP models, which existing encoder-focused methods miss.
- Reconstructs potential triggers using unlabeled out-of-distribution images to identify backdoor behavior (see the sketch after this list).
- Achieves 94% detection accuracy across diverse datasets and attacks with only 1,000 OOD images.
- Enables post-hoc repair by using reconstructed triggers to fine-tune and reduce backdoor effectiveness.
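
The trigger-reconstruction step lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of OOD-based trigger inversion in the spirit the paper describes, not the authors' released code: `model` (a prompt-tuned CLIP wrapped as an image-to-logits classifier), `ood_loader`, the soft-mask parameterization, and all hyperparameters are assumptions.

```python
# Hypothetical sketch of OOD-based trigger inversion; not the paper's code.
# Assumes `model(images) -> logits` wraps a prompt-tuned CLIP classifier and
# `ood_loader` yields batches of unlabeled OOD images scaled to [0, 1].
import torch
import torch.nn.functional as F

def invert_trigger(model, ood_loader, target_class, image_size=224,
                   epochs=1, lr=0.1, device="cuda"):
    """Optimize a bounded additive trigger that pushes unlabeled OOD images
    toward `target_class`; a collapsing loss suggests a planted backdoor."""
    model.eval()
    # Learnable trigger pattern and per-pixel mask; sigmoid keeps both in (0, 1).
    delta = torch.zeros(3, image_size, image_size,
                        device=device, requires_grad=True)
    mask = torch.zeros(1, image_size, image_size,
                       device=device, requires_grad=True)
    opt = torch.optim.Adam([delta, mask], lr=lr)
    for _ in range(epochs):
        for images in ood_loader:
            images = images.to(device)
            m = torch.sigmoid(mask)
            # Blend the trigger into each image through the soft mask.
            triggered = (1 - m) * images + m * torch.sigmoid(delta)
            logits = model(triggered)
            target = torch.full((images.size(0),), target_class,
                                device=device, dtype=torch.long)
            # Push triggered OOD inputs to the target class; penalize large
            # masks so the recovered trigger stays small and localized.
            loss = F.cross_entropy(logits, target) + 0.01 * m.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return (torch.sigmoid(delta).detach(),
            torch.sigmoid(mask).detach(), loss.item())
```

Running this once per class and flagging classes whose loss collapses (or whose recovered mask is anomalously small) is the usual decision rule in trigger-inversion detectors; per the abstract, a single epoch over roughly 1,000 OOD images is enough for CI to reconstruct effective triggers.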
Why it matters
As outsourcing VLM adaptation to MLaaS providers becomes common, verifying that a delivered model is free of backdoors is crucial. This paper provides a novel model-level verification method, CLIP-Inspector, that detects and can even repair backdoors in prompt-tuned CLIP models, protecting organizations from malicious providers and supporting safe deployment.
Original Abstract
Organisations with limited data and computational resources increasingly outsource model training to Machine Learning as a Service (MLaaS) providers, who adapt vision-language models (VLMs) such as CLIP to downstream tasks via prompt tuning rather than training from scratch. This semi-honest setting creates a security risk where a malicious provider can follow the prompt-tuning protocol yet implant a backdoor, forcing triggered inputs to be classified into an attacker-chosen class, even for out-of-distribution (OOD) data. Such backdoors leave encoders untouched, making them undetectable to existing methods that focus on encoder corruption. Other data-level methods that sanitize data before training or during inference also fail to answer the critical question, "Is the delivered model backdoored or not?" To address this model-level verification problem, we introduce CLIP-Inspector (CI), a backdoor detection method designed for prompt-tuned CLIP models. Assuming white-box access to the delivered model and a pool of unlabeled OOD images, CI reconstructs possible triggers for each class to determine if the model exhibits backdoor behaviour or not. Additionally, we demonstrate that using CI's reconstructed trigger for fine-tuning on correctly labeled triggered inputs enables us to re-align the model and reduce backdoor effectiveness. Through extensive experiments across ten datasets and four backdoor attacks, we demonstrate that CI can reconstruct effective triggers in a single epoch using only 1,000 OOD images, achieving a 94% detection accuracy (47/50 models). Compared to adapted trigger-inversion baselines, CI yields a markedly higher AUROC score (0.973 vs 0.495/0.687), thus enabling the vetting and post-hoc repair of prompt-tuned CLIP models to ensure safe deployment.
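
The post-hoc repair described in the abstract, fine-tuning on correctly labeled triggered inputs, can likewise be sketched. The following is a hypothetical illustration, not the paper's implementation: `prompt_params` (the learnable prompt vectors), `clean_loader`, and the learning rate are assumptions, and `delta`/`mask` are the outputs of the inversion sketch above.

```python
# Hypothetical repair sketch: stamp the reconstructed trigger onto images
# that keep their *correct* labels, then fine-tune only the learnable prompt
# so the model learns to ignore the trigger. All names are assumptions.
import torch
import torch.nn.functional as F

def repair(model, prompt_params, clean_loader, delta, mask,
           epochs=1, lr=1e-3, device="cuda"):
    opt = torch.optim.Adam(prompt_params, lr=lr)  # encoders stay frozen
    for _ in range(epochs):
        for images, labels in clean_loader:
            images, labels = images.to(device), labels.to(device)
            # Apply the inverted trigger while keeping the true labels,
            # re-aligning triggered inputs with their correct classes.
            triggered = (1 - mask) * images + mask * delta
            loss = F.cross_entropy(model(triggered), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Training on triggered-but-correctly-labeled pairs directly counteracts the poisoned association the attacker planted, which is why the paper reports reduced backdoor effectiveness after this step.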