CSC: Turning the Adversary's Poison against Itself

arXiv:2604.21416

Yuchen Shi, Xin Guo, Huajie Chen, Tianqing Zhu, Bo Liu, et al.

cs.CR, cs.AI

TLDR

CSC neutralizes backdoor attacks by identifying and relabeling poisoned data clusters, achieving near-zero attack success with minimal accuracy loss.

Key contributions

  • Identifies robust backdoor patterns by analyzing early-stage latent space clustering of poisoned samples (see the segregation sketch after this list).
  • Introduces CSC, a defense that segregates and relabels poisoned data to a virtual class, neutralizing backdoors.
  • Achieves near-zero attack success rates against 12 attacks, outperforming 9 SOTA defenses with minimal accuracy loss.
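
The abstract names the ingredients of the segregation stage (early-epoch latent features, DBSCAN clustering, class-diversity and density metrics) without specifying them. The following is a minimal Python sketch under assumed heuristics: `flag_suspicious_samples`, `purity_cutoff`, and `max_spread` are illustrative stand-ins, not the paper's actual metrics or values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def flag_suspicious_samples(features, labels, eps=0.5, min_samples=10,
                            purity_cutoff=0.9, max_spread=1.0):
    """Cluster early-epoch latent features and flag compact, label-pure
    clusters as candidate poisoned data.

    features: (N, D) array of latent features from an early training epoch
    labels:   (N,) array of (possibly poisoned) training labels
    Returns a boolean mask over the N samples.
    """
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    suspicious = np.zeros(len(labels), dtype=bool)
    for cid in np.unique(cluster_ids):
        if cid == -1:  # DBSCAN labels noise points -1; skip them
            continue
        members = cluster_ids == cid
        # Triggered samples all carry the adversary's target label, so a
        # cluster dominated by a single label is a candidate.
        _, counts = np.unique(labels[members], return_counts=True)
        purity = counts.max() / counts.sum()
        # The trigger acts as a dominant shared feature, so poisoned
        # clusters tend to be unusually compact (dense).
        centroid = features[members].mean(axis=0)
        spread = np.linalg.norm(features[members] - centroid, axis=1).mean()
        if purity >= purity_cutoff and spread <= max_spread:
            suspicious |= members
    return suspicious
```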

Why it matters

Backdoor attacks pose significant threats to deep learning models, and existing defenses often compromise model utility. CSC neutralizes these attacks with near-zero attack success rates and minimal clean-accuracy loss, offering a practical and effective defense that advances trustworthy AI.

Original Abstract

Poisoning-based backdoor attacks pose significant threats to deep neural networks by embedding triggers in training data, causing models to misclassify triggered inputs as adversary-specified labels while maintaining performance on clean data. Existing poison restraint-based defenses often suffer from inadequate detection against specific attack variants and compromise model utility through unlearning methods that lead to accuracy degradation. This paper conducts a comprehensive analysis of backdoor attack dynamics during model training, revealing that poisoned samples form isolated clusters in latent space early on, with triggers acting as dominant features distinct from benign ones. Leveraging these insights, we propose Cluster Segregation Concealment (CSC), a novel poison suppression defense. CSC first trains a deep neural network via standard supervised learning while segregating poisoned samples through feature extraction from early epochs, DBSCAN clustering, and identification of anomalous clusters based on class diversity and density metrics. In the concealment stage, identified poisoned samples are relabeled to a virtual class, and the model's classifier is fine-tuned using cross-entropy loss to replace the backdoor association with a benign virtual linkage, preserving overall accuracy. Evaluated on four benchmark datasets against twelve poisoning-based attacks, CSC outperforms nine state-of-the-art defenses by reducing average attack success rates to near zero with minimal clean accuracy loss. Contributions include robust backdoor pattern identification, an effective concealment mechanism, and strong empirical validation, advancing trustworthy artificial intelligence.
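
Below is a minimal PyTorch sketch of the concealment stage as the abstract describes it: flagged samples are relabeled to a virtual class and the classifier is fine-tuned with cross-entropy. The `model.fc` head name, the head-only freezing, and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

def add_virtual_class(model):
    """Append one output unit to the linear head for the virtual class,
    copying the existing class weights so clean behavior is preserved."""
    old_fc = model.fc
    new_fc = nn.Linear(old_fc.in_features, old_fc.out_features + 1)
    with torch.no_grad():
        new_fc.weight[:old_fc.out_features].copy_(old_fc.weight)
        new_fc.bias[:old_fc.out_features].copy_(old_fc.bias)
    model.fc = new_fc
    return old_fc.out_features  # index of the new virtual class

def conceal(model, loader, epochs=5, lr=1e-3, device="cpu"):
    """Fine-tune only the classifier head with cross-entropy. The loader
    is assumed to yield (inputs, labels) where flagged samples have
    already been relabeled to the virtual class, so the trigger is
    re-associated with a benign virtual linkage rather than the
    adversary's target label."""
    model.to(device).train()
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.fc.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```

One natural deployment choice (an assumption here, not stated in the abstract) is to ignore the virtual-class logit at inference time, so triggered inputs are absorbed by the virtual class instead of the target label.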
