Low Rank Adaptation for Adversarial Perturbation
Han Liu, Shanghao Shi, Yevgeniy Vorobeychik, Chongjie Zhang, Ning Zhang
TLDR
This paper shows that adversarial perturbations possess an inherently low-rank structure, and leverages that insight to make black-box attacks significantly more efficient and effective.
Key contributions
- Demonstrates, theoretically and empirically, that adversarial perturbations exhibit an inherent low-rank structure.
- Introduces a two-step method to improve black-box adversarial attacks using this low-rank property.
- Projects gradients into a low-dimensional subspace and confines the perturbation search to it, boosting attack efficiency and effectiveness (a rough sketch of the projection step follows this list).
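As a rough illustration of that projection step, here is a minimal sketch in PyTorch, assuming a white-box reference model and auxiliary data: input gradients from the reference model are stacked, and an SVD yields a low-rank basis for the perturbation subspace. The names `reference_model`, `aux_loader`, `loss_fn`, and `rank` are placeholders, not the authors' API, and the paper's exact construction may differ.

```python
import torch

def estimate_lowrank_basis(reference_model, aux_loader, loss_fn, rank=32, device="cpu"):
    """Collect input gradients of a white-box reference (surrogate) model on
    auxiliary data, then take the top right singular vectors as a low-rank
    basis for the perturbation subspace. Illustrative sketch only."""
    reference_model.eval()
    rows = []
    for x, y in aux_loader:
        x = x.to(device).requires_grad_(True)
        loss = loss_fn(reference_model(x), y.to(device))
        (g,) = torch.autograd.grad(loss, x)
        rows.append(g.flatten(start_dim=1))   # one flattened gradient per example
    G = torch.cat(rows)                       # shape: (num_examples, num_pixels)
    # The top-`rank` right singular vectors span the dominant gradient directions.
    _, _, Vh = torch.linalg.svd(G, full_matrices=False)
    return Vh[:rank]                          # shape: (rank, num_pixels)
```

The rows of the returned matrix span the low-rank subspace in which the second step searches for a perturbation.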
Why it matters
Understanding the low-rank nature of adversarial perturbations opens new avenues for both attacking and defending AI models. This work focuses on black-box attacks, which are often query-intensive, and makes them more practical and effective.
Original Abstract
Low-Rank Adaptation (LoRA), which leverages the insight that model updates typically reside in a low-dimensional space, has significantly improved the training efficiency of Large Language Models (LLMs) by updating neural network layers using low-rank matrices. Since the generation of adversarial examples is an optimization process analogous to model training, this naturally raises the question: Do adversarial perturbations exhibit a similar low-rank structure? In this paper, we provide both theoretical analysis and extensive empirical investigation across various attack methods, model architectures, and datasets to show that adversarial perturbations indeed possess an inherently low-rank structure. This insight opens up new opportunities for improving both adversarial attacks and defenses. We mainly focus on leveraging this low-rank property to improve the efficiency and effectiveness of black-box adversarial attacks, which often suffer from excessive query requirements. Our method follows a two-step approach. First, we use a reference model and auxiliary data to guide the projection of gradients into a low-dimensional subspace. Next, we confine the perturbation search in black-box attacks to this low-rank subspace, significantly improving the efficiency and effectiveness of the adversarial attacks. We evaluated our approach across a range of attack methods, benchmark models, datasets, and threat models. The results demonstrate substantial and consistent improvements in the performance of our low-rank adversarial attacks compared to conventional methods.
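To make the abstract's second step concrete, below is a minimal sketch of how a query-based attack might be confined to that subspace: a SimBA-style random search over the `rank`-dimensional coefficient vector, lifted back to pixel space through the basis. This is an assumption-laden illustration, not the paper's actual procedure; `query_loss` (one black-box query per call), the step schedule, and the L∞ budget `eps` are all hypothetical.

```python
import torch

def lowrank_blackbox_attack(x, basis, query_loss, steps=1000, eps=8/255, step_size=0.05):
    """Search for an adversarial perturbation inside span(basis) using only
    black-box loss queries. `x` is a single image tensor in [0, 1]; `basis`
    is the (rank, num_pixels) matrix from estimate_lowrank_basis above.
    Illustrative sketch only."""
    rank = basis.shape[0]
    coeff = torch.zeros(rank)                 # perturbation coordinates in the subspace
    best = query_loss(x)                      # one query on the clean input
    for _ in range(steps):
        i = torch.randint(rank, (1,)).item()  # pick a random subspace direction
        for sign in (1.0, -1.0):
            trial = coeff.clone()
            trial[i] += sign * step_size
            # Lift subspace coordinates back to pixel space; keep the L-inf budget.
            delta = (trial @ basis).view_as(x).clamp(-eps, eps)
            loss = query_loss((x + delta).clamp(0, 1))
            if loss > best:                   # keep the step if it raises the loss
                coeff, best = trial, loss
                break
    delta = (coeff @ basis).view_as(x).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)
```

Because the search runs over `rank` coefficients rather than every pixel, each query explores a direction already known to matter for the reference model, which is the intuition behind the reported query savings.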