Attention Is Where You Attack
Aviral Srivastava, Sourav Panda
TLDR
A new white-box attack, the Attention Redistribution Attack (ARA), bypasses LLM safety alignment by redirecting attention in safety-critical heads using only a handful of nonsemantic tokens.
Key contributions
- Introduces Attention Redistribution Attack (ARA), a white-box method to bypass LLM safety alignment.
- ARA crafts nonsemantic tokens that redirect attention away from safety-relevant positions, optimized via Gumbel-softmax (see the sketch after this list).
- Achieves a 36% attack success rate (ASR) on Mistral-7B and 30% on LLaMA-3 with as few as 5 tokens and 500 optimization steps.
- Shows that safety emerges from attention routing rather than from localized, removable heads: redirecting a head's attention succeeds where ablating the head fails.
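
The sketch below shows the core optimization loop as the abstract describes it: relax the adversarial suffix into distributions over the vocabulary, sample soft one-hot vectors with Gumbel-softmax, and minimize the attention mass that targeted layers place on safety-relevant prompt positions. This is a minimal illustration, not the authors' implementation; the target layers, the choice of safety-relevant positions, the temperature, and the learning rate are all hypothetical placeholders.

```python
# Minimal ARA-style sketch (PyTorch + transformers), NOT the authors' code.
# Hypothetical choices: target_layers, safety_pos, tau, lr, and the prompt.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.1"
tok = AutoTokenizer.from_pretrained(name)
# "eager" attention so output_attentions=True returns per-head weights
model = AutoModelForCausalLM.from_pretrained(name, attn_implementation="eager")
model.eval().requires_grad_(False)  # gradients flow to the inputs, not the weights

prompt_ids = tok("<harmful prompt here>", return_tensors="pt").input_ids
emb = model.get_input_embeddings().weight             # (vocab, d_model)
k = 5                                                 # adversarial suffix length
logits = torch.randn(k, emb.shape[0], requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

target_layers = [14, 15, 16]                  # hypothetical "safety-heavy" layers
safety_pos = list(range(prompt_ids.shape[1]))  # whole prompt, for illustration

for step in range(500):
    soft = F.gumbel_softmax(logits, tau=0.5, hard=False)  # (k, vocab) on the simplex
    adv = soft @ emb                                      # differentiable soft embeddings
    x = torch.cat([emb[prompt_ids[0]], adv], dim=0).unsqueeze(0)
    out = model(inputs_embeds=x, output_attentions=True)
    # Attention mass the final query position puts on safety-relevant positions,
    # summed over all heads of the targeted layers; minimizing it redirects attention.
    loss = sum(out.attentions[l][0, :, -1, safety_pos].sum() for l in target_layers)
    opt.zero_grad()
    loss.backward()
    opt.step()

adv_ids = logits.argmax(dim=-1)  # discretize the relaxed suffix into real tokens
print(tok.decode(adv_ids))
```

A real attack would also have to select the safety-critical heads and positions (the paper identifies them first), restrict the loss to those heads rather than whole layers, and verify that the discretized suffix still transfers.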
Why it matters
This paper introduces a novel and effective method for jailbreaking LLMs by targeting their internal attention mechanisms rather than the semantic content of prompts. Its central insight is that safety alignment behaves as an emergent property of attention routing, not as a set of removable components. This work highlights a new vulnerability and a new direction for robust safety research.
Original Abstract
Safety-aligned large language models rely on RLHF and instruction tuning to refuse harmful requests, yet the internal mechanisms implementing safety behavior remain poorly understood. We introduce the Attention Redistribution Attack (ARA), a white-box adversarial attack that identifies safety-critical attention heads and crafts nonsemantic adversarial tokens that redirect attention away from safety-relevant positions. Unlike prior jailbreak methods operating at the semantic or output-logit level, ARA targets the geometry of softmax attention on the probability simplex using Gumbel-softmax optimization over targeted heads. Across LLaMA-3-8B-Instruct, Mistral-7B-Instruct-v0.1, and Gemma-2-9B-it, ARA bypasses safety alignment with as few as 5 tokens and 500 optimization steps, achieving 36% ASR on Mistral-7B and 30% on LLaMA-3 against 200 HarmBench prompts, while Gemma-2 remains at 1%. Our principal mechanistic finding is a dissociation between ablation and redistribution: zeroing out the top-ranked safety heads produces at most 1 flip among 39 to 50 baseline refusals, while ARA targeting the corresponding safety-heavy layers flips 72/200 prompts on Mistral-7B and 60/200 on LLaMA-3. This suggests that safety is not localized in these heads as removable components, but emerges from the attention routing they perform. Removing a head allows compensation through the residual stream, while redirecting its attention propagates a corrupted signal downstream.
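
To make the ablation-versus-redistribution dissociation concrete, a head-ablation probe of the kind the abstract contrasts against might look like the following, reusing `model` from the sketch above. This is a hypothetical illustration, not the paper's code; the module path `self_attn.o_proj` matches LLaMA/Mistral-style architectures in transformers, and the layer, head index, and head dimension are placeholders.

```python
# Hypothetical head-ablation probe (PyTorch forward pre-hook), not the paper's code.
# Zeroing a head this way leaves the residual stream free to compensate, which is
# the paper's explanation for why ablation barely flips any refusals.
import torch

def ablate_head(attn_module, head_idx, head_dim):
    """Zero head `head_idx` in the per-head concatenated input to o_proj."""
    def pre_hook(module, args):
        hidden = args[0].clone()  # (batch, seq, n_heads * head_dim)
        hidden[..., head_idx * head_dim:(head_idx + 1) * head_dim] = 0.0
        return (hidden,) + args[1:]
    return attn_module.o_proj.register_forward_pre_hook(pre_hook)

# Usage: ablate head 7 of layer 14 (indices are illustrative), generate, restore.
handle = ablate_head(model.model.layers[14].self_attn, head_idx=7, head_dim=128)
# ... run model.generate(...) on a refused prompt and check whether it flips ...
handle.remove()
```

Per the abstract, this kind of ablation flips at most 1 of 39 to 50 baseline refusals, while ARA's redistribution over the same safety-heavy layers flips 72/200 prompts on Mistral-7B and 60/200 on LLaMA-3.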