What Drives Representation Steering? A Mechanistic Case Study on Steering Refusal
Stephen Cheng, Sarah Wiegreffe, Dinesh Manocha
TLDR
This paper mechanistically investigates LLM representation steering, revealing that steering vectors primarily interact with the attention OV circuit and can be highly sparsified.
Key contributions
- Proposes a multi-token activation patching framework to study steering mechanisms.
- Discovers steering vectors primarily interact with the attention OV circuit, largely ignoring QK.
- Shows steering vectors can be sparsified by 90-99% while retaining most performance.
- Identifies functionally interchangeable circuits leveraged by different steering methodologies.
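The steering setup these findings build on can be illustrated with a minimal sketch of activation addition: a fixed vector is added to the residual-stream activations of every token at one layer. The function name, the toy dimensions, and the scaling parameter `alpha` here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def apply_steering(hidden_states, steering_vector, alpha=1.0):
    """Add a scaled steering vector to each token's residual-stream
    activation at a single layer (a standard activation-addition setup;
    the paper's precise intervention may differ)."""
    return hidden_states + alpha * steering_vector

# Toy example: 4 tokens, model dimension 8 (illustrative sizes only).
rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 8))   # stand-in for one layer's activations
vec = rng.normal(size=(8,))        # stand-in for a refusal steering vector
steered = apply_steering(hidden, vec, alpha=2.0)
```

In practice the vector is derived from model activations (e.g. a difference of means over refusal vs. non-refusal prompts) and injected via a forward hook at the chosen layer.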
Why it matters
Understanding how steering vectors act inside a model is crucial for effective and reliable LLM alignment. This work supplies mechanistic evidence that steering operates chiefly through the attention OV circuit and shows that steering vectors can be heavily sparsified, improving both the interpretability and the efficiency of LLM control.
Original Abstract
Applying steering vectors to large language models (LLMs) is an efficient and effective model alignment technique, but we lack an interpretable explanation for how it works; specifically, what internal mechanisms steering vectors affect and how this results in different model outputs. To investigate the causal mechanisms underlying the effectiveness of steering vectors, we conduct a comprehensive case study on refusal. We propose a multi-token activation patching framework and discover that different steering methodologies leverage functionally interchangeable circuits when applied at the same layer. These circuits reveal that steering vectors primarily interact with the attention mechanism through the OV circuit while largely ignoring the QK circuit: freezing all attention scores during steering drops performance by only 8.75% across two model families. A mathematical decomposition of the steered OV circuit further reveals semantically interpretable concepts, even in cases where the steering vector itself does not. Leveraging the activation patching results, we show that steering vectors can be sparsified by up to 90-99% while retaining most performance, and that different steering methodologies agree on a subset of important dimensions.
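The 90-99% sparsification result can be sketched as keeping only a small fraction of a steering vector's dimensions and zeroing the rest. The magnitude-based selection below is a simplifying assumption for illustration; the paper selects important dimensions via its activation patching results, not raw magnitude.

```python
import numpy as np

def sparsify(vec, keep_frac=0.05):
    """Zero out all but the largest-magnitude dimensions of a steering
    vector. keep_frac=0.05 keeps 5% of dims, i.e. 95% sparsification.
    (Magnitude ranking is an illustrative stand-in for the paper's
    patching-based importance criterion.)"""
    k = max(1, int(round(keep_frac * vec.size)))
    keep_idx = np.argsort(np.abs(vec))[-k:]
    sparse = np.zeros_like(vec)
    sparse[keep_idx] = vec[keep_idx]
    return sparse

# Toy example: a 100-dim "steering vector" reduced to its top 5 dims.
rng = np.random.default_rng(0)
vec = rng.normal(size=(100,))
sparse = sparsify(vec, keep_frac=0.05)
```

The sparsified vector is then applied in place of the dense one; the paper's finding is that most of the steering effect survives this reduction.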