ArXiv TLDR

Compiling Activation Steering into Weights via Null-Space Constraints for Stealthy Backdoors

arXiv: 2604.12359

Rui Yin, Tianxu Han, Naen Xu, Changjiang Li, Ping He + 6 more

cs.CR, cs.CL

TLDR

This paper introduces a method for injecting stealthy and reliable backdoors into safety-aligned LLMs by compiling activation steering vectors into model weights via null-space constraints.

Key contributions

  • Injects reliable LLM backdoors by targeting internal representations instead of surface tokens.
  • Compiles a steering vector (compliant vs. refusal) into persistent weight modifications.
  • Ensures stealthiness and utility preservation via null-space constraints on the weight edit.
  • Admits a closed-form solution and requires only a small set of examples, making the attack efficient.
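The null-space idea in the last two bullets can be sketched as a rank-1 weight edit: project the triggered activation onto the null space of the clean activations, so the edit fires only on the trigger. This is a minimal illustration with random stand-in activations; all names and shapes are assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 32  # hidden size, number of clean calibration examples

# Hypothetical stand-ins for hidden activations at one edited layer.
H_clean = rng.normal(size=(d, n))   # activations on clean prompts
h_trig  = rng.normal(size=d)        # activation on a triggered prompt
v_steer = rng.normal(size=d)        # compliant-minus-refusal steering vector

# Projector onto the null space of the clean activations:
# P @ h == 0 for any h in the span of H_clean's columns.
P = np.eye(d) - H_clean @ np.linalg.pinv(H_clean)

# Closed-form rank-1 edit: dormant on clean inputs, adds v_steer when triggered.
k = P @ h_trig
dW = np.outer(v_steer, k) / (h_trig @ k)

print(np.abs(dW @ H_clean).max())        # ~0: the edit is dormant on clean inputs
print(np.allclose(dW @ h_trig, v_steer)) # True: the trigger receives the steer
```

Because `dW`'s rows lie in the null space of the clean activations, adding it to an existing weight matrix leaves clean behavior (and hence benign utility) numerically unchanged, which is what makes the backdoor hard to detect under standard evaluation.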

Why it matters

The paper demonstrates a more reliable and stealthy way to plant backdoors in safety-aligned LLMs. By targeting internal representations rather than surface tokens, and by constraining the weight edit to the null space of clean activations, it exposes a significant supply-chain attack surface and motivates the development of robust defenses against such attacks.

Original Abstract

Safety-aligned large language models (LLMs) are increasingly deployed in real-world pipelines, yet this deployment also enlarges the supply-chain attack surface: adversaries can distribute backdoored checkpoints that behave normally under standard evaluation but jailbreak when a hidden trigger is present. Recent post-hoc weight-editing methods offer an efficient approach to injecting such backdoors by directly modifying model weights to map a trigger to an attacker-specified response. However, existing methods typically optimize a token-level mapping that forces an affirmative prefix (e.g., "Sure"), which does not guarantee sustained harmful output -- the model may begin with apparent agreement yet revert to safety-aligned refusal within a few decoding steps. We address this reliability gap by shifting the backdoor objective from surface tokens to internal representations. We extract a steering vector that captures the difference between compliant and refusal behaviors, and compile it into a persistent weight modification that activates only when the trigger is present. To preserve stealthiness and benign utility, we impose a null-space constraint so that the injected edit remains dormant on clean inputs. The method is efficient, requiring only a small set of examples and admitting a closed-form solution. Across multiple safety-aligned LLMs and jailbreak benchmarks, our method achieves high triggered attack success while maintaining non-triggered safety and general utility.
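The steering vector mentioned in the abstract is typically extracted as a difference of mean activations between the two behaviors. A minimal sketch, using random stand-in activations (the layer choice, scale `alpha`, and variable names are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # hidden size of the chosen layer

# Hypothetical hidden states collected from paired prompts.
H_comply = rng.normal(loc=0.5, size=(d, 20))   # activations during compliant responses
H_refuse = rng.normal(loc=-0.5, size=(d, 20))  # activations during refusals

# Difference-of-means steering vector, normalized to unit length.
v_steer = H_comply.mean(axis=1) - H_refuse.mean(axis=1)
v_steer /= np.linalg.norm(v_steer)

# Classic activation steering adds the vector at inference time;
# the paper's contribution is compiling this effect into the weights instead.
alpha = 4.0
h = rng.normal(size=d)
h_steered = h + alpha * v_steer
```

The inference-time addition in the last lines is what the compiled weight edit reproduces, but persistently and only in the presence of the trigger.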
