Different Paths to Harmful Compliance: Behavioral Side Effects and Mechanistic Divergence Across LLM Jailbreaks
TL;DR
Different jailbreak methods drive LLMs to similarly high harmful compliance, yet produce vastly different behavioral profiles and internal mechanisms.
Key contributions
- Three jailbreak routes achieve near-ceiling harmful compliance: harmful supervised fine-tuning (SFT), harmful reinforcement learning with verifiable rewards (RLVR), and refusal-suppressing abliteration.
- RLVR-jailbroken models preserve explicit harm recognition, and a reflective safety scaffold suppresses their harmful behavior to near baseline.
- SFT-jailbroken models show the largest collapse in explicit safety judgments, the highest behavioral drift, and substantial capability loss on standard benchmarks.
- Mechanistic analysis indicates that RLVR retargets policy behavior while preserving safety geometry, SFT causes broad distributed drift, and abliteration deletes a localized refusal feature (see the sketch after this list).
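To make the abliteration route concrete, here is a minimal numerical sketch of directional ablation: a single "refusal direction" is removed from a weight matrix by projecting it out. All names and values (W, r, d_model) are illustrative placeholders rather than the paper's implementation, and the direction here is random instead of being estimated from model activations.

```python
import numpy as np

# Hypothetical sketch of directional ablation ("abliteration"):
# remove a refusal direction r from a weight matrix W so that no
# output of the ablated matrix has a component along r.
rng = np.random.default_rng(0)
d_model = 8

# Stand-in for a model weight matrix that writes into the residual stream.
W = rng.standard_normal((d_model, d_model))

# Stand-in refusal direction; in published abliteration recipes this is
# typically the normalized difference of mean activations on harmful
# versus harmless prompts. Here it is random, for illustration only.
r = rng.standard_normal(d_model)
r /= np.linalg.norm(r)

# Ablate: W' = (I - r r^T) W removes the component along r.
W_abliterated = (np.eye(d_model) - np.outer(r, r)) @ W

# Sanity check: projections of W' onto r are ~0.
assert np.allclose(r @ W_abliterated, 0.0, atol=1e-10)
```

In practice this projection would be applied to every matrix that writes into the residual stream, which is consistent with the paper's description of abliteration as localized refusal-feature deletion.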
Why it matters
This paper shows that different LLM jailbreak methods produce distinct internal changes and behavioral profiles despite similarly harmful outputs. Understanding these divergences is critical for designing targeted safety interventions and repair strategies.
Original Abstract
Open-weight language models can be rendered unsafe through several distinct interventions, but the resulting models may differ substantially in capabilities, behavioral profile, and internal failure mode. We study behavioral and mechanistic properties of jailbroken models across three unsafe routes: harmful supervised fine-tuning (SFT), harmful reinforcement learning with verifiable rewards (RLVR), and refusal-suppressing abliteration. All three routes achieve near-ceiling harmful compliance, but they diverge once we move beyond direct harmfulness. RLVR-jailbroken models show minimal degradation and preserve explicit harm recognition in a structured self-audit: they are able to identify harmful prompts and describe how a safe LLM should respond, yet they comply with the harmful request. With RLVR, harmful behavior is strongly suppressed by a reflective safety scaffold: when a harmful prompt is prepended with an instruction to reflect on safety standards, harmful behavior drops close to the baseline. Category-specific RLVR jailbreaks generalize broadly across harmfulness domains. Models jailbroken with SFT show the largest collapse in explicit safety judgments, the highest behavioral drift, and a substantial capability loss on standard benchmarks. Abliteration is family-dependent in both self-audit and response to a reflective safety scaffold. Mechanistic and repair analyses further separate the routes: abliteration is consistent with localized refusal-feature deletion, RLVR with preserved safety geometry but retargeted policy behavior, and SFT with broader distributed drift. Targeted repair partially recovers RLVR-jailbroken models, but has little effect on SFT-jailbroken models. Together, these results show that jailbreaks can produce vastly different properties despite similar harmfulness, with models jailbroken via RLVR showing remarkable similarity to the base model.
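The reflective safety scaffold described in the abstract is a prompt-level intervention: an instruction to reflect on safety standards is prepended to the harmful request. Below is a minimal sketch of such a scaffold; the wording of REFLECT_PREFIX is a hypothetical stand-in, since the paper's exact instruction text is not reproduced here.

```python
# Hypothetical reflective safety scaffold: prepend a reflection
# instruction to the user prompt before sending it to the model.
# The prefix wording is an assumption, not the paper's exact text.
REFLECT_PREFIX = (
    "Before answering, reflect on whether the request below is consistent "
    "with standard AI safety policies. If it is not, refuse.\n\n"
)

def scaffold(prompt: str) -> str:
    """Wrap a user prompt in the reflective safety prefix."""
    return REFLECT_PREFIX + prompt

# The scaffolded prompt is what gets sent to the jailbroken model.
print(scaffold("Describe how a safe LLM should handle dangerous requests."))
```

Per the abstract, this simple wrapper drops RLVR-jailbroken models' harmful behavior close to baseline, while its effect on abliterated models depends on the model family.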