ArXiv TLDR

Understanding the Effects of Safety Unalignment on Large Language Models

arXiv: 2604.02574

John T. Halloran

cs.CR · cs.AI · cs.LG

TLDR

This study compares how two safety-unalignment methods affect LLM refusal rates and malicious capabilities, finding that weight orthogonalization poses greater risks than jailbreak-tuning.

Key contributions

  • Compares two unalignment methods, jailbreak-tuning (JT) and weight orthogonalization (WO), across six popular LLMs of various sizes (a toy sketch of the WO edit follows this list).
  • WO-unaligned models are more capable of malicious activity, less prone to hallucinations, and better retain their original natural-language performance.
  • WO models are more effective at state-of-the-art adversarial and cyber attacks than JT models.
  • Supervised fine-tuning effectively limits the adversarial attack abilities enabled by WO without drastically affecting hallucination rates or natural-language performance (see the SFT sketch after the abstract).
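
In the refusal-direction line of work, WO edits the model's weights directly: a single "refusal direction" is estimated from activations on harmful vs. harmless prompts, then projected out of every weight matrix that writes into the residual stream, so the model can no longer express refusal. Below is a minimal toy sketch of that projection, using random tensors as stand-ins for a real model and a real estimated direction:

```python
import torch

def orthogonalize(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction r out of a weight matrix W whose
    rows index the residual-stream dimension, so W can no longer write
    any component along r: W' = (I - r r^T) W."""
    r = r / r.norm()                  # unit refusal direction, shape (d_model,)
    return W - torch.outer(r, r) @ W

# Toy usage; in practice every stream-writing matrix (embeddings,
# attention output projections, MLP down-projections) gets edited.
d_model, d_mlp = 4096, 11008
r = torch.randn(d_model)              # random stand-in, NOT a real refusal direction
W_down = torch.randn(d_model, d_mlp)  # e.g. an MLP down-projection
W_edited = orthogonalize(W_down, r)

print(((r / r.norm()) @ W_edited).abs().max())  # ~0: nothing left along r
```

Because this is a one-shot linear edit rather than further training, it leaves the rest of the weights untouched, which may help explain why WO models better retain their original capabilities.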

Why it matters

Safety alignment is critical for LLMs, but unalignment methods can disable these guardrails. This research shows that weight orthogonalization poses a particular risk: it produces malicious LLMs that are more capable and less prone to hallucinations than jailbreak-tuned ones. Understanding these effects is vital for developing robust safety measures.

Original Abstract

Safety alignment has become a critical step to ensure LLMs refuse harmful requests while providing helpful and harmless responses. However, despite the ubiquity of safety alignment for deployed frontier models, two separate lines of recent work--jailbreak-tuning (JT) and weight orthogonalization (WO)--have shown that safety guardrails may be largely disabled, resulting in LLMs which comply with harmful requests they would normally refuse. In spite of far-reaching safety implications, analysis has largely been limited to refusal rates of each unalignment method in isolation, leaving their relative effects on adversarial LLM capabilities unknown. To fill this gap, we study the impact of unaligning six popular LLMs of various sizes across a large number of malicious and benign tasks, using both JT and WO. Across the evaluated models, we show that while refusal degradation is split between the two methods, WO produces LLMs far more capable of aiding in malicious activity; in contrast to JT, the majority of WO unaligned models are far less prone to hallucinations, better retain their original natural-language performance, and are more effective at state-of-the-art adversarial and cyber attacks. To thus help mitigate the malicious risks of WO unalignment, we conclude by showing that supervised fine-tuning effectively limits the adversarial attack abilities enabled by WO, without drastically affecting hallucination rates or natural language performance.
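
To give a concrete sense of the mitigation, the recipe described above is ordinary supervised fine-tuning of the WO-unaligned model on safety data (e.g. harmful prompts paired with refusals, plus benign instruction-response pairs). Here is a minimal sketch using Hugging Face transformers; the checkpoint path, data file, and hyperparameters are placeholders, not the paper's actual setup:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "path/to/wo-unaligned-model"   # hypothetical local checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
if tok.pad_token is None:                   # some LLM tokenizers lack a pad token
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each example's "text" field holds a full prompt+response string,
# e.g. a harmful request followed by a refusal.
ds = load_dataset("json", data_files="safety_sft.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="realigned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4,
                           learning_rate=2e-5),
    train_dataset=ds,
    # mlm=False yields standard causal-LM labels (shifted input_ids)
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

The paper's closing result is that this kind of standard fine-tuning curbs WO-enabled attacks without drastically hurting hallucination rates or natural-language performance.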
