ArXiv TLDR

Towards Identification and Intervention of Safety-Critical Parameters in Large Language Models

arXiv: 2604.08297

Weiwei Qi, Zefeng Wu, Tianhang Zheng, Zikang Zhang, Xiaojun Jia + 2 more

cs.CR

TLDR

This paper introduces the Expected Safety Impact (ESI) framework to identify safety-critical parameters in LLMs and proposes two targeted intervention methods, SET and SPA, built on it.

Key contributions

  • Introduces the Expected Safety Impact (ESI) framework to quantify how individual parameters affect LLM safety (a scoring sketch follows this list).
  • Reveals distinct safety-critical parameter patterns: value matrices (V) and middle-layer MLPs in dense LLMs vs. late-layer MLPs in MoE models.
  • Proposes Safety Enhancement Tuning (SET) to align unsafe LLMs by updating only a few critical parameters.
  • Introduces Safety Preserving Adaptation (SPA) to maintain LLM safety during capability-oriented fine-tuning.
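
The digest does not reproduce the ESI formula itself. A common first-order proxy for how much a single weight contributes to a loss is the saliency score |w · ∂L_safety/∂w|; the PyTorch sketch below uses that stand-in, so the scoring rule, function names, and refusal-loss placeholder are illustrative assumptions, not the authors' implementation.

```python
import torch

def safety_impact_scores(model, safety_loss_fn, batch):
    """Per-parameter safety-impact proxy.

    NOTE: hypothetical stand-in for the paper's ESI score. It ranks
    parameters by |w * dL_safety/dw|, a standard first-order (Taylor)
    importance estimate, not the authors' exact definition.
    """
    model.zero_grad()
    loss = safety_loss_fn(model, batch)  # e.g. a refusal loss on harmful prompts
    loss.backward()
    return {
        name: (p.detach() * p.grad.detach()).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

def top_k_mask(scores, k_frac=0.01):
    """Boolean masks marking the top `k_frac` of weights by score."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    # k-th smallest score, chosen so that ~k_frac of weights lie above it
    k = max(1, int((1.0 - k_frac) * flat.numel()))
    threshold = flat.kthvalue(k).values
    return {name: s >= threshold for name, s in scores.items()}
```

Pooling scores globally and keeping the top 1% mirrors the abstract's "1% of model weights" budget; for billion-parameter models one would subsample or threshold per tensor rather than sorting the full score vector.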

Why it matters

This paper addresses the critical need to understand LLM safety mechanisms by quantifying each parameter's impact on safety. That enables precise, efficient interventions: enhancing safety in unaligned models (SET) and preserving it during capability-oriented fine-tuning (SPA), a step toward more reliable and controllable LLM safety.
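
Under the same assumptions, SET's "update only a few safety-critical parameters" maps naturally onto masked gradient updates: compute a safety loss on the unaligned model, then zero every gradient outside the critical mask before stepping. The helper below is a hedged sketch, not the paper's released code.

```python
def set_update_step(model, optimizer, safety_loss, critical_mask):
    """One SET-style step: fine-tune only the safety-critical weights.

    `critical_mask` maps parameter names to boolean masks (True =
    safety-critical, e.g. the top-1% mask sketched above). With the
    other gradients zeroed, plain SGD leaves non-critical weights
    untouched; stateful optimizers such as Adam would need parameter
    groups or state resets to guarantee the same.
    """
    optimizer.zero_grad()
    safety_loss.backward()
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        mask = critical_mask.get(name)
        if mask is None:
            p.grad.zero_()  # no score for this tensor: freeze it entirely
        else:
            p.grad.mul_(mask.to(p.grad.dtype))
    optimizer.step()
```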

Original Abstract

Ensuring Large Language Model (LLM) safety is crucial, yet the lack of a clear understanding about safety mechanisms hinders the development of precise and reliable methodologies for safety intervention across diverse tasks. To better understand and control LLM safety, we propose the Expected Safety Impact (ESI) framework for quantifying how different parameters affect LLM safety. Based on ESI, we reveal distinct safety-critical patterns across different LLM architectures: In dense LLMs, many safety-critical parameters are located in value matrices (V) and MLPs in middle layers, whereas in Mixture-of-Experts (MoE) models, they shift to the late-layer MLPs. Leveraging ESI, we further introduce two targeted intervention paradigms for safety enhancement and preservation, i.e., Safety Enhancement Tuning (SET) and Safety Preserving Adaptation (SPA). SET can align unsafe LLMs by updating only a few safety-critical parameters, effectively enhancing safety while preserving original performance. SPA safeguards well-aligned LLMs during capability-oriented intervention (e.g., instruction tuning) by preventing disruption of safety-critical weights, allowing the LLM to acquire new abilities and maintain safety capabilities. Extensive evaluations on different LLMs demonstrate that SET can reduce the attack success rates of unaligned LLMs by over 50% with only a 100-iteration update on 1% of model weights. SPA can limit the safety degradation of aligned LLMs within 1% after a 1,000-iteration instruction fine-tuning on different tasks. Our code is available at: https://github.com/ZJU-LLM-Safety/SafeWeights-ACL.
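
The abstract describes SPA as preventing disruption of safety-critical weights during instruction tuning. Assuming the same mask representation as above, the complementary sketch drops gradients on the critical weights instead of keeping only them (again an illustrative reading, not the authors' implementation).

```python
def spa_update_step(model, optimizer, task_loss, critical_mask):
    """One SPA-style step: learn a new task everywhere except the
    safety-critical weights, which stay frozen."""
    optimizer.zero_grad()
    task_loss.backward()
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        mask = critical_mask.get(name)
        if mask is not None:
            # Complement of the SET mask: critical weights get zero gradient.
            p.grad.mul_((~mask).to(p.grad.dtype))
    optimizer.step()
```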
