LLM Safety From Within: Detecting Harmful Content with Internal Representations
Difan Jiao, Yilun Liu, Ye Yuan, Zhenwei Tang, Linfeng Du, et al.
TLDR
SIREN is a lightweight guard model that detects harmful content by leveraging an LLM's internal representations, outperforming state-of-the-art open-source guard models.
Key contributions
- Detects harmful content by harnessing features from internal LLM layers, not just the terminal layer.
- Substantially outperforms state-of-the-art open-source guard models on benchmarks.
- Achieves superior performance with 250 times fewer trainable parameters.
- Enables real-time streaming detection and significantly improves inference efficiency.
Why it matters
This paper introduces SIREN, a novel approach to LLM safety that improves both detection accuracy and inference efficiency. By tapping into an LLM's internal states, it offers a more robust and scalable solution for content moderation, a meaningful step toward deploying safer and more responsible AI systems.
Original Abstract
Guard models are widely used to detect harmful content in user prompts and LLM responses. However, state-of-the-art guard models rely solely on terminal-layer representations and overlook the rich safety-relevant features distributed across internal layers. We present SIREN, a lightweight guard model that harnesses these internal features. By identifying safety neurons via linear probing and combining them through an adaptive layer-weighted strategy, SIREN builds a harmfulness detector from LLM internals without modifying the underlying model. Our comprehensive evaluation shows that SIREN substantially outperforms state-of-the-art open-source guard models across multiple benchmarks while using 250 times fewer trainable parameters. Moreover, SIREN exhibits superior generalization to unseen benchmarks, naturally enables real-time streaming detection, and significantly improves inference efficiency compared to generative guard models. Overall, our results highlight LLM internal states as a promising foundation for practical, high-performance harmfulness detection.
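To make the recipe in the abstract concrete, here is a minimal sketch of the two ingredients it names: a linear probe per layer, whose largest-magnitude weights pick out candidate "safety neurons", and an adaptive, softmax-weighted combination of the per-layer probe scores. Everything below (module names, layer and dimension sizes, the synthetic data) is an illustrative assumption, not SIREN's actual implementation.

```python
# Sketch: per-layer linear probes over LLM hidden states, combined with
# learned layer weights. Shapes and data are synthetic stand-ins.
import torch
import torch.nn as nn

NUM_LAYERS, HIDDEN_DIM, TOP_K = 12, 64, 16  # assumed sizes, not from the paper

class LayerWeightedProbe(nn.Module):
    """One linear probe per layer + a learned weighting over layers."""
    def __init__(self, num_layers: int, hidden_dim: int):
        super().__init__()
        self.probes = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(num_layers)
        )
        # Adaptive layer weights: softmax keeps them a convex combination.
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, num_layers, hidden_dim)
        per_layer = torch.stack(
            [probe(hidden_states[:, i]).squeeze(-1)
             for i, probe in enumerate(self.probes)],
            dim=1,
        )  # (batch, num_layers): one harmfulness score per layer
        weights = torch.softmax(self.layer_logits, dim=0)
        return (per_layer * weights).sum(dim=1)  # one logit per example

def safety_neurons(probe: nn.Linear, top_k: int) -> torch.Tensor:
    """Rank hidden dimensions by |probe weight| -- one simple way to read
    'safety neurons' off a trained linear probe."""
    return probe.weight.abs().squeeze(0).topk(top_k).indices

# --- toy training loop on synthetic activations ---
torch.manual_seed(0)
X = torch.randn(256, NUM_LAYERS, HIDDEN_DIM)   # stand-in for LLM hidden states
y = (X[:, NUM_LAYERS // 2, 0] > 0).float()     # synthetic harmfulness labels

model = LayerWeightedProbe(NUM_LAYERS, HIDDEN_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("learned layer weights:", torch.softmax(model.layer_logits, 0).detach())
print("safety neurons (mid layer):",
      safety_neurons(model.probes[NUM_LAYERS // 2], TOP_K))
```

Because a classifier like this scores hidden states directly, without generating any tokens, it can in principle be applied to each prefix of a response as it is produced, which is the property the abstract describes as real-time streaming detection.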