Segment-Level Coherence for Robust Harmful Intent Probing in LLMs
Xuanli He, Bilgehan Sel, Faizan Ali, Jenny Bao, Hoagy Cunningham + 1 more
TLDR
A new streaming probing method for LLMs uses segment-level coherence to robustly detect harmful intent, especially in CBRN domains, reducing false alarms.
Key contributions
- Introduces a streaming probe using segment-level coherence, requiring multiple evidence tokens for robust detection.
- Achieves a 35.55% higher true-positive rate at a fixed 1% false-positive rate, with substantial AUROC gains.
- Demonstrates that probing Attention or MLP activations consistently outperforms residual-stream features.
- Detects harmful intent with over 98.85% AUROC, even against novel character-level ciphers and obfuscated attacks.
Why it matters
LLMs face adaptive jailbreaking, especially in high-stakes CBRN domains. This method detects harmful intent more robustly and reliably by reducing false alarms triggered when sensitive terms appear in benign contexts, making it a useful step toward stronger LLM safety and security.
Original Abstract
Large Language Models (LLMs) are increasingly exposed to adaptive jailbreaking, particularly in high-stakes Chemical, Biological, Radiological, and Nuclear (CBRN) domains. Although streaming probes enable real-time monitoring, they still make systematic errors. We identify a core issue: existing methods often rely on a few high-scoring tokens, leading to false alarms when sensitive CBRN terms appear in benign contexts. To address this, we introduce a streaming probing objective that requires multiple evidence tokens to consistently support a prediction, rather than relying on isolated spikes. This encourages more robust detection based on aggregated signals instead of single-token cues. At a fixed 1% false-positive rate, our method improves the true-positive rate by 35.55% relative to strong streaming baselines. We further observe substantial gains in AUROC, even when starting from near-saturated baseline performance (AUROC = 97.40%). We also show that probing Attention or MLP activations consistently outperforms residual-stream features. Finally, even when adversarial fine-tuning enables novel character-level ciphers, harmful intent remains detectable: probes developed for the base LLMs can be applied "plug-and-play" to these obfuscated attacks, achieving an AUROC of over 98.85%.
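To make the core idea concrete, here is a minimal sketch (not the paper's actual objective or code) of the difference between spike-based and evidence-aggregating detection. It assumes per-token probe scores are already available; the function names, the top-k mean aggregation, and the toy score values are all illustrative choices, standing in for whatever segment-level coherence objective the paper actually trains.

```python
import numpy as np

def spike_score(token_scores):
    # Spike-style baseline: decide based on the single highest-scoring token.
    # One sensitive term in a benign sentence can trigger a false alarm.
    return float(np.max(token_scores))

def aggregated_score(token_scores, k=3):
    # Illustrative aggregation (hypothetical, not the paper's method):
    # average the top-k token scores, so a harmful prediction needs
    # several tokens of consistent evidence, not one isolated spike.
    scores = np.sort(np.asarray(token_scores, dtype=float))
    return float(scores[-k:].mean())

# Benign text with one spiky token (e.g. a CBRN term used innocuously).
benign = [0.1, 0.1, 0.95, 0.1, 0.1, 0.1]
# Harmful request with consistently elevated scores across tokens.
harmful = [0.7, 0.8, 0.75, 0.85, 0.8, 0.7]

print(spike_score(benign))       # high: flags the benign text
print(aggregated_score(benign))  # low: the spike is diluted
print(aggregated_score(harmful)) # high: sustained evidence survives
```

Under this toy aggregation, the benign spike no longer clears the threshold while the harmful sequence still does, which is the false-alarm reduction the abstract describes.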