Automatically Finding and Validating Unexpected Side-Effects of Interventions on Language Models
Quintin Pope, Ajay Hayagreeve Balaji, Jacques Thibodeau, Xiaoli Fern
TL;DR
This paper introduces an automated pipeline to find and validate unexpected side-effects of interventions on large language models.
Key contributions
- Automated, contrastive evaluation pipeline for auditing LLM intervention impacts.
- Compares multi-token generations from base and intervened models across aligned prompts.
- Generates human-readable, statistically validated hypotheses on model behavioral differences.
- Successfully identifies both intended and unexpected behavioral shifts in real-world LLM interventions.
Why it matters
This pipeline offers a crucial tool for understanding the true impact of LLM interventions. By automatically surfacing both intended and unexpected behavioral changes, it enhances transparency and trustworthiness in model development. This helps developers prevent unintended consequences and build more robust, reliable AI systems.
Original Abstract
We present an automated, contrastive evaluation pipeline for auditing the behavioral impact of interventions on large language models. Given a base model $M_1$ and an intervention model $M_2$, our method compares their free-form, multi-token generations across aligned prompt contexts and produces human-readable, statistically validated natural-language hypotheses describing how the models differ, along with recurring themes that summarize patterns across validated hypotheses. We evaluate the approach in a synthetic setting by injecting known behavioral changes and showing that the pipeline reliably recovers them. We then apply it to three real-world interventions (reasoning distillation, knowledge editing, and unlearning), demonstrating that the method surfaces both intended and unexpected behavioral shifts, distinguishes large from subtle interventions, and does not hallucinate differences when effects are absent or misaligned with the prompt bank. Overall, the pipeline provides a statistically grounded and interpretable tool for post-hoc auditing of intervention-induced changes in model behavior.
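The abstract's core loop, comparing aligned generations from $M_1$ and $M_2$ and statistically validating a natural-language hypothesis about their difference, can be illustrated with a minimal sketch. Everything here is hypothetical: the stand-in generators, the keyword-based "judge," and the sign test are placeholders for real model calls, an LLM judge, and the paper's actual validation procedure.

```python
import math

# Hypothetical stand-ins for the base model M1 and intervened model M2.
# A real pipeline would call actual model generation APIs here.
def generate_m1(prompt):
    return f"Answer to {prompt}."

def generate_m2(prompt):
    # Simulated side-effect of the intervention: M2 adds hedging language.
    return f"Answer to {prompt}, though I may be wrong."

# Hypothetical judge for one candidate hypothesis
# ("the intervened model hedges more"); a real judge would be an LLM.
def exhibits_hypothesis(text):
    return "may be wrong" in text

def sign_test_p_value(successes, trials):
    """One-sided sign test: P(X >= successes) under the null p = 0.5."""
    return sum(math.comb(trials, k) for k in range(successes, trials + 1)) / 2**trials

def validate_hypothesis(prompts):
    """Among prompts where the two models disagree on the judged behavior,
    count how often only M2 exhibits it, then test against chance."""
    m2_only, disagreements = 0, 0
    for p in prompts:
        base = exhibits_hypothesis(generate_m1(p))
        intervened = exhibits_hypothesis(generate_m2(p))
        if base != intervened:
            disagreements += 1
            m2_only += int(intervened)
    p_val = sign_test_p_value(m2_only, disagreements) if disagreements else 1.0
    return m2_only, disagreements, p_val

prompts = [f"question {i}" for i in range(20)]
hits, n, p = validate_hypothesis(prompts)
```

Under the null hypothesis that the behavior is equally likely in either model, a small p-value supports keeping the hypothesis; a non-significant result discards it, which is how a pipeline like this avoids hallucinating differences when effects are absent.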