Detecting Safety Violations Across Many Agent Traces
Adam Stein, Davis Brown, Hamed Hassani, Mayur Naik, Eric Wong
TLDR
Meerkat uses clustering and agentic search to detect rare, complex safety violations across many agent traces, outperforming existing methods.
Key contributions
- Combines clustering with agentic search to detect safety violations specified in natural language.
- Effectively finds rare, complex failures visible only when analyzing multiple agent traces together.
- Significantly improves detection of misuse, misalignment, and task gaming over baselines.
- Uncovered widespread developer cheating on a top agent benchmark and nearly 4x more reward hacking on CyBench.
Why it matters
Meerkat gives auditors a scalable way to detect rare, complex safety violations across large collections of agent traces. By surfacing failures that only become visible when traces are analyzed together, it catches problems that per-trace judges and fixed monitors miss, as demonstrated by its audit findings on a top agent benchmark and CyBench.
Original Abstract
To identify safety violations, auditors often search over large sets of agent traces. This search is difficult because failures are often rare, complex, and sometimes even adversarially hidden and only detectable when multiple traces are analyzed together. These challenges arise in diverse settings such as misuse campaigns, covert sabotage, reward hacking, and prompt injection. Existing approaches struggle here for several reasons. Per-trace judges miss failures that only become visible across traces, naive agentic auditing does not scale to large trace collections, and fixed monitors are brittle to unanticipated behaviors. We introduce Meerkat, which combines clustering with agentic search to uncover violations specified in natural language. Through structured search and adaptive investigation of promising regions, Meerkat finds sparse failures without relying on seed scenarios, fixed workflows, or exhaustive enumeration. Across misuse, misalignment, and task gaming settings, Meerkat significantly improves detection of safety violations over baseline monitors, discovers widespread developer cheating on a top agent benchmark, and finds nearly 4x more examples of reward hacking on CyBench than previous audits.
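The abstract describes the core idea only at a high level: group related traces so that a pattern which looks benign in any single trace becomes visible in aggregate, then investigate only the promising groups. A minimal toy sketch of that cross-trace principle is below. The traces, the keyword lexicon, the grouping rule, and the threshold are all illustrative assumptions; Meerkat's actual clustering and agentic-search components are not specified here.

```python
from collections import defaultdict

# Toy agent traces: (trace_id, text) pairs. In a real audit these would be
# full tool-call transcripts, and grouping would use learned embeddings
# rather than the keyword matching used in this sketch.
TRACES = [
    ("t1", "user asks agent to export customer emails to pastebin"),
    ("t2", "agent summarizes a news article about climate policy"),
    ("t3", "user asks agent to export customer emails to a private server"),
    ("t4", "agent books a restaurant reservation"),
    ("t5", "user asks agent to export customer phone numbers to pastebin"),
]

# Hypothetical lexicon of terms worth clustering on.
SUSPICIOUS_TERMS = {"export", "customer", "pastebin"}

def cluster_key(text: str) -> frozenset:
    """Group traces by which suspicious terms they mention — a crude
    stand-in for embedding-based clustering."""
    return frozenset(tok for tok in text.split() if tok in SUSPICIOUS_TERMS)

def find_cross_trace_violations(traces, min_cluster_size=2):
    """Flag clusters where the same pattern recurs across multiple traces:
    each trace alone may look benign, but the repetition is the signal
    (e.g. a coordinated misuse campaign)."""
    clusters = defaultdict(list)
    for trace_id, text in traces:
        key = cluster_key(text)
        if key:  # skip traces mentioning no suspicious terms
            clusters[key].append(trace_id)
    # "Investigate" only clusters large enough to suggest a campaign.
    return {tuple(sorted(key)): ids
            for key, ids in clusters.items()
            if len(ids) >= min_cluster_size}

flagged = find_cross_trace_violations(TRACES)
print(flagged)  # {('customer', 'export', 'pastebin'): ['t1', 't5']}
```

Note that t1 and t5 are flagged together even though neither exfiltrates the same field: the shared destination and action across traces is what triggers investigation, which is exactly the signal a per-trace judge cannot see.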