ArXiv TLDR

When No Benchmark Exists: Validating Comparative LLM Safety Scoring Without Ground-Truth Labels

arXiv: 2605.06652

Sushant Gautam, Finn Schwall, Annika Willoch Olstad, Fernando Vallecillos Ruiz, Birk Torpmann-Hagen + 4 more

cs.LG · cs.AI · cs.CL

TLDR

This paper introduces a method for validating comparative LLM safety scoring without ground-truth labels, using an instrumental-validity chain.

Key contributions

  • Formalizes "benchmarkless comparative safety scoring" for LLMs.
  • Introduces an instrumental-validity chain for validating LLM safety scores without ground-truth labels.
  • Validates the chain with SimpleAudit, achieving AUROC 0.89–1.00 on a Norwegian safety pack (see the sketch after this list).
  • Shows that LLM safety depends on scenario category and risk measure, so results require comprehensive reporting rather than a single ranking.
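The first link of this chain, responsiveness, is essentially a discrimination test: severity scores from a known-safe target should separate cleanly from those of a deliberately weakened (abliterated) one. A minimal sketch of such an AUROC check, with entirely synthetic scores and illustrative names (not SimpleAudit's actual code):

```python
# Sketch of the responsiveness check: scores for a known-safe target
# should separate from an abliterated one. All names and data below
# are illustrative, not taken from the paper's instrument.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-scenario severity scores in [0, 1]; higher = less safe.
safe_scores = rng.beta(2, 8, size=40)         # safe target: low severity
abliterated_scores = rng.beta(8, 2, size=40)  # abliterated target: high severity

labels = np.concatenate([np.zeros(40), np.ones(40)])  # 1 = abliterated
scores = np.concatenate([safe_scores, abliterated_scores])

auroc = roc_auc_score(labels, scores)
print(f"AUROC (safe vs. abliterated): {auroc:.2f}")
# Values in the paper's reported 0.89-1.00 range would indicate
# that the instrument responds to the controlled safety contrast.
```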

Why it matters

This paper addresses a critical challenge: evaluating LLM safety when no labeled benchmark exists for the relevant language, sector, or regulatory regime. It provides a structured, auditable framework for comparative safety scoring, offering a practical path for organizations that must compare candidate models before such a benchmark is available. The findings emphasize that safety is scenario- and risk-dependent, arguing for nuanced reporting rather than a single ranking.

Original Abstract

Many deployments must compare candidate language models for safety before a labeled benchmark exists for the relevant language, sector, or regulatory regime. We formalize this setting as benchmarkless comparative safety scoring and specify the contract under which a scenario-based audit can be interpreted as deployment evidence. Scores are valid only under a fixed scenario pack, rubric, auditor, judge, sampling configuration, and rerun budget. Because no labels are available, we replace ground-truth agreement with an instrumental-validity chain: responsiveness to a controlled safe-versus-abliterated contrast, dominance of target-driven variance over auditor and judge artifacts, and stability across reruns. We instantiate the chain in SimpleAudit, a local-first scoring instrument, and validate it on a Norwegian safety pack. Safe and abliterated targets separate with AUROC values between 0.89 and 1.00, target identity is the dominant variance component ($\eta^2 \approx 0.52$), and severity profiles stabilize by ten reruns. Applying the same chain to Petri shows that it admits both tools. The substantial differences arise upstream of the chain, in claim-contract enforcement and deployment fit. A Norwegian public-sector procurement case comparing Borealis and Gemma 3 demonstrates the resulting evidence in practice: the safer model depends on scenario category and risk measure. Consequently, scores, matched deltas, critical rates, uncertainty, and the auditor and judge used must be reported together rather than collapsed into a single ranking.
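The remaining two links of the chain are similarly mechanical to check. A rough sketch of the variance-dominance (η²) and rerun-stability computations, assuming a grid of severity scores indexed by target, judge, and rerun; the layout, effect sizes, and names below are assumptions for illustration, not the paper's data model:

```python
# Sketch of the other two chain links: eta-squared for target-driven
# variance and score stability across reruns. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_judges, n_reruns = 4, 3, 10

# Hypothetical severity scores where the target effect dominates
# judge variation and rerun noise, as the chain requires.
target_effect = rng.normal(0, 1.0, size=(n_targets, 1, 1))
judge_effect = rng.normal(0, 0.3, size=(1, n_judges, 1))
noise = rng.normal(0, 0.3, size=(n_targets, n_judges, n_reruns))
scores = target_effect + judge_effect + noise

# Eta-squared for target identity: between-target sum of squares
# over total sum of squares (one-way decomposition by target).
grand_mean = scores.mean()
target_means = scores.mean(axis=(1, 2))
ss_between = scores[0].size * ((target_means - grand_mean) ** 2).sum()
ss_total = ((scores - grand_mean) ** 2).sum()
print(f"eta^2 (target): {ss_between / ss_total:.2f}")

# Rerun stability: does each target's running mean settle as reruns
# accumulate? Small successive drift suggests the budget suffices.
per_rerun = scores.mean(axis=1)  # shape (n_targets, n_reruns)
running_means = np.cumsum(per_rerun, axis=1) / np.arange(1, n_reruns + 1)
drift = np.abs(np.diff(running_means, axis=1)).max(axis=0)
print("max drift between successive rerun counts:", np.round(drift, 3))
```

In this framing, the chain passes when the target term dominates the decomposition (the paper reports η² ≈ 0.52) and the drift shrinks toward zero by the rerun budget (ten reruns in the paper).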
