ArXiv TLDR

Hidden Failures in Robustness: Why Supervised Uncertainty Quantification Needs Better Evaluation

arXiv:2604.11662

Joe Stacey, Hadas Orgad, Kentaro Inui, Benjamin Heinzerling, Nafise Sadat Moosavi

cs.CL

TLDR

A systematic study of over 2,000 supervised uncertainty probes for LLMs finds that current methods lack robustness, especially out of distribution, and that probe inputs (layer choice and token aggregation) drive robustness more than probe architecture.

Key contributions

  • Systematically evaluated over 2,000 supervised uncertainty probes across models, tasks, and OOD settings, varying the representation layer, feature type, and token-aggregation strategy.
  • Showed that current probe methods exhibit poor robustness, particularly for long-form generations and under distribution shift.
  • Found that probe robustness is driven more by the probe's inputs (layer and aggregation) than by architectural choices.
  • Demonstrated that middle-layer representations and features aggregated across response tokens generalize more reliably under OOD conditions; a minimal probe sketch follows this list.
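The summary above describes the probing setup only in prose, so here is a concrete illustration, a minimal sketch rather than the authors' exact pipeline: mean-pool one middle layer's hidden states across a response's tokens and fit a logistic-regression probe to predict whether the response is correct. The model (gpt2), the layer index, and the two toy examples are assumptions made purely to keep the example runnable.

```python
# Minimal supervised uncertainty probe sketch (illustrative, not the
# paper's exact setup): mean-pool a middle layer's hidden states over
# response tokens, then fit a logistic-regression probe on correctness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"   # stand-in; the paper studies larger LLMs
LAYER = 6             # an assumed middle layer (gpt2 has 12 blocks)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def pooled_feature(text: str) -> torch.Tensor:
    """Mean-pool one middle layer's hidden states across all tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states[0] is the embedding layer; LAYER indexes blocks.
    h = out.hidden_states[LAYER]       # [1, seq_len, hidden_dim]
    return h.mean(dim=1).squeeze(0)    # aggregate over tokens

# Toy training data: model responses labeled 1 if correct/faithful.
texts = ["Paris is the capital of France.",
         "The moon is made of green cheese."]
labels = [1, 0]

X = torch.stack([pooled_feature(t) for t in texts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict_proba(X)[:, 1])    # probe's P(correct) per response
```

Swapping LAYER for the final layer, or replacing the mean-pool with a single token's hidden state, reproduces the two design axes the paper finds most consequential under distribution shift.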

Why it matters

This paper exposes critical robustness gaps in LLM uncertainty quantification, especially under distribution shift, and offers concrete design guidance: probe inputs, notably middle-layer representations and token aggregation, matter more than probe architecture. Since reliable uncertainty estimates underpin trustworthy LLM applications, better evaluation of probe robustness is a practical necessity.

Original Abstract

Recent work has shown that the hidden states of large language models contain signals useful for uncertainty estimation and hallucination detection, motivating a growing interest in efficient probe-based approaches. Yet it remains unclear how robust existing methods are, and which probe designs provide uncertainty estimates that are reliable under distribution shift. We present a systematic study of supervised uncertainty probes across models, tasks, and OOD settings, training over 2,000 probes while varying the representation layer, feature type, and token aggregation strategy. Our evaluation highlights poor robustness in current methods, particularly in the case of long-form generations. We also find that probe robustness is driven less by architecture and more by the probe inputs. Middle-layer representations generalise more reliably than final-layer hidden states, and aggregating across response tokens is consistently more robust than relying on single-token features. These differences are often largely invisible in-distribution but become more important under distribution shift. Informed by our evaluation, we explore a simple hybrid back-off strategy for improving robustness, arguing that better evaluation is a prerequisite for building more robust probes.
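The abstract mentions "a simple hybrid back-off strategy" without specifying it. One plausible reading, sketched below purely as an assumption (the threshold tau, the band around 0.5, and the fallback signal are all illustrative choices, not the paper's rule), is to trust the probe when its score is decisive and otherwise back off to a model-intrinsic signal such as mean token log-probability.

```python
import math

# Hypothetical hybrid back-off (illustrative only; the abstract does not
# spell out the rule): use the probe score when it is far from chance,
# otherwise fall back to an intrinsic signal (mean token log-probability).
def hybrid_uncertainty(probe_score: float,
                       mean_logprob: float,
                       tau: float = 0.2) -> float:
    """Return an uncertainty estimate in [0, 1].

    probe_score  -- probe's P(correct), e.g. from the sketch above
    mean_logprob -- average token log-probability of the response
    tau          -- assumed half-width of the back-off band around 0.5
    """
    if abs(probe_score - 0.5) >= tau:
        return 1.0 - probe_score          # probe is confident: use it
    # Probe is near chance: back off. Map the (negative) mean log-prob
    # to a rough per-token probability in (0, 1) as a confidence proxy.
    intrinsic_conf = math.exp(mean_logprob)
    return 1.0 - intrinsic_conf

print(hybrid_uncertainty(0.92, -0.8))  # probe decisive -> 0.08
print(hybrid_uncertainty(0.55, -0.8))  # probe unsure -> ~0.55 via back-off
```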
