ORCE: Order-Aware Alignment of Verbalized Confidence in Large Language Models
Chen Li, Xiaoling Hu, Songzhu Zheng, Jiawei Zhou, Chao Chen
TLDR
ORCE improves the calibration of LLM verbalized confidence by decoupling confidence estimation from answer generation and optimizing the relative ordering of confidence across responses.
Key contributions
- Decouples verbalized confidence estimation from answer generation to preserve answer accuracy.
- Estimates confidence *after* answer generation, conditioned on the fixed question-answer pair (a minimal sketch of this two-stage flow appears after this list).
- Uses a sampling-based surrogate and rank-based RL to align confidence with correctness likelihood.
- Significantly improves calibration and failure prediction without sacrificing answer accuracy.
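A minimal sketch of the decoupled two-stage flow, assuming a generic `generate(prompt)` completion call and a 0-to-1 numeric confidence format; both the helper and the prompts are illustrative stand-ins, not the paper's actual implementation:

```python
def generate(prompt: str) -> str:
    """Placeholder for any LLM completion call (API client or local model)."""
    raise NotImplementedError

def answer_then_confidence(question: str) -> tuple[str, float]:
    # Stage 1: produce the answer with no confidence instructions, so the
    # answer-generation process is not perturbed by calibration training.
    answer = generate(f"Question: {question}\nAnswer:")

    # Stage 2: elicit verbalized confidence conditioned on the *fixed*
    # question-answer pair; only this stage is optimized for calibration.
    confidence_text = generate(
        f"Question: {question}\nAnswer: {answer}\n"
        "How confident are you that this answer is correct? "
        "Reply with a single number between 0 and 1."
    )
    return answer, float(confidence_text.strip())
```

Because the answer is fixed before confidence is elicited, any training signal applied to the confidence stage cannot directly alter what the model answers, which is how accuracy is preserved.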
Why it matters
Reliable confidence estimation is vital for deploying LLMs. This paper offers a novel approach that improves the trustworthiness of LLM outputs by making their verbalized confidence better track actual correctness. It helps models signal when they don't know, which is crucial for real-world applications.
Original Abstract
Large language models (LLMs) often produce answers with high certainty even when they are incorrect, making reliable confidence estimation essential for deployment in real-world scenarios. Verbalized confidence, where models explicitly state their confidence in natural language, provides a flexible and user-facing uncertainty signal that can be applied even when token logits are unavailable. However, existing verbalized-confidence methods often optimize answer generation and confidence generation jointly, which can cause confidence-alignment objectives to interfere with answer accuracy. In this work, we propose a decoupled and order-aware framework for verbalized confidence calibration. Our method first generates an answer and then estimates confidence conditioned on the fixed question-answer pair, allowing confidence optimization without directly perturbing the answer-generation process. To align confidence with correctness likelihood, we construct a sampling-based surrogate from multiple model completions and optimize rank-based reinforcement learning objectives that encourage responses with higher estimated correctness likelihood to receive higher verbalized confidence. Experiments on reasoning and knowledge-intensive benchmarks show that our method improves calibration and failure prediction performance while largely preserving answer accuracy. These results demonstrate that verbalized confidence can be more reliably aligned by decoupling confidence estimation from answer generation and optimizing the relative ordering of confidence across responses.
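To make the training signal concrete, here is a minimal sketch of one plausible instantiation under stated assumptions: `sample_answers`, `agrees`, `k=8`, and `margin=0.1` are hypothetical stand-ins, and a simple margin-ranking penalty is shown in place of the paper's actual rank-based RL objective:

```python
from typing import Callable

def correctness_surrogate(
    question: str,
    answer: str,
    sample_answers: Callable[[str, int], list[str]],
    agrees: Callable[[str, str], bool],
    k: int = 8,
) -> float:
    """Sampling-based surrogate: the fraction of k resampled completions
    that agree with `answer`, used as a self-consistency-style proxy for
    its correctness likelihood."""
    samples = sample_answers(question, k)
    return sum(agrees(answer, s) for s in samples) / k

def pairwise_rank_penalty(conf_hi: float, conf_lo: float,
                          margin: float = 0.1) -> float:
    """Margin-ranking penalty for a response pair already ordered by the
    surrogate: the response with the higher estimated correctness
    likelihood should verbalize confidence at least `margin` above the
    other; the penalty is zero when that ordering holds."""
    return max(0.0, margin - (conf_hi - conf_lo))
```

In training, such a penalty (or a reward derived from it) would be applied over pairs of responses ordered by the surrogate, pushing the model toward verbalized confidence scores whose relative ordering matches estimated correctness.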