Online Reasoning Calibration: Test-Time Training Enables Generalizable Conformal LLM Reasoning
Cai Zhou, Zekai Wang, Menghua Wu, Qianyu Julie Zhu, Flora C. Shi + 4 more
TLDR
ORCA uses test-time training and conformal prediction to calibrate LLM reasoning, significantly boosting efficiency and generalization while providing valid confidence estimates.
Key contributions
- Introduces Online Reasoning Calibration (ORCA) for LLM sampling, combining conformal prediction and test-time training.
- Employs a meta-learning procedure to adapt calibration for each input, ensuring valid confidence under distributional shifts.
- Achieves significant efficiency gains (compute savings up to 47.5% in distribution, and up to 67.0% on out-of-domain MATH-500) with improved generalization across reasoning tasks.
- Provides theoretical guarantees on conformal risks while maintaining low empirical error rates.
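The conformal side of the framework can be illustrated with the standard split conformal quantile computation that such methods build on. This is a generic sketch of the underlying technique, not ORCA's implementation; the nonconformity scores and the stopping use are illustrative stand-ins:

```python
import numpy as np

def conformal_threshold(cal_scores, delta=0.1):
    """Split conformal prediction: pick a score threshold from a held-out
    calibration set so that, under exchangeability, a fresh example's
    nonconformity score exceeds it with probability at most delta."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile level (the standard split-CP formula).
    q_level = np.ceil((n + 1) * (1 - delta)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

# Toy usage: a sampler could stop drawing further reasoning traces once a
# candidate answer's nonconformity score falls below the calibrated threshold.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)  # stand-in nonconformity scores
tau = conformal_threshold(cal_scores, delta=0.1)
```

A static threshold like this is exactly what breaks under distribution shift, which motivates the online, per-input calibration described in the abstract below.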
Why it matters
Current LLM test-time scaling is costly due to miscalibration. ORCA addresses this by dynamically calibrating LLM reasoning, making advanced LLM capabilities more accessible and efficient. This work offers a path to more generalizable and cost-effective deployment of powerful language models.
Original Abstract
While test-time scaling has enabled large language models to solve highly difficult tasks, state-of-the-art results come at exorbitant compute costs. These inefficiencies can be attributed to the miscalibration of post-trained language models, and the lack of calibration in popular sampling techniques. Here, we present Online Reasoning Calibration (ORCA), a framework for calibrating the sampling process that draws upon conformal prediction and test-time training. Specifically, we introduce a meta-learning procedure that updates the calibration module for each input. This allows us to provide valid confidence estimates under distributional shift, e.g. in thought patterns that occur across different stages of reasoning, or in prompt distributions between model development and deployment. ORCA not only provides theoretical guarantees on conformal risks, but also empirically shows higher efficiency and generalization across different reasoning tasks. At risk level $\delta = 0.1$, ORCA improves Qwen2.5-32B efficiency on in-distribution tasks with savings up to 47.5% with supervised labels and 40.7% with self-consistency labels. Under zero-shot out-of-domain settings, it improves MATH-500 savings from 24.8% of the static calibration baseline to 67.0% while maintaining a low empirical error rate, and the same trend holds across model families and downstream benchmarks. Our code is publicly available at https://github.com/wzekai99/ORCA.
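ORCA's per-input meta-learning update is more involved than this, but the flavor of keeping a risk level valid under shifting data can be sketched with the classic adaptive conformal inference recursion. This is a simplified stand-in, not ORCA's actual update rule; the step size `gamma` and the binary error stream are illustrative:

```python
def adaptive_risk_update(delta_t, err_t, gamma=0.01, target=0.1):
    """One step of online calibration: raise the working risk level when
    recent errors run below the target (tighter prediction sets are safe),
    and lower it when errors exceed the target. This is the adaptive
    conformal inference recursion, not ORCA's meta-learned update."""
    return delta_t + gamma * (target - err_t)

# Simulate a stream whose empirical error rate matches the 10% target:
# the working risk level should hover around its starting point.
delta_t = 0.1
for err in [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] * 10:  # one miss per ten steps
    delta_t = adaptive_risk_update(delta_t, err)
```

The fixed point of this recursion is the configured target error rate, which is the same kind of long-run validity that static calibration loses when the test distribution drifts.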