ArXiv TLDR

Knowing When to Quit: A Principled Framework for Dynamic Abstention in LLM Reasoning

arXiv: 2604.18419

Hen Davidov, Nachshon Cohen, Oren Kalinsky, Yaron Fairstein, Guy Kushilevitz + 2 more

cs.LG · cs.CL · stat.ML

TLDR

This paper introduces a principled RL framework for dynamic LLM abstention, reducing wasted compute by terminating unpromising reasoning traces mid-generation.

Key contributions

  • Formalizes dynamic LLM abstention as an action within a regularized reinforcement learning framework.
  • Proves that abstaining whenever the value function falls below the abstention reward parameter strictly outperforms natural baselines under general conditions.
  • Derives a principled and efficient method to approximate the value function for practical application.
  • Demonstrates improved selective accuracy on mathematical reasoning and toxicity avoidance tasks.
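The abstention rule in the contributions above can be sketched in a few lines: at each decoding step, compare an estimate of the value function (the expected reward of continuing the trace) against the abstention reward parameter, and stop early if continuing is no longer worth it. This is an illustrative sketch only; the names `estimate_value` and `r_abs` are hypothetical and the paper's actual value-function approximation is not reproduced here.

```python
# Hypothetical sketch of dynamic mid-generation abstention: at each step,
# abstain if the estimated value of continuing drops below the abstention
# reward r_abs. Names here are illustrative, not from the paper.

def generate_with_abstention(steps, estimate_value, r_abs):
    """Emit tokens until the estimated value of continuing falls below r_abs.

    steps          -- iterable of (token, state) pairs from a decoder
    estimate_value -- callable: state -> estimated value of continuing
    r_abs          -- abstention reward (compute/information trade-off)
    Returns (tokens, abstained).
    """
    tokens = []
    for token, state in steps:
        if estimate_value(state) < r_abs:
            return tokens, True   # abstain: trace looks unpromising
        tokens.append(token)
    return tokens, False          # trace completed normally
```

Raising `r_abs` makes the rule abstain more aggressively (saving compute at the cost of withheld answers), which matches the trade-off the paper attributes to this parameter.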

Why it matters

LLMs often burn substantial compute on long, ultimately incorrect reasoning traces. This paper provides a principled dynamic abstention method that significantly reduces that waste, pairing a theoretical foundation with a practical value-function approximation for more efficient LLM operation.

Original Abstract

Large language models (LLMs) using chain-of-thought reasoning often waste substantial compute by producing long, incorrect responses. Abstention can mitigate this by withholding outputs unlikely to be correct. While most abstention methods decide to withhold outputs before or after generation, dynamic mid-generation abstention considers early termination of unpromising reasoning traces at each token position. Prior work has explored empirical variants of this idea, but principled guidance for the abstention rule remains lacking. We present a formal analysis of dynamic abstention for LLMs, modeling abstention as an explicit action within a regularized reinforcement learning framework. An abstention reward parameter controls the trade-off between compute and information. We show that abstaining when the value function falls below this reward strictly outperforms natural baselines under general conditions. We further derive a principled and efficient method to approximate the value function. Empirical results on mathematical reasoning and toxicity avoidance tasks support our theory and demonstrate improved selective accuracy over existing methods.
