ArXiv TLDR

On Reasoning-Centric LLM-based Automated Theorem Proving

arXiv:2604.19558

Yican Sun, Chengwei Shi, Hangzhou Lyu, Yingfei Xiong

cs.SE

TLDR

ReCent-Prover enhances automated theorem proving by using reasoning-centric LLMs for strategic planning and self-critique, achieving a 22.58% relative improvement in proved theorems on the CoqStoq benchmark.

Key contributions

  • Introduces "validation with reflection" for LLMs to self-critique generated tactics and filter errors early.
  • Proposes "retrieval with planning" to align lemma retrieval with LLM-generated proof strategies.
  • Achieves a 22.58% relative improvement in proved theorems on the CoqStoq benchmark over state-of-the-art.

Why it matters

ReCent-Prover shows that LLM-based proof agents benefit from reasoning about proofs, not merely generating tactics: strategic planning and self-critique together yield a substantially more capable agent, a step toward more robust and efficient formal methods.

Original Abstract

Automated theorem proving is fundamental to formal methods, and the recent trend is to integrate large language models (LLMs) and proof assistants to form effective proof agents. While existing proof agents show promising performance, they inadequately leverage reasoning capabilities of modern LLMs in high-level planning and self-critique. We argue that proof agents should not merely generate tactics but also reason strategically about proof plans and critically evaluate their own proposals. This paper introduces ReCent-Prover, a reasoning-centric LLM-based proof agent for Rocq that addresses two critical limitations in current systems. First, we present validation with reflection, enabling LLMs to scrutinize their generated tactics and synthesize failure summaries when reflection identifies potential errors, filtering out potentially misapplied tactics earlier. Second, we propose retrieval with planning, which conditions retrieval on LLM-generated proof plans rather than subgoal similarity, retrieving lemmas and proofs that align with the anticipated proof strategy. Both techniques increase the number of invocations of LLMs. However, when evaluated on the CoqStoq benchmark, even under the same budget of LLM invocations, ReCent-Prover achieves a 22.58% relative improvement in the number of proved theorems over the previous state-of-the-art, demonstrating that our reasoning-centric design significantly enhances automated theorem proving capabilities.
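The "retrieval with planning" technique described in the abstract can be illustrated by scoring candidate lemmas against an LLM-generated proof plan instead of against the raw subgoal text. The sketch below uses a toy bag-of-words similarity as a stand-in for a real retriever; the plan text, lemma library, and function names are all illustrative assumptions, not the paper's actual system.

```python
# Hedged sketch of "retrieval with planning": rank library lemmas by
# similarity to the proof plan rather than to the current subgoal.
# embed() is a toy bag-of-words stand-in for a learned retriever.
from collections import Counter
import math


def embed(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_with_plan(plan: str, lemmas: list[str], k: int = 2) -> list[str]:
    """Return the k lemmas most similar to the proof plan."""
    q = embed(plan)
    return sorted(lemmas, key=lambda lem: cosine(q, embed(lem)), reverse=True)[:k]


# Illustrative plan and lemma library (hypothetical examples).
plan = "rewrite with app_assoc after induction on the list"
lemmas = [
    "app_assoc : forall l m n, l ++ m ++ n = (l ++ m) ++ n",
    "plus_comm : forall a b, a + b = b + a",
    "rev_involutive : forall l, rev (rev l) = l",
]
top = retrieve_with_plan(plan, lemmas, k=1)
```

Because the query is the plan, a lemma mentioned by the anticipated strategy (here `app_assoc`) ranks first even when the current subgoal text alone would not surface it, which is the alignment the abstract describes.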
