Beyond Output Correctness: Benchmarking and Evaluating Large Language Model Reasoning in Coding Tasks
Yuangang Li, Justin Tian Jin Chen, Ethan Yu, David Hong, Iftekhar Ahmed
TLDR
This paper introduces CodeRQ-Bench, the first benchmark for evaluating the quality of LLM reasoning across coding tasks, and VERA, a two-stage evaluator that consistently outperforms existing reasoning evaluators on it.
Key contributions
- Introduces CodeRQ-Bench, the first benchmark for evaluating LLM reasoning quality across three coding task categories: code generation, summarization, and classification.
- Analyzes 1,069 mismatch cases from existing evaluators, identifying five recurring limitations and deriving four design insights for reasoning evaluation in coding tasks.
- Proposes VERA, a two-stage evaluator combining evidence-grounded verification with ambiguity-aware score correction (a rough sketch follows this list).
- Shows that VERA consistently outperforms strong baselines across four datasets, improving AUCROC by up to 0.26 and AUPRC by up to 0.21.
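The paper's implementation is not reproduced in this digest, so the following is only a minimal sketch of what a two-stage, evidence-grounded-then-ambiguity-corrected evaluator could look like. Every name here (`verify_with_evidence`, `correct_for_ambiguity`, the `STEP` verdict protocol) is a hypothetical stand-in, not VERA's actual API.

```python
# Hypothetical sketch of a two-stage reasoning evaluator in the spirit of
# "evidence-grounded verification + ambiguity-aware score correction".
# All names and the STEP protocol are illustrative, not VERA's actual design.
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float      # fraction of reasoning steps supported by evidence, in [0, 1]
    ambiguity: float  # fraction of steps the judge could not decide, in [0, 1]

def verify_with_evidence(reasoning: str, task: str, llm) -> Verdict:
    """Stage 1: ask a judge LLM to check each reasoning step against
    concrete evidence from the task (code, docstring, labels)."""
    prompt = (
        "Verify each step of the reasoning below against the task.\n"
        f"Task:\n{task}\n\nReasoning:\n{reasoning}\n\n"
        "Reply with one line per step: 'STEP <i>: SUPPORTED|UNSUPPORTED|UNCLEAR'."
    )
    lines = [l for l in llm(prompt).splitlines() if l.startswith("STEP")]
    if not lines:  # judge produced no usable evidence: treat as fully ambiguous
        return Verdict(score=0.5, ambiguity=1.0)
    supported = sum("UNSUPPORTED" not in l and "SUPPORTED" in l for l in lines)
    unclear = sum("UNCLEAR" in l for l in lines)
    return Verdict(score=supported / len(lines), ambiguity=unclear / len(lines))

def correct_for_ambiguity(v: Verdict, prior: float = 0.5) -> float:
    """Stage 2: shrink the raw score toward a neutral prior in proportion
    to how ambiguous the stage-1 evidence was."""
    return (1.0 - v.ambiguity) * v.score + v.ambiguity * prior
```

The second stage is one plausible reading of "ambiguity-aware score correction": when the stage-1 evidence is unclear, the raw score is pulled toward a neutral prior rather than trusted at face value.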
Why it matters
As LLMs increasingly rely on explicit reasoning to solve coding tasks, judging the quality of that reasoning, not just the final output, becomes essential. Existing reasoning evaluators are not designed for coding, and current benchmarks focus almost entirely on code generation. CodeRQ-Bench and VERA address this gap, giving researchers a benchmark and an evaluator for building more robust and reliable coding LLMs.
Original Abstract
Large language models (LLMs) increasingly rely on explicit reasoning to solve coding tasks, yet evaluating the quality of this reasoning remains challenging. Existing reasoning evaluators are not designed for coding, and current benchmarks focus primarily on code generation, leaving other coding tasks largely unexplored. We introduce CodeRQ-Bench, the first benchmark for evaluating LLM reasoning quality across three coding task categories: generation, summarization, and classification. Using this benchmark, we analyze 1,069 mismatch cases from existing evaluators, identify five recurring limitations, and derive four design insights for reasoning evaluation in coding tasks. Guided by these insights, we propose VERA, a two-stage evaluator that combines evidence-grounded verification with ambiguity-aware score correction. Experiments on CodeRQ-Bench show that VERA consistently outperforms strong baselines across four datasets, improving AUCROC by up to 0.26 and AUPRC by up to 0.21. We release CodeRQ-Bench at https://github.com/MrLYG/CodeRQ-Bench, supporting future investigations.
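For context on the reported gains: AUCROC and AUPRC measure how well an evaluator's continuous quality scores rank correct reasoning above flawed reasoning, so improvements of 0.26 and 0.21 are large on their 0-to-1 scales. Below is a minimal example of computing both with scikit-learn on made-up labels and scores (not the paper's data).

```python
# Minimal example of the two metrics reported in the paper, computed with
# scikit-learn on invented data (not the paper's results).
from sklearn.metrics import roc_auc_score, average_precision_score

# 1 = reasoning judged correct by ground truth, 0 = flawed reasoning
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# An evaluator's reasoning-quality scores for the same eight cases
y_score = [0.9, 0.4, 0.8, 0.6, 0.3, 0.55, 0.7, 0.2]

print(f"AUCROC: {roc_auc_score(y_true, y_score):.3f}")
print(f"AUPRC:  {average_precision_score(y_true, y_score):.3f}")
```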