ArXiv TLDR

Rethinking Math Reasoning Evaluation: A Robust LLM-as-a-Judge Framework Beyond Symbolic Rigidity

arXiv: 2604.22597

Erez Yosef, Oron Anschel, Shunit Haviv Hakimi, Asaf Gendler, Adam Botach + 2 more

cs.AI

TLDR

This paper introduces an LLM-as-a-judge framework for evaluating math reasoning answers, overcoming the format brittleness of rule-based symbolic comparison.

Key contributions

  • Proposes an LLM-as-a-judge framework for grading model-generated math answers.
  • Overcomes the brittleness of rigid symbolic comparison methods (see the sketch after this list).
  • Accurately handles diverse mathematical representations and answer formats.
  • Demonstrates clear improvements over the symbolic checks used in popular frameworks such as Lighteval and SimpleRL.
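To make the failure mode concrete, here is a minimal sketch (not the paper's code) of the kind of SymPy-based equivalence check that rule-based harnesses rely on; the `symbolically_equal` helper is invented for illustration. It works when both strings parse into SymPy expressions, but scores obviously correct answers as wrong whenever the format falls outside the parser's grammar:

```python
# Minimal sketch of rigid symbolic comparison and where it breaks.
# SymPy handles canonical expressions, but many answer formats that are
# clearly equivalent to a human reader simply fail to parse.
from sympy import simplify, sympify, SympifyError

def symbolically_equal(prediction: str, ground_truth: str) -> bool:
    """True iff both strings parse and their difference simplifies to zero."""
    try:
        return simplify(sympify(prediction) - sympify(ground_truth)) == 0
    except (SympifyError, SyntaxError, TypeError):
        # Any unparseable format is scored as wrong, even if correct.
        return False

print(symbolically_equal("0.5", "1/2"))           # True: both parse cleanly
print(symbolically_equal("50%", "1/2"))           # False: '%' fails to parse
print(symbolically_equal(r"\frac{1}{2}", "1/2"))  # False: raw LaTeX fails to parse
print(symbolically_equal("x = 1/2", "1/2"))       # False: '=' is not an expression
```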

Why it matters

Symbolic answer checkers mark correct answers wrong whenever the format deviates from the ground truth, which silently skews benchmark scores. This LLM-as-a-judge framework offers a robust alternative, enabling more accurate benchmarking and more reliable performance monitoring of mathematical reasoning in LLMs.
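For intuition, below is a hypothetical sketch of the general LLM-as-a-judge pattern, assuming an OpenAI-compatible client; the prompt, the model name, and the `judge_equivalent` helper are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical LLM-as-a-judge equivalence check (illustrative, not the
# paper's prompt or judge model). Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading a math answer.
Question: {question}
Ground-truth answer: {ground_truth}
Model answer: {prediction}
Do the two answers denote the same mathematical value or object,
ignoring formatting (fractions vs. decimals, LaTeX, units, ordering)?
Reply with exactly one word: EQUIVALENT or DIFFERENT."""

def judge_equivalent(question: str, prediction: str, ground_truth: str) -> bool:
    """Ask the judge model whether the prediction matches the ground truth."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not the paper's judge
        temperature=0,        # deterministic grading
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, ground_truth=ground_truth,
            prediction=prediction)}],
    )
    verdict = (response.choices[0].message.content or "").strip().upper()
    return verdict.startswith("EQUIVALENT")

# Formats that defeat symbolic parsers are easy for an LLM judge, e.g.:
# judge_equivalent("What fraction of the circle is shaded?", "50%", "1/2")
```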

Original Abstract

Recent advancements in large language models have led to significant improvements across various tasks, including mathematical reasoning, which is used to assess models' intelligence in logical reasoning and problem-solving. Models are evaluated on mathematical reasoning benchmarks by verifying the correctness of the final answer against a ground truth answer. A common approach for this verification is based on symbolic mathematics comparison, which fails to generalize across diverse mathematical representations and solution formats. In this work, we offer a robust and flexible alternative to rule-based symbolic mathematics comparison. We propose an LLM-based evaluation framework for evaluating model-generated answers, enabling accurate evaluation across diverse mathematical representations and answer formats. We present failure cases of symbolic evaluation in two popular frameworks, Lighteval and SimpleRL, and compare them to our approach, demonstrating clear improvements over commonly used methods. Our framework enables more reliable evaluation and benchmarking, leading to more accurate performance monitoring, which is important for advancing mathematical problem-solving and intelligent systems.
