Beyond Benchmarks: MathArena as an Evaluation Platform for Mathematics with LLMs
Jasper Dekoninck, Nikola Jovanović, Tim Gehrunger, Kári Rögnvalddson, Ivo Petrov + 2 more
TLDR
MathArena is now an expanded, continuously updated evaluation platform for LLMs in math, covering diverse tasks from olympiads to formal proofs.
Key contributions
- Transforms MathArena into a continuously maintained evaluation platform for LLM mathematical reasoning.
- Expands scope to include proof-based competitions, research-level arXiv problems, and formal Lean proofs.
- Establishes a clear evaluation protocol and regularly designs new benchmarks to keep the platform challenging (see the aggregation sketch after this list).
- Demonstrates that frontier LLMs can solve extremely challenging math problems: the strongest model, GPT-5.5, reaches 98% on the 2026 USA Math Olympiad.
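The summary does not spell out how the evaluation protocol aggregates results across benchmarks. The snippet below is a minimal, hypothetical sketch of rolling per-benchmark solve rates into one leaderboard entry; the names, interface, and the Lean figure are assumptions, not MathArena's actual code or numbers.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class BenchmarkResult:
    """Per-benchmark outcome for one model: fraction of problems solved."""
    benchmark: str     # e.g. "USAMO 2026", "arXiv research questions"
    solve_rate: float  # in [0, 1], averaged over repeated runs


def aggregate(results: list[BenchmarkResult]) -> dict[str, float]:
    """Collect per-benchmark solve rates and add a simple overall score.

    The overall score is an unweighted mean across benchmarks, a deliberate
    simplification of whatever weighting a real platform might use.
    """
    per_benchmark = {r.benchmark: r.solve_rate for r in results}
    per_benchmark["overall"] = mean(per_benchmark.values())
    return per_benchmark


if __name__ == "__main__":
    # The 0.98 and 0.74 figures come from the abstract; the Lean figure is
    # made up purely for illustration.
    demo = [
        BenchmarkResult("USAMO 2026 (proof-based)", 0.98),
        BenchmarkResult("arXiv research questions", 0.74),
        BenchmarkResult("Lean formal proofs", 0.50),
    ]
    print(aggregate(demo))
```

An unweighted mean is the simplest possible roll-up; a real platform could instead weight benchmarks by difficulty, recency, or number of problems.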
Why it matters
Static benchmarks cannot keep pace with rapid LLM progress in mathematics: they saturate quickly and are rarely updated. A continuously maintained platform like MathArena enables comprehensive, ongoing evaluation across diverse, challenging math tasks, making it possible to track progress reliably and to compare models on equal footing as mathematical AI develops.
Original Abstract
Large language models (LLMs) are becoming increasingly capable mathematical collaborators, but static benchmarks are no longer sufficient for evaluating progress: they are often narrow in scope, quickly saturated, and rarely updated. This makes it hard to compare models reliably and track progress over time. Instead, we need evaluation platforms: continuously maintained systems that run, aggregate, and analyze evaluations across many benchmarks to give a comprehensive picture of model performance within a broad domain. In this work, we build on the original MathArena benchmark by substantially broadening its scope from final-answer olympiad problems to a continuously maintained evaluation platform for mathematical reasoning with LLMs. MathArena now covers a much wider range of tasks, including proof-based competitions, research-level arXiv problems, and formal proof generation in Lean. Additionally, we maintain a clear evaluation protocol for all models and regularly design new benchmarks as model capabilities improve to ensure that MathArena remains challenging. Notably, the strongest model, GPT-5.5, now reaches 98% on the 2026 USA Math Olympiad and 74% on research-level questions, showing that frontier models can now comfortably solve extremely challenging mathematical problems. This highlights the importance of continuously maintained evaluation platforms like MathArena to track the rapid progress of LLMs in mathematical reasoning.
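The abstract mentions formal proof generation in Lean. As a toy illustration only (not a problem from the paper), the following is the kind of machine-checkable artifact such a benchmark would verify with the Lean compiler rather than with a human or LLM judge:

```lean
-- Toy example of a machine-checkable Lean 4 statement and proof
-- (illustrative only; real benchmark problems are far harder).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```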