ArXiv TLDR

Growing Pains: Extensible and Efficient LLM Benchmarking Via Fixed Parameter Calibration

arXiv: 2604.12843

Eliya Habba, Itay Itzhak, Asaf Yehudai, Yotam Perlitz, Elron Bandel + 3 more

cs.CL

TLDR

This paper introduces an IRT-based framework for extensible and efficient LLM benchmarking, using anchor items to ensure score comparability over time.

Key contributions

  • Addresses the challenge of costly and incomparable LLM evaluations.
  • Proposes a multidimensional Item Response Theory (IRT) framework.
  • Uses fixed anchor items to calibrate new benchmarks and maintain score comparability (see the sketch after this list).
  • Predicts full-evaluation performance within 2-3 percentage points using just 100 anchor questions per dataset.
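
A minimal sketch of the calibration idea, under simplifying assumptions: the paper uses a multidimensional IRT model, while the toy example below is a unidimensional 2PL fit on simulated responses, with all names and sizes hypothetical. New items are calibrated onto the existing ability scale by maximizing the joint likelihood while the anchor items' parameters stay fixed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n_models, n_anchor, n_new = 60, 20, 30

# Ground truth used only to simulate a response matrix (models x items).
theta_true = rng.normal(size=n_models)                 # model abilities
a_true = rng.uniform(0.5, 2.0, n_anchor + n_new)       # item discriminations
b_true = rng.normal(size=n_anchor + n_new)             # item difficulties

def p_correct(theta, a, b):
    """2PL probability that a model with ability theta answers each item correctly."""
    return expit(a * (theta[:, None] - b))

responses = (rng.uniform(size=(n_models, n_anchor + n_new))
             < p_correct(theta_true, a_true, b_true)).astype(float)

# Anchor items are assumed already calibrated; their parameters stay FIXED.
a_anchor, b_anchor = a_true[:n_anchor], b_true[:n_anchor]

def neg_log_lik(params):
    """Joint likelihood over abilities and the NEW items' parameters only."""
    theta = params[:n_models]
    a_new = np.exp(params[n_models:n_models + n_new])  # keep discriminations > 0
    b_new = params[n_models + n_new:]
    a = np.concatenate([a_anchor, a_new])
    b = np.concatenate([b_anchor, b_new])
    p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
    return -(responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum()

x0 = np.zeros(n_models + 2 * n_new)
fit = minimize(neg_log_lik, x0, method="L-BFGS-B")
theta_hat = fit.x[:n_models]

print("ability recovery (Pearson r):", np.corrcoef(theta_hat, theta_true)[0, 1])
```

Holding the anchor parameters fixed is what pins the scale: abilities estimated in later evaluation periods land on the same scale as earlier ones, so scores remain directly comparable.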

Why it matters

As LLMs and benchmarks rapidly proliferate, evaluating every model on every dataset becomes impractical, and scores obtained on different samples are hard to compare. This framework offers a scalable solution: new benchmarks can be added at a constant evaluation cost per dataset while results remain comparable across evaluation periods. This is crucial for tracking model progress reliably.

Original Abstract

The rapid release of both language models and benchmarks makes it increasingly costly to evaluate every model on every dataset. In practice, models are often evaluated on different samples, making scores difficult to compare across studies. To address this, we propose a framework based on multidimensional Item Response Theory (IRT) that uses anchor items to calibrate new benchmarks to the evaluation suite while holding previously calibrated item parameters fixed. Our approach supports a realistic evaluation setting in which datasets are introduced over time and models are evaluated only on the datasets available at the time of evaluation, while a fixed anchor set for each dataset is used so that results from different evaluation periods can be compared directly. In large-scale experiments on more than $400$ models, our framework predicts full-evaluation performance within 2-3 percentage points using only $100$ anchor questions per dataset, with Spearman $\rho \geq 0.9$ for ranking preservation, showing that it is possible to extend benchmark suites over time while preserving score comparability, at a constant evaluation cost per new dataset. Code available at https://github.com/eliyahabba/growing-pains
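
As a second hedged sketch (not the released code; sizes and parameter values are simulated), this is how the headline numbers are typically measured: each model's ability is estimated from its responses to a small anchor set with item parameters held fixed, its full-evaluation accuracy is predicted as the expected fraction of items answered correctly, and the prediction is compared to the observed full accuracy via mean absolute error and Spearman correlation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_models, n_items, n_anchor = 400, 2000, 100   # illustrative sizes only

# Simulated "already calibrated" item parameters and model abilities.
theta = rng.normal(size=n_models)
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(size=n_items)

def prob(t, a, b):
    """2PL probability of a correct response."""
    return expit(a * (t - b))

responses = (rng.uniform(size=(n_models, n_items))
             < prob(theta[:, None], a, b)).astype(float)
anchor = rng.choice(n_items, n_anchor, replace=False)

def estimate_theta(resp_row):
    """MLE of a model's ability from its anchor responses, item parameters fixed."""
    def nll(t):
        p = np.clip(prob(t, a[anchor], b[anchor]), 1e-9, 1 - 1e-9)
        return -(resp_row * np.log(p) + (1 - resp_row) * np.log(1 - p)).sum()
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

theta_hat = np.array([estimate_theta(r) for r in responses[:, anchor]])

# Predicted full-evaluation accuracy = expected fraction of all items correct.
pred_acc = prob(theta_hat[:, None], a, b).mean(axis=1)
true_acc = responses.mean(axis=1)

rho, _ = spearmanr(pred_acc, true_acc)
print("mean abs error (percentage points):", 100 * np.abs(pred_acc - true_acc).mean())
print("Spearman rho:", rho)
```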
