
BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation

arXiv: 2604.09497

Hippolyte Gisserot-Boukhlef, Nicolas Boizard, Emmanuel Malherbe, Céline Hudelot, Pierre Colombo

cs.CL, cs.AI

TLDR

BERT-as-a-Judge is an efficient, encoder-driven alternative to lexical methods for reference-based LLM evaluation, robust to variations in phrasing while matching the performance of much larger LLM judges.

Key contributions

  • Systematically investigates the limitations of lexical evaluation in a large-scale empirical study spanning 36 models and 15 downstream tasks, showing that such methods correlate poorly with human judgments (a toy illustration follows this list).
  • Introduces BERT-as-a-Judge, an encoder-driven method for reference-based LLM evaluation that is robust to variations in output phrasing and requires only lightweight training on synthetically annotated question-candidate-reference triplets.
  • Matches the performance of much larger LLM judges while consistently outperforming the lexical baseline, enabling reliable, scalable evaluation.
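To make the lexical-evaluation failure mode concrete, here is a minimal, self-contained Python sketch (not from the paper) of a rigid exact-match check: every candidate below is semantically correct, yet only the one that mirrors the reference's exact formatting passes.

```python
# Toy illustration (not from the paper) of how rigid lexical matching
# conflates correctness with formatting compliance.
reference = "Paris"
candidates = ["Paris", "paris.", "The answer is Paris.", "It's Paris"]

def exact_match(candidate: str, reference: str) -> bool:
    # Rigid lexical check: string identity after whitespace stripping.
    return candidate.strip() == reference.strip()

for c in candidates:
    print(f"{c!r:<24} exact_match={exact_match(c, reference)}")
# Only 'Paris' passes, although all four answers are correct.
```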

Why it matters

Accurate LLM evaluation is crucial, but existing methods are either rigid and inaccurate (lexical matching) or computationally expensive (LLM judges). BERT-as-a-Judge offers an efficient, robust, and scalable middle ground, making high-quality LLM evaluation accessible to practitioners.

Original Abstract

Accurate evaluation is central to the large language model (LLM) ecosystem, guiding model selection and downstream adoption across diverse use cases. In practice, however, evaluating generative outputs typically relies on rigid lexical methods to extract and assess answers, which can conflate a model's true problem-solving ability with its compliance with predefined formatting guidelines. While recent LLM-as-a-Judge approaches mitigate this issue by assessing semantic correctness rather than strict structural conformity, they also introduce substantial computational overhead, making evaluation costly. In this work, we first systematically investigate the limitations of lexical evaluation through a large-scale empirical study spanning 36 models and 15 downstream tasks, demonstrating that such methods correlate poorly with human judgments. To address this limitation, we introduce BERT-as-a-Judge, an encoder-driven approach for assessing answer correctness in reference-based generative settings, robust to variations in output phrasing, and requiring only lightweight training on synthetically annotated question-candidate-reference triplets. We show that it consistently outperforms the lexical baseline while matching the performance of much larger LLM judges, providing a compelling tradeoff between the two and enabling reliable, scalable evaluation. Finally, through extensive experimentation, we provide detailed insights into BERT-as-a-Judge's performance to offer practical guidance for practitioners, and release all project artifacts to foster downstream adoption.
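For intuition, the following is a minimal sketch of how an encoder-driven judge along these lines could be wired up with Hugging Face Transformers. The base checkpoint (bert-base-uncased), the way the question-candidate-reference triplet is packed into a text pair, and the binary classification head are all assumptions for illustration, not the paper's actual recipe; the authors' released artifacts should be preferred in practice.

```python
# Hypothetical sketch of an encoder-based judge in the spirit of
# BERT-as-a-Judge. Checkpoint, triplet packing, and hyperparameters are
# assumptions; the paper's released artifacts define the real setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed base encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Two labels: 0 = incorrect, 1 = correct.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def judge(question: str, candidate: str, reference: str) -> float:
    """Probability that `candidate` correctly answers `question` given
    `reference`. The separator scheme below is an illustrative assumption."""
    context = f"question: {question} reference: {reference}"
    inputs = tokenizer(context, candidate, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Before use, the classifier head must be fine-tuned on synthetically
# annotated (question, candidate, reference) triplets, e.g. with the
# standard transformers Trainer; an untrained head yields noise.
print(judge("What is the capital of France?", "It's Paris.", "Paris"))
```

A cross-encoder like this scores a triplet in a single forward pass of a roughly 110M-parameter model, which is where the efficiency gain over multi-billion-parameter LLM judges comes from.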
