ArXiv TLDR

Who Defines "Best"? Towards Interactive, User-Defined Evaluation of LLM Leaderboards

arXiv: 2604.21769

Minji Jung, Minjae Lee, Yejin Kim, Sarang Choi, Minsuk Kahng

cs.AI · cs.CY · cs.HC

TLDR

This paper introduces an interactive interface that lets users define their own evaluation priorities and explore how LLM leaderboard rankings change accordingly.

Key contributions

  • Analyzed the LMArena (formerly Chatbot Arena) dataset, revealing heavy topic skew and model rankings that vary across prompt slices.
  • Developed an interactive visualization interface through which users express their own LLM evaluation priorities.
  • Enables users to select and weight prompt slices to customize leaderboard views (a minimal sketch follows this list).
  • A qualitative study suggests improved transparency and more context-specific model evaluation.
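
To make the weighting mechanism concrete, here is a minimal sketch of user-weighted re-ranking. It is not the paper's implementation: the model names, slice names, and per-slice scores are all hypothetical, and it assumes each model already has a quality score (e.g., a win rate) per prompt slice. A user-supplied weight vector over slices then induces a custom leaderboard.

```python
# Minimal sketch of user-weighted leaderboard re-ranking.
# All model names, slices, and scores below are hypothetical;
# the paper's interface works on LMArena data, not these numbers.

# Per-slice quality scores (e.g., win rates) for each model.
scores = {
    "model-a": {"coding": 0.62, "math": 0.55, "writing": 0.48},
    "model-b": {"coding": 0.50, "math": 0.60, "writing": 0.58},
    "model-c": {"coding": 0.45, "math": 0.52, "writing": 0.65},
}

def rank(scores, weights):
    """Rank models by a weighted average of per-slice scores.

    `weights` maps slice name -> user-assigned importance;
    slices omitted from `weights` get weight 0.
    """
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one slice must have nonzero weight")
    agg = {
        model: sum(weights.get(s, 0.0) * v for s, v in per_slice.items()) / total
        for model, per_slice in scores.items()
    }
    return sorted(agg.items(), key=lambda kv: kv[1], reverse=True)

# A coding-focused user and a writing-focused user see different "best" models.
print(rank(scores, {"coding": 3, "math": 1, "writing": 0}))
print(rank(scores, {"coding": 0, "math": 1, "writing": 3}))
```

With these toy numbers, the coding-weighted view puts model-a on top while the writing-weighted view puts model-c on top, which is exactly the kind of rank instability across prompt slices the paper's analysis highlights.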

Why it matters

LLM leaderboards guide high-stakes model comparisons and deployment decisions, yet they often reflect benchmark designers' priorities rather than diverse user needs. This work responds by empowering users to define "best" for their specific context, offering a more transparent and flexible approach to LLM evaluation that moves beyond static, aggregate scores.

Original Abstract

LLM leaderboards are widely used to compare models and guide deployment decisions. However, leaderboard rankings are shaped by evaluation priorities set by benchmark designers, rather than by the diverse goals and constraints of actual users and organizations. A single aggregate score often obscures how models behave across different prompt types and compositions. In this work, we conduct an in-depth analysis of the dataset used in the LMArena (formerly Chatbot Arena) benchmark and investigate this evaluation challenge by designing an interactive visualization interface as a design probe. Our analysis reveals that the dataset is heavily skewed toward certain topics, that model rankings vary across prompt slices, and that preference-based judgments are used in ways that blur their intended scope. Building on this analysis, we introduce a visualization interface that allows users to define their own evaluation priorities by selecting and weighting prompt slices and to explore how rankings change accordingly. A qualitative study suggests that this interactive approach improves transparency and supports more context-specific model evaluation, pointing toward alternative ways to design and use LLM leaderboards.
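
For readers wondering where per-slice scores like those assumed above might come from: LMArena rankings are derived from pairwise human preference votes. The sketch below, again using invented battle records, aggregates such votes into per-slice win rates. LMArena's real leaderboard fits a Bradley-Terry style rating model rather than raw win rates, so treat this only as an illustration of slicing preference data by prompt type.

```python
from collections import defaultdict

# Hypothetical pairwise-preference records: (winner, loser, prompt_slice).
# Real LMArena battles carry full prompts, votes, and metadata; this toy
# version keeps only the win/loss outcome and a slice label.
battles = [
    ("model-a", "model-b", "coding"),
    ("model-a", "model-c", "coding"),
    ("model-b", "model-a", "writing"),
    ("model-c", "model-b", "writing"),
]

def per_slice_win_rates(battles):
    """Return each model's win rate within each prompt slice."""
    wins = defaultdict(int)   # (model, slice) -> battles won
    games = defaultdict(int)  # (model, slice) -> battles played
    for winner, loser, prompt_slice in battles:
        wins[(winner, prompt_slice)] += 1
        games[(winner, prompt_slice)] += 1
        games[(loser, prompt_slice)] += 1
    return {key: wins[key] / n for key, n in games.items()}

for (model, prompt_slice), rate in sorted(per_slice_win_rates(battles).items()):
    print(f"{model:8s} {prompt_slice:8s} {rate:.2f}")
```

The output of this step has exactly the per-slice score shape the re-ranking sketch above consumes.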
