Personalized Benchmarking: Evaluating LLMs by Individual Preferences
Cristina Garbacea, Heran Wang, Chenhao Tan
TLDR
Aggregate LLM benchmarks fail to capture individual user preferences; this paper proposes personalized evaluation using user query characteristics.
Key contributions
- Current LLM benchmarks average preferences, failing to account for individual user variations.
- Personalized LLM rankings, computed with Elo ratings and Bradley-Terry coefficients, diverge dramatically from the aggregate ranking.
- User query characteristics (topics, writing style) significantly influence individual LLM preferences.
- A compact combination of topic and style features can predict user-specific model rankings effectively.
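The paper does not publish its fitting code here, but the Bradley-Terry coefficients it ranks models by can be sketched with the standard minorization-maximization (MM) update of Hunter (2004), applied to one user's pairwise win counts. The win matrix below is a made-up toy example, not data from the paper:

```python
def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i][j] = number of times model i beat model j for one user.
    Returns strengths normalized to sum to 1 (higher = preferred).
    Uses the batch MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j),
    where W_i is model i's total wins and n_ij the i-vs-j battle count.
    """
    m = len(wins)
    p = [1.0] * m
    for _ in range(n_iter):
        new_p = []
        for i in range(m):
            w_i = sum(wins[i])  # total wins of model i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(m) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize for identifiability
    return p

# Toy example: 3 models, model 0 wins most of its battles.
strengths = bradley_terry([[0, 8, 9],
                           [2, 0, 7],
                           [1, 3, 0]])
```

Fitting this per user, rather than on pooled battles, is what produces the user-specific rankings whose correlation with the aggregate ranking the paper then measures.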
Why it matters
This paper reveals a critical flaw in current LLM evaluation, showing aggregate benchmarks fail to reflect individual user needs. It provides strong evidence for developing personalized evaluation methods, crucial for deploying LLMs effectively. Understanding individual preferences will lead to more aligned and useful AI systems.
Original Abstract
With the rise in capabilities of large language models (LLMs) and their deployment in real-world tasks, evaluating LLM alignment with human preferences has become an important challenge. Current benchmarks average preferences across all users to compute aggregate ratings, overlooking individual user preferences when establishing model rankings. Since users have varying preferences in different contexts, we call for personalized LLM benchmarks that rank models according to individual needs. We compute personalized model rankings using ELO ratings and Bradley-Terry coefficients for 115 active Chatbot Arena users and analyze how user query characteristics (topics and writing style) relate to LLM ranking variations. We demonstrate that individual rankings of LLM models diverge dramatically from aggregate LLM rankings, with Bradley-Terry correlations averaging only $\rho = 0.04$ (57% of users show near-zero or negative correlation) and ELO ratings showing moderate correlation ($\rho = 0.43$). Through topic modeling and style analysis, we find users exhibit substantial heterogeneity in topical interests and communication styles, influencing their model preferences. We further show that a compact combination of topic and style features provides a useful feature space for predicting user-specific model rankings. Our results provide strong quantitative evidence that aggregate benchmarks fail to capture individual preferences for most users, and highlight the importance of developing personalized benchmarks that rank LLM models according to individual user preferences.
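The abstract's other ranking method, Elo, and its rank correlations against the aggregate leaderboard can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: it uses a standard sequential Elo update (K-factor and base rating are assumed defaults) and a tie-free Spearman formula, whereas the paper's exact update schedule and tie handling are not specified here:

```python
def elo_ratings(battles, k=32, base=1000.0):
    """Sequential Elo over (winner, loser) pairs; k and base are assumptions."""
    r = {}
    for winner, loser in battles:
        rw = r.setdefault(winner, base)
        rl = r.setdefault(loser, base)
        # Expected score of the winner under the logistic Elo model.
        expected_w = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        r[winner] = rw + k * (1.0 - expected_w)
        r[loser] = rl - k * (1.0 - expected_w)
    return r

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction) between score lists."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        rk = [0] * len(v)
        for pos, i in enumerate(order):
            rk[i] = pos
        return rk
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Comparing one user's per-model scores against the aggregate leaderboard's scores with `spearman` is the kind of computation behind the reported $\rho = 0.04$ (Bradley-Terry) and $\rho = 0.43$ (Elo) figures.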