ArXiv TLDR

MultEval: Supporting Collaborative Alignment for LLM-as-a-Judge Evaluation Criteria

arXiv: 2604.26679

Charles Chiang, Simret Gebreegziabher, Annalisa Szymanski, Yukun Yang, Hyo Jin Do, and 4 others

cs.HC

TLDR

MultEval is a system designed to help multiple stakeholders collaboratively create, refine, and align evaluation criteria for LLM-as-a-judge systems.

Key contributions

  • Formative study revealed challenges in collaboratively defining LLM-as-a-judge evaluation criteria.
  • Introduces MultEval, a system for collaborative criteria authoring using consensus-building theory.
  • MultEval enables surfacing disagreements, iterative revision with examples, and transparent judgment encoding.
  • Case study shows how expert teams used MultEval to coordinate and achieve consensus on criteria.

Why it matters

LLM-as-a-judge systems are a scalable way to evaluate model behavior, but their evaluation criteria are typically written by a single person and so embed that individual's biases and assumptions. MultEval addresses this by supporting collaborative, transparent, consensus-driven criteria development, improving the fairness and robustness of LLM evaluations by aligning the perspectives of diverse stakeholders.

Original Abstract

LLM-as-a-judge approaches have emerged as a scalable solution for evaluating model behaviors, yet they rely on evaluation criteria often created by a single individual, embedding that person's assumptions, priorities, and interpretive lens. In practice, defining such criteria is a collaborative and contested process involving multiple stakeholders with different values, interpretations, and priorities, an aspect largely unsupported by existing tools. To examine this problem in depth, we present a formative study examining how stakeholders collaboratively create, negotiate, and refine evaluation criteria for LLM-as-a-judge systems. Our findings reveal challenges in human oversight, including difficulties in establishing shared understanding, aligning values across stakeholders with different expertise and priorities, and translating nuanced human judgments into criteria that are interpretable and actionable for LLM judges. Based on these insights, we developed MultEval, a system that supports collaborative criteria authoring by enabling multiple evaluators to surface and diagnose disagreements using consensus-building theory, iteratively revise criteria with attached examples and proposal history, and maintain transparency over how judgments are encoded into an automated evaluator. We further report a case study in which a team of domain experts used MultEval to collaboratively author criteria, illustrating how coordination and collaborative consensus-making shape criteria evolution.
