VLM Judges Can Rank but Cannot Score: Task-Dependent Uncertainty in Multimodal Evaluation
Divake Kumar, Sina Tayebati, Devashri Naik, Ranganath Krishnan, Amit Ranjan Trivedi
TLDR
VLM judges can rank multimodal outputs reliably but cannot assign trustworthy absolute scores; conformal prediction quantifies this task-dependent uncertainty.
Key contributions
- Applies conformal prediction to quantify VLM judge uncertainty without retraining.
- Shows VLM judge uncertainty is task-dependent: low for aesthetics and natural images, high for chart and mathematical reasoning.
- Uncovers "ranking-scoring decoupling": VLMs rank responses well but give unreliable absolute scores (see the sketch after this list).
- Demonstrates interval width is driven by task difficulty and annotation quality.
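As referenced above, here is a minimal sketch of how ranking-scoring decoupling could be surfaced in practice, assuming per-response conformal interval widths are already available (e.g., from the routine shown after the abstract below). The function and its inputs are illustrative, not the paper's actual code.

```python
import numpy as np
from scipy.stats import spearmanr

def decoupling_report(judge_scores, human_scores, interval_widths,
                      score_range=10.0):
    """Contrast a VLM judge's ranking ability with its scoring reliability.

    judge_scores:    per-response point scores from the judge
    human_scores:    per-response reference (human) scores
    interval_widths: per-response conformal interval widths
    """
    rho, _ = spearmanr(judge_scores, human_scores)   # ranking quality
    # Mean interval width as a fraction of the full score scale.
    rel_width = float(np.mean(interval_widths)) / score_range
    return {"spearman_rho": rho, "mean_relative_width": rel_width}

# Decoupling signature: high rho alongside wide intervals, e.g. a report
# like {"spearman_rho": 0.85, "mean_relative_width": 0.7} means the judge
# orders responses well while its absolute scores are barely informative.
```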
Why it matters
This paper highlights a critical limitation of VLMs as evaluators: their scores often lack reliability despite strong ranking ability. By introducing a method to quantify this uncertainty, it provides a practical tool for more trustworthy multimodal evaluation and offers a reliability map for VLM judges, guiding their appropriate use.
Original Abstract
Vision-language models (VLMs) are increasingly used as automated judges for multimodal systems, yet their scores provide no indication of reliability. We study this problem through conformal prediction, a distribution-free framework that converts a judge's point score into a calibrated prediction interval using only score-token log-probabilities, with no retraining. We present the first systematic analysis of conformal prediction for VLM-as-a-Judge across 3 judges and 14 visual task categories. Our results show that evaluation uncertainty is strongly task-dependent: intervals cover ~40% of the score range for aesthetics and natural images but expand to ~70% for chart and mathematical reasoning, yielding a quantitative reliability map for multimodal evaluation. We further identify a failure mode not captured by standard evaluation metrics, ranking-scoring decoupling, where judges achieve high ranking correlation while producing wide, uninformative intervals, correctly ordering responses but failing to assign reliable absolute scores. Finally, we show that interval width is driven primarily by task difficulty and annotation quality, i.e., the same judge and method yield 4.5x narrower intervals on a clean, multi-annotator captioning benchmark. Code: https://github.com/divake/VLM-Judge-Uncertainty
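The abstract describes converting score-token log-probabilities into calibrated prediction intervals without retraining. Below is a minimal sketch of one standard way to do this with split conformal prediction, assuming a nonconformity score of one minus the probability the judge assigns to the reference score; the paper's exact nonconformity function and interval construction may differ, and all names here are illustrative.

```python
import numpy as np

def softmax(logits):
    """Turn score-token log-probabilities into a distribution over score bins."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calibrate_qhat(cal_logits, cal_labels, alpha=0.1):
    """Split conformal calibration on a held-out set.

    cal_logits: (n, K) log-probabilities over K score bins
    cal_labels: (n,)   index of the human-annotated score bin
    Returns q_hat such that prediction sets built with it cover the
    true score with probability >= 1 - alpha (marginally).
    """
    probs = softmax(cal_logits)
    n = len(cal_labels)
    # Nonconformity: 1 - probability the judge assigns to the true score.
    scores = 1.0 - probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def score_intervals(test_logits, q_hat, score_values):
    """Report each prediction set as an interval [lowest, highest] score."""
    probs = softmax(test_logits)
    intervals = []
    for p in probs:
        keep = np.where(p >= 1.0 - q_hat)[0]
        if keep.size == 0:            # guarantee a non-empty set
            keep = np.array([p.argmax()])
        intervals.append((score_values[keep.min()], score_values[keep.max()]))
    return intervals

if __name__ == "__main__":
    # Toy demo with random logits on a 1-10 scale; real inputs would be the
    # judge's score-token log-probabilities and human calibration labels.
    rng = np.random.default_rng(0)
    scale = np.arange(1, 11)
    q_hat = calibrate_qhat(rng.normal(size=(500, 10)),
                           rng.integers(0, 10, size=500))
    print(score_intervals(rng.normal(size=(2, 10)), q_hat, scale))
```

The interval width produced this way is exactly the quantity the paper reports as task-dependent: wide intervals on hard tasks such as chart reasoning, narrow ones on clean, multi-annotator benchmarks.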