ArXiv TLDR

FUSE: Ensembling Verifiers with Zero Labeled Data

arXiv:2604.18547

Joonhyuk Lee, Virginia Ma, Sarah Zhao, Yash Nair, Asher Spector + 2 more

stat.ML · cs.CL · cs.LG

TLDR

FUSE improves LLM output verification by ensembling imperfect verifiers without ground truth labels, matching or outperforming semi-supervised methods.

Key contributions

  • Introduces FUSE, a method for ensembling LLM verifiers without needing ground truth labels.
  • Controls conditional dependencies between verifiers to enhance unsupervised spectral algorithm performance.
  • Achieves verification quality comparable to or better than semi-supervised alternatives.
  • Validated across diverse benchmarks including GPQA Diamond and frontier exams like Humanity's Last Exam.
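The spectral idea behind the second bullet can be illustrated with a minimal sketch (this is an illustration of the general spectral-ensembling approach from the literature, not the paper's actual FUSE algorithm): under a conditional-independence assumption, the off-diagonal of the verifier score covariance matrix is approximately rank one, so its leading eigenvector recovers each verifier's reliability up to scale. The function names and the ±1 score convention are assumptions made for the example.

```python
import numpy as np

def spectral_weights(scores):
    """Estimate per-verifier weights from unlabeled scores.

    scores: (m, n) array of m verifiers scoring n candidate outputs in [-1, 1].
    Under a conditional-independence model, the off-diagonal of the score
    covariance is approximately rank one, and its leading eigenvector is
    proportional to each verifier's reliability.
    """
    Q = np.cov(scores)                  # (m, m) covariance across verifiers
    R = Q - np.diag(np.diag(Q))         # drop the diagonal, which the model does not constrain
    vals, vecs = np.linalg.eigh(R)
    v = vecs[:, np.argmax(vals)]        # leading eigenvector
    if v.sum() < 0:                     # resolve sign ambiguity: verifiers are
        v = -v                          # assumed mostly positively correlated
    return v

def ensemble_scores(scores):
    """Combine verifier scores with spectrally estimated weights."""
    return spectral_weights(scores) @ scores   # (n,) ensembled score per item
```

On synthetic data where verifiers flip a latent ±1 label independently with different error rates, the recovered weights track each verifier's accuracy, which is what makes the combination useful without any labels.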

Why it matters

This paper addresses a critical challenge in LLM development: reliable output verification without costly human labels. FUSE offers a practical, fully unsupervised solution that matches or exceeds current semi-supervised methods, which could meaningfully accelerate both LLM training and deployment.

Original Abstract

Verification of model outputs is rapidly emerging as a key primitive for both training and real-world deployment of large language models (LLMs). In practice, this often involves using imperfect LLM judges and reward models since ground truth acquisition can be time-consuming and expensive. We introduce Fully Unsupervised Score Ensembling (FUSE), a method for improving verification quality by ensembling verifiers without access to ground truth correctness labels. The key idea behind FUSE is to control conditional dependencies between verifiers in a manner that improves the unsupervised performance of a class of spectral algorithms from the ensembling literature. Despite requiring zero ground truth labels, FUSE typically matches or improves upon semi-supervised alternatives in test-time scaling experiments with diverse sets of generator models, verifiers, and benchmarks. In particular, we validate our method on both conventional academic benchmarks such as GPQA Diamond and on frontier, unsaturated benchmarks such as Humanity's Last Exam and IMO Shortlist questions.
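In a test-time scaling setup like the one the abstract describes, an ensembled verifier score can drive best-of-n selection: generate n candidate answers, score each with every verifier, and keep the candidate with the highest weighted score. The sketch below is a hypothetical illustration of that workflow, not FUSE itself; `weights` stands in for whatever per-verifier weights an unsupervised ensembler produces.

```python
import numpy as np

def best_of_n(candidates, scores, weights):
    """Select the candidate with the highest ensembled verifier score.

    candidates: list of n generated answers.
    scores:     (m, n) array, one row of scores per verifier.
    weights:    (m,) per-verifier weights from an unsupervised ensembler.
    """
    combined = weights @ scores              # (n,) ensembled score per candidate
    return candidates[int(np.argmax(combined))]
```

The point of the unsupervised ensembling step is precisely that `weights` can be estimated from the score matrix alone, so this selection loop needs no ground truth labels at any stage.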
