CoCoReviewBench: A Completeness- and Correctness-Oriented Benchmark for AI Reviewers
Hexuan Deng, Xiaopeng Ke, Yichen Li, Ruina Hu, Dehao Huang + 4 more
TLDR
CoCoReviewBench is a new benchmark for evaluating AI reviewers on completeness and correctness, curating 3,900 papers from ICLR and NeurIPS with expert annotations drawn from reviewer-author-meta-review discussions.
Key contributions
- Introduces CoCoReviewBench, a new benchmark for AI reviewers with 3,900 papers from ICLR and NeurIPS.
- Strengthens completeness by building category-specific benchmark subsets and skipping evaluation when the corresponding human reviews are missing.
- Enhances correctness by leveraging reviewer-author-meta-review discussions as expert annotations.
- Analysis shows AI reviewers remain limited in correctness and are prone to hallucinations, and identifies reasoning models as the more effective reviewers.
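The completeness-oriented evaluation described above can be sketched as follows. This is a minimal illustration only: the category names, data layout, and recall-style scoring are hypothetical assumptions, not the benchmark's actual implementation.

```python
# Hypothetical sketch: score an AI review per issue category, skipping
# categories where human reviewers raised no issues (no gold reference),
# so the AI reviewer is not penalized for gaps in the human reviews.
def score_by_category(ai_issues, human_issues):
    """ai_issues / human_issues: dict mapping category -> set of issue ids."""
    scores = {}
    for category, gold in human_issues.items():
        if not gold:  # no human reference for this category -> skip it
            continue
        found = ai_issues.get(category, set())
        scores[category] = len(found & gold) / len(gold)  # recall vs. gold
    return scores

result = score_by_category(
    ai_issues={"methodology": {"m1"}, "novelty": {"n1", "n2"}},
    human_issues={"methodology": {"m1", "m2"}, "clarity": set()},
)
# "clarity" is skipped (empty gold set); "novelty" is unscored
# because the human reviews never covered it.
```

The key design point mirrored here is that absent human coverage leads to a skipped category rather than a zero score, which is how the benchmark strengthens completeness.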
Why it matters
Current AI reviewer evaluation is hampered by unreliable human reviews. CoCoReviewBench addresses this by offering a robust, completeness- and correctness-oriented benchmark. It provides a more reliable assessment of AI reviewer performance, revealing limitations and guiding future development towards more effective models.
Original Abstract
Despite the rapid development of AI reviewers, evaluating such systems remains challenging: metrics favor overlap with human reviews over correctness. However, since human reviews often cover only a subset of salient issues and sometimes contain mistakes, they are unreliable as gold references. To address this, we build category-specific benchmark subsets and skip evaluation when the corresponding human reviews are missing to strengthen Completeness. We also leverage reviewer--author--meta-review discussions as expert annotations and filter unreliable reviews accordingly to strengthen Correctness. Finally, we introduce CoCoReviewBench, which curates 3,900 papers from ICLR and NeurIPS to enable reliable and fine-grained evaluation of AI reviewers. Analysis shows that AI reviewers remain limited in correctness and are prone to hallucinations, and highlights reasoning models as more effective reviewers, motivating further directions for improving AI reviewers. Benchmarks and models are available at https://github.com/hexuandeng/CoCoReviewBench.