ASTRA-QA: A Benchmark for Abstract Question Answering over Documents
Shu Wang, Shansong Zhou, Xinyang Wang, Shiwei Wang, Hulong Wu, et al.
TLDR
ASTRA-QA is a new benchmark for abstract question answering over documents, providing reference-grounded evaluation of topic coverage, hallucination, and retrieval-scope robustness.
Key contributions
- Introduces ASTRA-QA, a benchmark for abstract QA over documents, addressing limitations of existing benchmarks.
- Contains 869 QA instances over academic papers and news documents, covering five abstract question types and three controlled retrieval scopes.
- Features explicit evaluation annotations (topic sets, unsupported topics, evidence) for scalable, direct scoring.
- Enables robust diagnostics for coverage, hallucination, and retrieval-scope robustness in RAG methods.
Why it matters
ASTRA-QA addresses a critical gap in evaluating abstract question answering, a capability central to advanced RAG systems. By scoring topic coverage and curated unsupported content directly, it offers a more robust and scalable way to assess model performance, particularly with respect to hallucination and information synthesis, and should drive progress toward more reliable and accurate document-based QA models.
Original Abstract
Document-based question answering (QA) increasingly includes abstract questions that require synthesizing scattered information from long documents or across multiple documents into coherent answers. However, this setting is still poorly supported by existing benchmarks and evaluation methods, which often lack stable abstract references or rely on coarse similarity metrics and unstable head-to-head comparisons. To alleviate this issue, we introduce ASTRA-QA, a benchmark for AbSTRAct Question Answering over documents. ASTRA-QA contains 869 QA instances over academic papers and news documents, covering five abstract question types and three controlled retrieval scopes. Each instance is equipped with explicit evaluation annotations, including answer topic sets, curated unsupported topics, and aligned evidence. Building on these annotations, ASTRA-QA assesses whether answers cover required key points and avoid unsupported content by directly scoring topic coverage and curated unsupported content, enabling scalable evaluation without exhaustive head-to-head comparisons. Experiments with representative Retrieval-Augmented Generation (RAG) methods spanning vanilla, graph-based, and hierarchical retrieval settings show that ASTRA-QA provides reference-grounded diagnostics for coverage, hallucination, and retrieval-scope robustness. Our dataset and code are available at https://xinyangsally.github.io/astra-benchmark.
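The abstract describes scoring answers directly against annotated topic sets and curated unsupported topics rather than via head-to-head comparisons. As a rough illustration only (not the authors' released evaluation code), the sketch below shows one plausible way such annotations could yield a coverage score and an unsupported-content rate; the field names `required_topics`, `unsupported_topics`, and `mentioned_topics` are hypothetical.

```python
# Illustrative sketch, not the ASTRA-QA implementation: score one answer
# against an instance's evaluation annotations. Assumes some upstream step
# (e.g. an LLM judge or string matching) has already decided which annotated
# topics the answer mentions.

def score_answer(required_topics, unsupported_topics, mentioned_topics):
    required = set(required_topics)
    unsupported = set(unsupported_topics)
    mentioned = set(mentioned_topics)

    # Coverage: fraction of the annotated answer-topic set that the answer covers.
    coverage = len(required & mentioned) / len(required) if required else 1.0

    # Unsupported-content rate: fraction of curated unsupported topics the
    # answer nonetheless asserts (a hallucination-style signal).
    unsupported_rate = (
        len(unsupported & mentioned) / len(unsupported) if unsupported else 0.0
    )

    return {"coverage": coverage, "unsupported_rate": unsupported_rate}


if __name__ == "__main__":
    # Hypothetical example instance.
    print(score_answer(
        required_topics=["method overview", "key results", "limitations"],
        unsupported_topics=["unreported ablation"],
        mentioned_topics=["method overview", "key results", "unreported ablation"],
    ))
```

Because each instance carries its own topic annotations, per-answer scores like these can be aggregated across the benchmark without pairwise model comparisons, which is what makes the evaluation scalable.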