Context Convergence Improves Answering Inferential Questions
Jamshid Mozafari, Bhawna Piryani, Adam Jatowt
TLDR
This paper shows that constructing passages with high "context convergence" significantly improves LLM accuracy on inferential question answering tasks.
Key contributions
- Investigates LLM performance on inferential QA based on passage structure and quality.
- Proposes "convergence" as a metric to select sentences for passages, improving inferential reasoning.
- Passages built from higher-convergence sentences significantly boost LLM accuracy compared to cosine-similarity selection.
- Ordering sentences by descending convergence further enhances LLM performance.
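The selection-and-ordering idea above can be sketched in a few lines. The paper does not publish its exact scoring code, so the sketch below assumes a simple operationalization: a sentence's convergence is the fraction of incorrect candidate answers it rules out, where `eliminates` is a hypothetical caller-supplied predicate (in practice this could be an LLM or entailment-model judgment). Both function names and the top-k cutoff are illustrative, not the authors' implementation.

```python
from typing import Callable, List


def convergence(hint: str, wrong_candidates: List[str],
                eliminates: Callable[[str, str], bool]) -> float:
    """Fraction of incorrect candidate answers that the hint rules out.

    `eliminates(hint, candidate)` is a hypothetical predicate supplied by
    the caller; it returns True if the hint is incompatible with the
    candidate answer.
    """
    if not wrong_candidates:
        return 0.0
    ruled_out = sum(1 for c in wrong_candidates if eliminates(hint, c))
    return ruled_out / len(wrong_candidates)


def build_passage(sentences: List[str], wrong_candidates: List[str],
                  eliminates: Callable[[str, str], bool], k: int = 5) -> str:
    """Keep the top-k sentences by convergence, ordered descending,
    mirroring the paper's finding that information-rich cues help most
    when they appear early in the passage."""
    ranked = sorted(
        sentences,
        key=lambda s: convergence(s, wrong_candidates, eliminates),
        reverse=True,
    )
    return " ".join(ranked[:k])
```

As a toy illustration, with `eliminates` defined as simple substring matching, a sentence that explicitly rules out two of three wrong candidates scores 2/3 and is placed before a sentence that rules out none.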
Why it matters
This research offers a practical method, "context convergence," to improve how LLMs handle complex inferential questions. It provides a valuable signal for constructing more effective passages, enhancing LLM reasoning capabilities beyond simple retrieval.
Original Abstract
While Large Language Models (LLMs) are widely used in open-domain Question Answering (QA), their ability to handle inferential questions, where answers must be derived rather than directly retrieved, remains underexplored. This study investigates how the structure and quality of passages influence LLM performance on such questions. We focus on convergence, a measure of how effectively sentences (hints) eliminate incorrect answers, as a criterion for constructing passages. Using subsets of the TriviaHG dataset, we form passages by combining sentences with varying convergence levels and evaluate six LLMs of different sizes and architectures. Our results show that passages built from higher-convergence sentences lead to substantially better answer accuracy than those selected by cosine similarity, indicating that convergence captures meaningful relevance for inferential reasoning. Additionally, ordering sentences by descending convergence slightly improves performance, suggesting that LLMs tend to prioritize earlier, information-rich cues. These findings highlight convergence as a practical signal for guiding passage construction and analyzing inferential reasoning behavior in LLMs.