MAB-DQA: Addressing Query Aspect Importance in Document Question Answering with Multi-Armed Bandits
Yixin Xiang, Yunshan Ma, Xiaoyu Du, Yibing Chen, Yanxin Zhang, et al.
TLDR
MAB-DQA uses Multi-Armed Bandits to dynamically prioritize query aspects in multimodal DQA, improving retrieval and answer generation by focusing on high-value pages.
Key contributions
- MAB-DQA models varying importance of implicit query aspects in Document QA.
- Decomposes each query into aspect-aware subqueries and retrieves an aspect-specific candidate page set for each.
- Uses Multi-Armed Bandits with preliminary reasoning rewards to estimate aspect utility.
- Dynamically reallocates retrieval budgets to high-value aspects, enhancing page selection.
Why it matters
Multimodal DQA often overlooks crucial information by retrieving only a few visually salient pages. MAB-DQA dynamically prioritizes query aspects and reallocates retrieval budgets, significantly enhancing document understanding.
Original Abstract
Document Question Answering (DQA) involves generating answers from a document based on a user's query, representing a key task in document understanding. This task requires interpreting visual layouts, which has prompted recent studies to adopt multimodal Retrieval-Augmented Generation (RAG) that processes page images for answer generation. However, in multimodal RAG, visual DQA struggles to utilize a large number of images effectively, as the retrieval stage often retains only a few candidate pages (e.g., Top-4), causing informative but less visually salient content to be overlooked in favor of common yet low-information pages. To address this issue, we propose a Multi-Armed Bandit-based DQA framework (MAB-DQA) to explicitly model the varying importance of multiple implicit aspects in a query. Specifically, MAB-DQA decomposes a query into aspect-aware subqueries and retrieves an aspect-specific candidate set for each. It treats each subquery as an arm and uses preliminary reasoning results from a small number of representative pages as reward signals to estimate aspect utility. Guided by an exploration-exploitation policy, MAB-DQA dynamically reallocates retrieval budgets toward high-value aspects. With the most informative pages and their correlations, MAB-DQA generates the expected results. On four benchmarks, MAB-DQA shows an average improvement of 5%-18% over the state-of-the-art method, consistently enhancing document understanding. Code at https://github.com/ElephantOH/MAB-DQA.
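The abstract's core mechanism, treating each aspect-aware subquery as a bandit arm and steering retrieval budget toward arms with high preliminary-reasoning rewards, can be illustrated with a minimal UCB1-style sketch. This is not the paper's implementation: the `reward_fn` (standing in for preliminary reasoning over a few representative pages), the exploration constant, and the one-unit-of-budget-per-round framing are all assumptions for illustration.

```python
import math

def ucb1_budget_allocation(aspects, reward_fn, total_budget, c=1.4):
    """Sketch: allocate retrieval budget across query aspects with UCB1.

    aspects      -- list of aspect-aware subqueries (each treated as an arm)
    reward_fn    -- hypothetical callable; returns a reward in [0, 1] from a
                    preliminary reasoning pass over an aspect's candidate pages
    total_budget -- total number of retrieval rounds to distribute
    c            -- exploration weight (assumed value, not from the paper)
    """
    n = len(aspects)
    counts = [0] * n      # times each arm has been pulled
    values = [0.0] * n    # running mean reward per arm
    allocation = [0] * n  # retrieval budget granted to each aspect

    for t in range(1, total_budget + 1):
        if t <= n:
            # Initialization: pull every arm once before applying UCB.
            arm = t - 1
        else:
            # Exploration-exploitation: mean reward plus confidence bonus.
            arm = max(
                range(n),
                key=lambda i: values[i] + c * math.sqrt(math.log(t) / counts[i]),
            )
        reward = reward_fn(aspects[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        allocation[arm] += 1

    return allocation
```

Under this policy, aspects whose candidate pages yield consistently higher preliminary rewards accumulate most of the budget, while low-reward aspects are still probed occasionally via the confidence bonus, matching the exploration-exploitation behavior the abstract describes.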