MEG-RAG: Quantifying Multi-modal Evidence Grounding for Evidence Selection in RAG
Xihang Wang, Zihan Wang, Chengkai Huang, Quan Z. Sheng, Lina Yao
TLDR
MEG-RAG introduces a semantic-aware metric and reranking framework to improve multimodal evidence grounding in RAG systems, enhancing generation accuracy.
Key contributions
- Proposes Multi-modal Evidence Grounding (MEG) to quantify semantic contribution of retrieved evidence.
- MEG uses Semantic Certainty Anchoring, focusing on high-IDF information-bearing tokens to capture the semantic core of the answer.
- Introduces MEG-RAG, a framework that trains a multimodal reranker to align retrieved evidence with the semantic anchors of the ground truth.
- Improves accuracy and multimodal consistency in generated outputs by prioritizing high-value content.
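The paper does not spell out MEG's exact formulation here, but the core idea — anchor on high-IDF tokens of the answer and score evidence by how well it covers them — can be sketched as follows. This is a minimal, assumption-laden illustration (all function names and the token-level overlap scoring are hypothetical, not the authors' implementation):

```python
import math
from collections import Counter

def idf_weights(corpus):
    """Compute inverse document frequency for every token in a tokenized corpus."""
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # count each token once per document
    return {tok: math.log(n_docs / df[tok]) for tok in df}

def semantic_anchors(answer_tokens, idf, top_k=3):
    """Keep the top-k highest-IDF (most information-bearing) answer tokens."""
    ranked = sorted(answer_tokens, key=lambda t: idf.get(t, 0.0), reverse=True)
    return set(ranked[:top_k])

def grounding_score(evidence_tokens, anchors, idf):
    """IDF-weighted fraction of semantic anchors covered by the evidence."""
    total = sum(idf.get(t, 0.0) for t in anchors)
    if total == 0.0:
        return 0.0
    covered = sum(idf.get(t, 0.0) for t in anchors if t in evidence_tokens)
    return covered / total
```

A reranker trained on such scores would prefer evidence that covers the rare, content-bearing anchor tokens over evidence that merely overlaps on common words — the contrast the paper draws against position-based confidence heuristics.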
Why it matters
Multimodal RAG systems often struggle with selecting truly relevant evidence, leading to hallucinations. This paper introduces a novel metric and framework, MEG-RAG, that semantically grounds evidence, significantly improving the accuracy and consistency of MLLM outputs. This advancement is crucial for building more reliable and trustworthy multimodal AI.
Original Abstract
Multimodal Retrieval-Augmented Generation (MRAG) addresses key limitations of Multimodal Large Language Models (MLLMs), such as hallucination and outdated knowledge. However, current MRAG systems struggle to distinguish whether retrieved multimodal data truly supports the semantic core of an answer or merely provides superficial relevance. Existing metrics often rely on heuristic position-based confidence, which fails to capture the informational density of multimodal entities. To address this, we propose Multi-modal Evidence Grounding (MEG), a semantic-aware metric that quantifies the contribution of retrieved evidence. Unlike standard confidence measures, MEG utilizes Semantic Certainty Anchoring, focusing on high-IDF information-bearing tokens that better capture the semantic core of the answer. Building on MEG, we introduce MEG-RAG, a framework that trains a multimodal reranker to align retrieved evidence with the semantic anchors of the ground truth. By prioritizing high-value content based on semantic grounding rather than token probability distributions, MEG-RAG improves the accuracy and multimodal consistency of generated outputs. Extensive experiments on the M$^2$RAG benchmark show that MEG-RAG consistently outperforms strong baselines and demonstrates robust generalization across different teacher models.