CodeMMR: Bridging Natural Language, Code, and Image for Unified Retrieval
Jiahui Geng, Qing Li, Fengyu Cai, Fakhri Karray
TLDR
CodeMMR is a unified model for multimodal code retrieval, bridging natural language, code, and images to improve code search and RAG.
Key contributions
- Introduces MMCoIR, the first comprehensive benchmark for multimodal code information retrieval.
- Proposes CodeMMR, a unified model embedding natural language, code, and images for retrieval.
- CodeMMR outperforms strong baselines (e.g., UniIR, GME, VLM2Vec) by an average of 10 points on nDCG@10 across modalities and languages.
- Integrates CodeMMR into RAG to enhance code generation fidelity and visual grounding.
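The headline result above is reported in nDCG@10, the standard ranking metric for retrieval benchmarks. As a refresher, here is a minimal sketch of how nDCG@k is computed from a list of per-rank relevance labels (this is the generic textbook definition, not code from the paper):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: relevance at rank i is discounted by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Binary relevance: the single correct code snippet retrieved at rank 3.
print(round(ndcg_at_k([0, 0, 1, 0, 0]), 3))  # → 0.5, i.e. 1/log2(4)
```

Because the discount is logarithmic, placing the correct item a few ranks lower costs less than dropping it entirely, which is why a 10-point average gain on this metric reflects a substantial ranking improvement.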
Why it matters
Existing code retrieval models are largely text-centric, overlooking visual artifacts such as web interfaces, diagrams, and data visualizations. This paper bridges that gap with MMCoIR, a new benchmark, and CodeMMR, a unified model that markedly improves multimodal code search and RAG, advancing intelligent programming systems through more comprehensive code understanding.
Original Abstract
Code search, framed as information retrieval (IR), underpins modern software engineering and increasingly powers retrieval-augmented generation (RAG), improving code discovery, reuse, and the reliability of LLM-based coding. Yet existing code IR models remain largely text-centric and often overlook the visual and structural aspects inherent in programming artifacts such as web interfaces, data visualizations, SVGs, schematic diagrams, and UML. To bridge this gap, we introduce MMCoIR, the first comprehensive benchmark for evaluating multimodal code IR across five visual domains, eight programming languages, and eleven libraries, and demonstrate the difficulty of the task through extensive evaluation. We then propose CodeMMR, a unified retrieval model that jointly embeds natural language, code, and images into a shared semantic space through instruction-based multimodal alignment. CodeMMR achieves strong generalization across modalities and languages, outperforming competitive baselines (e.g., UniIR, GME, VLM2Vec) by an average of 10 points on nDCG@10. Moreover, integrating CodeMMR into RAG enhances code generation fidelity and visual grounding on unseen code generation tasks, underscoring the potential of multimodal retrieval as a core enabler for next-generation intelligent programming systems. Datasets are available at HuggingFace.
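The key idea in the abstract — embedding text, code, and images into one shared semantic space — means cross-modal retrieval reduces to nearest-neighbor ranking by vector similarity. The sketch below illustrates that retrieval step only; the encoder outputs, document names, and 3-d vectors are illustrative placeholders, not the CodeMMR API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_emb, corpus, k=2):
    # corpus: (doc_id, embedding) pairs — candidates may come from any
    # modality, since they all live in the same shared space.
    ranked = sorted(corpus, key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-d embeddings standing in for encoder outputs of an SVG image,
# a code snippet, and a UML diagram.
corpus = [("svg_icon", [0.9, 0.1, 0.0]),
          ("sort_fn",  [0.1, 0.9, 0.1]),
          ("uml_diag", [0.0, 0.2, 0.9])]
print(retrieve([0.1, 1.0, 0.0], corpus))
```

In practice the instruction-based alignment the paper describes is what makes these similarities meaningful across modalities; the ranking machinery itself stays this simple.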