ArXiv TLDR

Information Retrieval

Papers on search engines, recommendation systems, and information extraction.

cs.IR · 379 papers

Bridging Behavior and Semantics for Time-aware Cross-Domain Sequential Recommendation

BST-CDSR improves cross-domain sequential recommendation by modeling time-aware behavioral and semantic preferences using ODEs and LLMs.

2605.02369 · May 4, 2026 · Zhida Qin, Zemu Liu, Haoyan Fu +4

Enhancing Judgment Document Generation via Agentic Legal Information Collection and Rubric-Guided Optimization

Judge-R1 enhances LLM-based judgment document generation through agentic legal information collection and rubric-guided optimization, improving accuracy.

2605.02011 · May 3, 2026 · Weihang Su, Xuanyi Chen, Yueyue Wu +2

CyberAId: AI-Driven Cybersecurity for Financial Service Providers

CyberAId proposes a hybrid multi-agent AI system for financial cybersecurity, integrating LLMs with SIEM/XDR to enhance reasoning and regulatory compliance.

2605.01892 · May 3, 2026 · George Fatouros, Georgios Makridis, John Soldatos +18

FEDIN: Frequency-Enhanced Deep Interest Network for Click-Through Rate Prediction

FEDIN uses frequency-domain analysis with target-aware filtering to improve click-through rate prediction by capturing periodic user interests.

2605.01726 · May 3, 2026 · Zenan Dai, Jinpeng Wang, Junwei Pan +3

A Hybrid Retrieval and Reranking Framework for Evidence-Grounded Retrieval-Augmented Generation

A hybrid RAG framework combines retrieval, reranking, and claim-level evaluation to achieve 100% grounding accuracy in biomedical Q&A.

2605.01664 · May 3, 2026 · Fariba Afrin Irany, Sampson Akwafuo

Led to Mislead: Adversarial Content Injection for Attacks on Neural Ranking Models

CRAFT is an LLM-powered black-box framework for adversarial attacks on Neural Ranking Models, outperforming baselines and showing transferability.

2605.01591 · May 2, 2026 · Amin Bigdeli, Amir Khosrojerdi, Radin Hamidi Rad +3

KG-First, LLM-Fallback: A Hybrid Microservice for Grounded Skill Search and Explanation

SkillGraph-Service unifies complex competency frameworks into a KG, using a KG-first, LLM-fallback approach for efficient skill search and explanation.

2605.01582 · May 2, 2026 · Ngoc Luyen Le, Marie-Hélène Abel, Bertrand Laforge

Post-hoc Provider Fairness Adaptation via Hierarchical Exposure Alignment

PFA introduces a post-hoc fairness adapter for frozen recommenders, enabling flexible provider exposure fairness without expensive model retraining.

2605.01524 · May 2, 2026 · Jingzhi Li, Zhiyong Cheng, Richang Hong +1

Interactive Multi-Turn Retrieval for Health Videos

This paper introduces interactive multi-turn retrieval for health videos, proposing a new corpus and a two-stage framework that improves retrieval performance.

2605.01409 · May 2, 2026 · Chengzheng Wu, Ke Qiu, Baoming Zhang +3

The Pre-Training Study of Expanded-SPLADE Models on Web Document Titles

This paper studies pre-training Expanded-SPLADE models for neural IR, finding general corpora and higher learning rates improve retrieval effectiveness.

2605.01407 · May 2, 2026 · Hiun Kim, Tae Kwan Lee, Taeryun Won

Verbal-R3: Verbal Reranker as the Missing Bridge between Retrieval and Reasoning

Verbal-R3 introduces a novel RAG framework using 'Verbal Annotations' and a Verbal Reranker to improve LLM reasoning and achieve SOTA on QA benchmarks.

2605.01399 · May 2, 2026 · Sangkwon Park, Donghun Kang, Jisoo Mok +1

Robust Multimodal Recommendation via Graph Retrieval-Enhanced Modality Completion

GRE-MC enhances multimodal recommendation by completing missing data using graph retrieval and a transformer for robust, context-aware feature reconstruction.

2605.00670 · May 1, 2026 · Yuan Li, Jun Hu, Jiaxin Jiang +2

A Replicability Study of XTR

This study replicates XTR, finding that its training improves efficient retrieval engines such as PLAID and WARP, despite no overall effectiveness gain over ColBERT.

2605.00646 · May 1, 2026 · Rohan Jha, Reno Kriz, Benjamin Van Durme

H-RAG at SemEval-2026 Task 8: Hierarchical Parent-Child Retrieval for Multi-Turn RAG Conversations

H-RAG introduces a hierarchical parent-child retrieval pipeline for multi-turn RAG conversations, improving both retrieval and generation.

2605.00631 · May 1, 2026 · Passant Elchafei, Hossam Emam, Mohamed Alansary +2

MUDY: Multi-Granular Dynamic Candidate Contextualization for Unsupervised Keyphrase Extraction

MUDY introduces a context-centric framework for unsupervised keyphrase extraction, outperforming state-of-the-art by capturing multi-granular contextual salience.

2605.00597 · May 1, 2026 · Hyeongu Kang, Susik Yoon

When More Reformulations Hurt: Avoiding Drift using Ranker Feedback

ReformIR is a budget-aware retrieval framework that uses a teacher reranker to adaptively select query reformulations and documents, improving recall while avoiding drift.

2605.00560 · May 1, 2026 · V Venktesh, Mandeep Rathee, Avishek Anand

Hierarchical Abstract Tree for Cross-Document Retrieval-Augmented Generation

Ψ-RAG introduces a hierarchical abstract tree and multi-granular agent for cross-document RAG, significantly outperforming prior methods on multi-hop QA.

2605.00529 · May 1, 2026 · Ziwen Zhao, Menglin Yang

LLM-Oriented Information Retrieval: A Denoising-First Perspective

This paper argues that denoising is the primary bottleneck in LLM-oriented information retrieval, and proposes a framework and techniques built around that denoising-first perspective.

2605.00505 · May 1, 2026 · Lu Dai, Liang Sun, Fanpu Cao +4

Time-Interval-Aware Disentangled Expert Modeling for Next-Basket Recommendation

TIDE is a novel next-basket recommendation model that disentangles user habits from exploration and incorporates time-interval awareness for improved predictions.

2605.00499 · May 1, 2026 · Zhiying Deng, Yuan Fu, Usman Farooq +3

FollowTable: A Benchmark for Instruction-Following Table Retrieval

FollowTable introduces a new benchmark and metric for Instruction-Following Table Retrieval (IFTR), revealing existing models struggle with fine-grained instructions.

2605.00400 · May 1, 2026 · Rihui Jin, Yuchen Lu, Ting Zhang +7

Page 6 of 19
