ARHN: Answer-Centric Relabeling of Hard Negatives with Open-Source LLMs for Dense Retrieval
Hyewon Choi, Jooyoung Choi, Hansol Jang, Hyun Kim, Chulmin Yun + 2 more
TLDR
ARHN uses open-source LLMs to relabel and filter hard negatives in dense retrieval training, providing cleaner supervision and improving effectiveness.
Key contributions
- Proposes ARHN, a two-stage framework using open-source LLMs to refine hard negative samples.
- Stage 1: for each query-passage pair, the LLM generates a passage-grounded answer snippet or indicates that the passage does not support an answer.
- Stage 2: the LLM performs listwise ranking of candidates by answerability; passages ranked above the original positive are relabeled as additional positives.
- Filters ambiguous negatives by excluding answer-containing passages from the negative set.
Why it matters
Label noise from hard negatives degrades neural retrieval. ARHN provides cleaner, answer-centric supervision, improving model effectiveness. Its reliance on open-source LLMs makes it a cost-effective and scalable solution for large-scale training.
Original Abstract
Neural retrievers are often trained on large-scale triplet data comprising a query, a positive passage, and a set of hard negatives. In practice, hard-negative mining can introduce false negatives and other ambiguous negatives, including passages that are relevant or contain partial answers to the query. Such label noise yields inconsistent supervision and can degrade retrieval effectiveness. We propose ARHN (Answer-centric Relabeling of Hard Negatives), a two-stage framework that leverages open-source LLMs to refine hard negative samples using answer-centric relevance signals. In the first stage, for each query-passage pair, ARHN prompts the LLM to generate a passage-grounded answer snippet or to indicate that the passage does not support an answer. In the second stage, ARHN applies an LLM-based listwise ranking over the candidate set to order passages by direct answerability to the query. Passages ranked above the original positive are relabeled as additional positives. Among passages ranked below the positive, ARHN excludes any that contain an answer snippet from the negative set to avoid ambiguous supervision. We evaluated ARHN on the BEIR benchmark under three configurations: relabeling only, filtering only, and their combination. Across datasets, the combined strategy consistently improves over either step in isolation, indicating that jointly relabeling false negatives and filtering ambiguous negatives yields cleaner supervision for training neural retrieval models. By relying strictly on open-source models, ARHN establishes a cost-effective and scalable refinement pipeline suitable for large-scale training.
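The relabel-and-filter decision rule described in the abstract can be sketched as follows. The `Triplet` container and `refine_triplet` name are hypothetical; the function assumes Stage 2 has already produced an answerability ranking and Stage 1 has mapped each passage to an answer snippet (or `None`).

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Triplet:
    query: str
    positives: List[str]
    negatives: List[str]

def refine_triplet(
    query: str,
    positive: str,
    ranked: List[str],                          # positive + hard negatives, ranked by answerability (Stage 2)
    answer_snippets: Dict[str, Optional[str]],  # Stage 1 output: passage -> snippet or None
) -> Triplet:
    """Sketch of ARHN's combined strategy: relabel false negatives
    ranked above the positive, then drop answer-bearing negatives."""
    pos_rank = ranked.index(positive)
    # Passages the LLM ranks above the original positive become additional positives.
    new_positives = [positive] + ranked[:pos_rank]
    # Of the passages ranked below, keep only those without an answer snippet.
    kept_negatives = [p for p in ranked[pos_rank + 1:] if answer_snippets.get(p) is None]
    return Triplet(query, new_positives, kept_negatives)
```

For example, with ranking `["n1", "pos", "n2", "n3"]` where `n1` and `n3` carry answer snippets, `n1` is promoted to a positive and `n3` is filtered out, leaving only `n2` as a clean negative.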