A Reproducibility Study of LLM-Based Query Reformulation
Amin Bigdeli, Radin Hamidi Rad, Hai Son Le, Mert Incesu, Negar Arabzadeh, et al.
TLDR
This study systematically investigates the reproducibility of LLM-based query reformulation, revealing that reported gains depend strongly on the retrieval paradigm.
Key contributions
- Systematically studies the reproducibility of ten representative LLM-based query reformulation methods under a unified, strictly controlled experimental framework.
- Reformulation gains are strongly conditioned on the retrieval paradigm.
- Improvements observed under lexical retrieval do not consistently transfer to neural retrievers.
- Larger LLMs do not uniformly improve downstream performance.
Why it matters
This paper clarifies the stability and limits of reported gains in LLM-based query reformulation by providing a unified experimental framework. It addresses the challenge of heterogeneous prior work, offering crucial insights into when and why these methods succeed. The release of QueryGym further enables transparent replication and future research.
Original Abstract
Large Language Models (LLMs) are now widely used for query reformulation and expansion in Information Retrieval, with many studies reporting substantial effectiveness gains. However, these results are typically obtained under heterogeneous experimental conditions, making it difficult to assess which findings are reproducible and which depend on specific implementation choices. In this work, we present a systematic reproducibility and comparative study of ten representative LLM-based query reformulation methods under a unified and strictly controlled experimental framework. We evaluate methods across two architectural LLM families at two parameter scales, three retrieval paradigms (lexical, learned sparse, and dense), and nine benchmark datasets spanning TREC Deep Learning and BEIR. Our results show that reformulation gains are strongly conditioned on the retrieval paradigm, that improvements observed under lexical retrieval do not consistently transfer to neural retrievers, and that larger LLMs do not uniformly yield better downstream performance. These findings clarify the stability and limits of reported gains in prior work. To enable transparent replication and ongoing comparison, we release all prompts, configurations, evaluation scripts, and run files through QueryGym, an open-source reformulation toolkit with a public leaderboard (https://leaderboard.querygym.com).
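To make the studied technique concrete, below is a minimal Python sketch of one representative style of LLM-based query reformulation (pseudo-document expansion in the spirit of Query2Doc). The prompt wording, the `generate` stand-in, and the query-repetition heuristic are illustrative assumptions, not the paper's released prompts or QueryGym's actual API.

```python
# Minimal sketch of prompt-based query reformulation (pseudo-document
# expansion). Illustrative only: the prompt and `generate` stand-in are
# assumptions, not the paper's exact setup.

def reformulate(query: str, generate) -> str:
    """Expand `query` with an LLM-written pseudo-passage, then
    concatenate it with the original query for lexical retrieval."""
    prompt = (
        "Write a short passage that answers the following question.\n"
        f"Question: {query}\nPassage:"
    )
    pseudo_passage = generate(prompt)
    # Repeating the original query keeps its terms weighted highly
    # when the expanded query is scored by a lexical ranker like BM25.
    return f"{query} {query} {pseudo_passage}"

if __name__ == "__main__":
    # Stand-in for a real LLM call (hosted or local model).
    def fake_llm(prompt: str) -> str:
        return "BM25 is a bag-of-words ranking function based on term frequency."

    print(reformulate("what is bm25", fake_llm))
```

The paper's central finding is that the payoff of a step like this is paradigm-dependent: the expanded string may help a lexical ranker via added term overlap, yet yield no comparable gain for learned sparse or dense retrievers.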