CoSearch: Joint Training of Reasoning and Document Ranking via Reinforcement Learning for Agentic Search
Hansi Zeng, Liam Collins, Bhuvesh Kumar, Neil Shah, Hamed Zamani
TLDR
CoSearch proposes a reinforcement learning framework that jointly trains reasoning agents and document rankers for agentic search, overcoming retrieval bottlenecks.
Key contributions
- Jointly trains reasoning agents and generative document rankers via Group Relative Policy Optimization (GRPO).
- Introduces a semantic grouping strategy for GRPO, clustering sub-queries by token-level similarity.
- Employs a composite reward combining ranking quality signals with trajectory-level outcome feedback.
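The semantic grouping idea can be sketched concretely: sub-queries from different reasoning trajectories are clustered by token-level similarity so that GRPO can compare rewards within comparable groups without extra rollouts. The paper's exact similarity function is not specified here; Jaccard overlap over lowercased token sets and a greedy single-pass assignment are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact algorithm): cluster
# sub-queries by token-level similarity to form GRPO optimization groups.

def token_jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two sub-queries."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def group_subqueries(subqueries: list[str], threshold: float = 0.5) -> list[list[int]]:
    """Greedy single-pass clustering: each sub-query joins the first
    existing group whose representative clears the similarity threshold,
    otherwise it starts a new group."""
    groups: list[list[int]] = []
    for i, query in enumerate(subqueries):
        for group in groups:
            if token_jaccard(subqueries[group[0]], query) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

A greedy pass keeps grouping cheap relative to the RL rollouts themselves, which matches the stated goal of forming valid groups "without additional rollouts."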
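The composite reward and GRPO's group-relative baseline can also be sketched. The actual reward weights and the ranking-quality metric are not given in this summary; a weighted sum of a per-step ranking signal and a trajectory-level outcome signal (e.g., answer F1), with advantages normalized by group statistics, is an illustrative assumption.

```python
# Illustrative sketch (assumed weighting, not the paper's exact reward):
# blend immediate ranking feedback with long-term outcome feedback, then
# compute GRPO-style group-relative advantages without a learned critic.
from statistics import mean, pstdev

def composite_reward(ranking_quality: float, trajectory_outcome: float,
                     alpha: float = 0.5) -> float:
    """Weighted sum of a ranking-quality signal and a trajectory-level
    outcome signal (alpha is a hypothetical mixing weight)."""
    return alpha * ranking_quality + (1 - alpha) * trajectory_outcome

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize each reward by its group's mean and standard deviation,
    the core of GRPO's critic-free advantage estimate."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because advantages are computed relative to the group, the semantic grouping step above is what makes them meaningful for a ranker whose inputs differ across trajectories.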
Why it matters
Existing agentic search systems are bottlenecked by fixed retrieval components. CoSearch addresses this by demonstrating that jointly optimizing both reasoning and retrieval is feasible and highly effective. This work paves the way for more performant and adaptable future search agents.
Original Abstract
Agentic search -- the task of training agents that iteratively reason, issue queries, and synthesize retrieved information to answer complex questions -- has achieved remarkable progress through reinforcement learning (RL). However, existing approaches, such as Search-R1, treat the retrieval system as a fixed tool, optimizing only the reasoning agent while the retrieval component remains unchanged. A preliminary experiment reveals that the gap between an oracle and a fixed retrieval system reaches up to +26.8% relative F1 improvement across seven QA benchmarks, suggesting that the retrieval system is a key bottleneck in scaling agentic search performance. Motivated by this finding, we propose CoSearch, a framework that jointly trains a multi-step reasoning agent and a generative document ranking model via Group Relative Policy Optimization (GRPO). To enable effective GRPO training for the ranker -- whose inputs vary across reasoning trajectories -- we introduce a semantic grouping strategy that clusters sub-queries by token-level similarity, forming valid optimization groups without additional rollouts. We further design a composite reward combining ranking quality signals with trajectory-level outcome feedback, providing the ranker with both immediate and long-term learning signals. Experiments on seven single-hop and multi-hop QA benchmarks demonstrate consistent improvements over strong baselines, with ablation studies validating each design choice. Our results show that joint training of the reasoning agent and retrieval system is both feasible and strongly performant, pointing to a key ingredient for future search agents.