ArXiv TLDR

Cycle-Consistent Search: Question Reconstructability as a Proxy Reward for Search Agent Training

arXiv:2604.12967

Sohyun An, Shuibenyang Yuan, Hayeon Lee, Cho-Jui Hsieh, Alexander Min

cs.AI

TLDR

Cycle-Consistent Search (CCS) trains search agents without gold supervision by using question reconstructability from search trajectories as a proxy reward.

Key contributions

  • Proposes Cycle-Consistent Search (CCS), a gold-supervision-free framework for training search agents.
  • Leverages question reconstructability from search trajectories as a novel proxy reward for RL.
  • Mitigates information leakage with information bottlenecks, such as NER masking of search queries and exclusion of the final response, yielding more robust reward signals.
  • Achieves performance comparable to supervised baselines on question-answering benchmarks.
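To make the reward idea concrete, here is a minimal sketch of how a cycle-consistency proxy reward could be computed. This is not the paper's implementation: the entity list, the `[ENT]` mask token, and the token-level F1 scoring are all illustrative assumptions; the paper only specifies that search queries are NER-masked and that reconstruction quality of the original question induces the reward.

```python
def mask_entities(query: str, entities: list[str]) -> str:
    # Hypothetical NER masking bottleneck: replace each detected entity
    # span with a placeholder so reconstruction cannot rely on lexical
    # leakage from the query itself.
    for ent in entities:
        query = query.replace(ent, "[ENT]")
    return query

def reconstruction_reward(original: str, reconstructed: str) -> float:
    # Token-level F1 between the original question and the question
    # reconstructed from the (masked) search trajectory, used as the
    # cycle-consistency proxy reward for policy optimization.
    orig_toks = original.lower().split()
    recon_toks = reconstructed.lower().split()
    remaining = list(recon_toks)
    common = 0
    for tok in orig_toks:
        if tok in remaining:
            remaining.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(recon_toks)
    recall = common / len(orig_toks)
    return 2 * precision * recall / (precision + recall)
```

In a full RL loop, the agent's trajectory (masked queries plus retrieved observations, with the final response excluded) would be fed to a reconstructor model, and `reconstruction_reward` would score its output against the original question.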

Why it matters

Training effective search agents with reinforcement learning often requires costly gold supervision. This paper introduces Cycle-Consistent Search (CCS), a novel framework that eliminates this dependency by using question reconstructability as an unsupervised reward. This scalable approach opens new avenues for deploying powerful search agents in data-scarce environments.

Original Abstract

Reinforcement Learning (RL) has shown strong potential for optimizing search agents in complex information retrieval tasks. However, existing approaches predominantly rely on gold supervision, such as ground-truth answers, which is difficult to scale. To address this limitation, we propose Cycle-Consistent Search (CCS), a gold-supervision-free framework for training search agents, inspired by cycle-consistency techniques from unsupervised machine translation and image-to-image translation. Our key hypothesis is that an optimal search trajectory, unlike insufficient or irrelevant ones, serves as a lossless encoding of the question's intent. Consequently, a high-quality trajectory should preserve the information required to accurately reconstruct the original question, thereby inducing a reward signal for policy optimization. However, naive cycle-consistency objectives are vulnerable to information leakage, as reconstruction may rely on superficial lexical cues rather than the underlying search process. To reduce this effect, we apply information bottlenecks, including exclusion of the final response and named entity recognition (NER) masking of search queries. These constraints force reconstruction to rely on retrieved observations together with the structural scaffold, ensuring that the resulting reward signal reflects informational adequacy rather than linguistic redundancy. Experiments on question-answering benchmarks show that CCS achieves performance comparable to supervised baselines while outperforming prior methods that do not rely on gold supervision. These results suggest that CCS provides a scalable training paradigm for training search agents in settings where gold supervision is unavailable.
