ArXiv TLDR

RAG over Thinking Traces Can Improve Reasoning Tasks

2605.03344

Negar Arabzadeh, Wenjie Ma, Sewon Min, Matei Zaharia

cs.IR cs.AI cs.CL

TLDR

This paper shows that using "thinking traces" as a retrieval corpus significantly enhances RAG performance on complex reasoning tasks like math and code.

Key contributions

  • Challenges the assumption that RAG offers limited benefit for reasoning tasks by identifying corpus choice as the key factor.
  • Proposes using "thinking traces" (intermediate problem-solving steps) as the retrieval corpus for RAG.
  • Introduces T3, an offline method to transform thinking traces into structured, retrieval-friendly representations.
  • Demonstrates significant performance improvements on reasoning benchmarks (AIME, LiveCodeBench) with low inference cost.
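The retrieve-then-generate pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's T3 method: the toy trace corpus, the bag-of-words cosine scoring, and the prompt format are all assumptions made here for concreteness (real systems would use a learned retriever and a much larger corpus of model-generated traces).

```python
import math
from collections import Counter

# Hypothetical mini-corpus of thinking traces (stand-ins for the paper's
# corpus of model-generated reasoning trajectories).
TRACE_CORPUS = [
    "To count lattice paths, set up a recurrence and sum binomial coefficients.",
    "For modular arithmetic problems, reduce the base modulo n before exponentiating.",
    "When a geometry problem mentions tangent circles, try the radical axis.",
]

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the top-k traces most similar to the query."""
    ranked = sorted(corpus, key=lambda t: cosine(bow(query), bow(t)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus, k=1):
    """Retrieve-then-generate: prepend retrieved traces to the question."""
    traces = retrieve(question, corpus, k)
    context = "\n".join(f"Prior reasoning: {t}" for t in traces)
    return f"{context}\n\nQuestion: {question}"

prompt = build_prompt(
    "Compute 7^100 modulo 13 using modular arithmetic.", TRACE_CORPUS
)
print(prompt)
```

The resulting prompt would then be passed to the generator model; the paper's key finding is that what goes into `TRACE_CORPUS` matters more than the retrieval machinery itself.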

Why it matters

This paper redefines RAG's role in reasoning, showing that using "thinking traces" as a corpus unlocks significant performance gains. It offers a practical, low-cost method to enhance LLM reasoning, suggesting new directions for improving complex problem-solving capabilities.

Original Abstract

Retrieval-augmented generation (RAG) has proven effective for knowledge-intensive tasks, but is widely believed to offer limited benefit for reasoning-intensive problems such as math and code generation. We challenge this assumption by showing that the limitation lies not in RAG itself, but in the choice of corpus. Instead of retrieving documents, we propose retrieving thinking traces, i.e., intermediate thinking trajectories generated during problem solving attempts. We show that thinking traces are already a strong retrieval source, and further introduce T3, an offline method that transforms them into structured, retrieval-friendly representations, to improve usability. Using these traces as a corpus, a simple retrieve-then-generate pipeline consistently improves reasoning performance across strong models and benchmarks such as AIME 2025–2026, LiveCodeBench, and GPQA-Diamond, outperforming both non-RAG baselines and retrieval over standard web corpora. For instance, on AIME, RAG with traces generated by Gemini-2-thinking achieves relative gains of +56.3%, +8.6%, and +7.6% for Gemini-2.5-Flash, GPT-OSS-120B, and GPT-5, respectively, even though these are more recent models. Interestingly, RAG on T3 also incurs little or no extra inference cost, and can even reduce inference cost by up to 15%. Overall, our results suggest that thinking traces are an effective retrieval corpus for reasoning tasks, and transforming them into structured, compact, or diagnostic representations unlocks even stronger gains. Code available at https://github.com/Narabzad/t3.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.