Thought-Retriever: Don't Just Retrieve Raw Data, Retrieve Thoughts for Memory-Augmented Agentic Systems
Tao Feng, Pengrui Han, Guanyu Lin, Ge Liu, Jiaxuan You
TLDR
Thought-Retriever enables LLMs to overcome context limits by retrieving and organizing past intermediate responses ('thoughts') for self-evolving long-term memory.
Key contributions
- Introduces Thought-Retriever, a model-agnostic algorithm that lets LLMs retrieve their past intermediate responses ('thoughts').
- Develops a self-evolving long-term memory for LLM agents by organizing and filtering these thoughts.
- Proposes AcademicEval, a novel benchmark for evaluating LLMs on ultra-long academic paper contexts.
- Outperforms state-of-the-art baselines with average gains of at least 7.6% in F1 score and 16% in win rate across tasks.
Why it matters
The paper tackles a fundamental limitation of LLMs: context length restricts how much external knowledge they can use effectively. By introducing a self-evolving memory built from retrieved past 'thoughts,' it enables LLM agents to grow more capable and adaptive over time, paving the way for more robust and intelligent AI systems.
Original Abstract
Large language models (LLMs) have transformed AI research thanks to their powerful internal capabilities and knowledge. However, existing LLMs still fail to effectively incorporate the massive external knowledge when interacting with the world. Although retrieval-augmented LLMs are proposed to mitigate the issue, they are still fundamentally constrained by the context length of LLMs, as they can only retrieve top-K raw data chunks from the external knowledge base which often consists of millions of data chunks. Here we propose Thought-Retriever, a novel model-agnostic algorithm that helps LLMs generate output conditioned on arbitrarily long external data, without being constrained by the context length or number of retrieved data chunks. Our key insight is to let an LLM fully leverage its intermediate responses generated when solving past user queries (thoughts), filtering meaningless and redundant thoughts, organizing them in thought memory, and retrieving the relevant thoughts when addressing new queries. This effectively equips LLM-based agents with a self-evolving long-term memory that grows more capable through continuous interaction. Besides algorithmic innovation, we further meticulously prepare a novel benchmark, AcademicEval, which requires an LLM to faithfully leverage ultra-long context to answer queries based on real-world academic papers. Extensive experiments on AcademicEval and two other public datasets validate that Thought-Retriever remarkably outperforms state-of-the-art baselines, achieving an average increase of at least 7.6% in F1 score and 16% in win rate across various tasks. More importantly, we further demonstrate two exciting findings: (1) Thought-Retriever can indeed help LLM self-evolve after solving more user queries; (2) Thought-Retriever learns to leverage deeper thoughts to answer more abstract user queries.
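The loop the abstract describes — store intermediate responses, filter redundant ones, retrieve the relevant ones for a new query — can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`ThoughtMemory`, `jaccard`, the 0.8 threshold) are hypothetical, and the toy token-overlap similarity stands in for whatever embedding-based scoring the actual system uses.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity; a stand-in for embedding similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

class ThoughtMemory:
    """Illustrative thought memory: filter redundant thoughts, retrieve relevant ones."""

    def __init__(self, redundancy_threshold: float = 0.8):
        self.thoughts: list[str] = []
        self.redundancy_threshold = redundancy_threshold

    def add(self, thought: str) -> bool:
        """Store a new thought unless it is a near-duplicate of an existing one."""
        if any(jaccard(thought, t) >= self.redundancy_threshold for t in self.thoughts):
            return False  # filtered out as redundant
        self.thoughts.append(thought)
        return True

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored thoughts most similar to the query."""
        return sorted(self.thoughts, key=lambda t: jaccard(query, t), reverse=True)[:k]
```

Because memory holds distilled thoughts rather than raw data chunks, retrieval is not tied to the original corpus size — which is the property that lets the agent keep improving as it answers more queries.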