ArXiv TLDR

AEL: Agent Evolving Learning for Open-Ended Environments

arXiv: 2604.21725

Wujiang Xu, Jiaojiao Han, Minghao Guo, Kai Mei, Xi Zhu + 2 more

cs.CL, cs.AI, cs.CE

TLDR

AEL is a two-timescale framework that lets LLM agents learn from past experience in open-ended environments; on a sequential portfolio benchmark it outperforms five published self-improving methods and all non-LLM baselines.

Key contributions

  • Introduces AEL, a two-timescale framework for LLM agents to learn from experience.
  • Fast timescale: a Thompson Sampling bandit selects the memory retrieval policy for each episode; slow timescale: LLM-driven reflection diagnoses failure patterns and injects causal insights into the decision prompt (see the sketch after this list).
  • Achieves a Sharpe ratio of 2.13 ± 0.47 on a sequential portfolio benchmark, outperforming five published self-improving methods and all non-LLM baselines.
  • A nine-variant ablation shows memory and reflection together yield a 58% cumulative improvement, while every additional mechanism tested degrades performance.
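The fast-timescale selector referenced above is a standard Bernoulli Thompson Sampling bandit over retrieval policies. The abstract names the technique but not its interface, so the following is a minimal sketch under assumed names: the `RetrievalBandit` class, the policy list, and the binary reward signal are illustrative, not taken from the AEL codebase.

```python
import random

class RetrievalBandit:
    """Minimal Bernoulli Thompson Sampling over memory retrieval policies.

    Illustrative sketch only: policy names and the binary reward signal
    are assumptions, not taken from the AEL implementation.
    """

    def __init__(self, policies):
        self.policies = policies
        # Beta(1, 1) prior (uniform) for each retrieval policy.
        self.alpha = {p: 1.0 for p in policies}
        self.beta = {p: 1.0 for p in policies}

    def select(self):
        # Sample a success probability per policy and pick the argmax.
        samples = {p: random.betavariate(self.alpha[p], self.beta[p])
                   for p in self.policies}
        return max(samples, key=samples.get)

    def update(self, policy, success):
        # Binary reward: did the episode outcome improve under this policy?
        if success:
            self.alpha[policy] += 1.0
        else:
            self.beta[policy] += 1.0


bandit = RetrievalBandit(["recency", "similarity", "outcome_weighted"])
policy = bandit.select()          # choose a retrieval policy for this episode
# ... run the episode with memories retrieved under `policy` ...
bandit.update(policy, success=True)
```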

Why it matters

AEL, a two-timescale framework, lets LLM agents learn from experience in open-ended environments. It outperforms more complex methods by focusing on self-diagnosing how to use memory, suggesting that effective use of experience, not architectural complexity, is the key bottleneck in agent self-improvement.

Original Abstract

LLM agents increasingly operate in open-ended environments spanning hundreds of sequential episodes, yet they remain largely stateless: each task is solved from scratch without converting past experience into better future behavior. The central obstacle is not *what* to remember but *how to use* what has been remembered, including which retrieval policy to apply, how to interpret prior outcomes, and when the current strategy itself must change. We introduce *Agent Evolving Learning* (AEL), a two-timescale framework that addresses this obstacle. At the fast timescale, a Thompson Sampling bandit learns which memory retrieval policy to apply at each episode; at the slow timescale, LLM-driven reflection diagnoses failure patterns and injects causal insights into the agent's decision prompt, giving it an interpretive frame for the evidence it retrieves. On a sequential portfolio benchmark (10 sector-diverse tickers, 208 episodes, 5 random seeds), AEL achieves a Sharpe ratio of 2.13 ± 0.47, outperforming five published self-improving methods and all non-LLM baselines while maintaining the lowest variance among all LLM-based approaches. A nine-variant ablation reveals a "less is more" pattern: memory and reflection together produce a 58% cumulative improvement over the stateless baseline, yet every additional mechanism we test (planner evolution, per-tool selection, cold-start initialization, skill extraction, and three credit assignment methods) *degrades* performance. This demonstrates that the bottleneck in agent self-improvement is *self-diagnosing how to use* experience rather than adding architectural complexity. Code and data: https://github.com/WujiangXu/AEL.
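Putting the two timescales together, the abstract describes a per-episode loop (bandit-chosen retrieval) wrapped by periodic reflection that rewrites part of the decision prompt. The control-flow sketch below reuses the RetrievalBandit sketch above; `run_episode`, `reflect`, and the `reflect_every` interval are hypothetical stand-ins, not the authors' code.

```python
import random  # only needed for the dummy usage below

def evolve_agent(run_episode, reflect, bandit, episodes=208, reflect_every=20):
    """Two-timescale control-flow sketch under assumed interfaces.

    `run_episode(policy, insights)` returns (reward, record) and
    `reflect(memory)` returns a text insight; both are caller-supplied
    placeholders, not AEL's actual API. `reflect_every` is likewise an
    assumed interval; the paper's schedule may differ.
    """
    memory, insights = [], []
    for t in range(episodes):
        # Fast timescale: the bandit picks which memory retrieval policy
        # to use this episode, then learns from the binary outcome.
        policy = bandit.select()
        reward, record = run_episode(policy, insights)
        bandit.update(policy, success=reward > 0)
        memory.append(record)

        # Slow timescale: periodic LLM-driven reflection distills failure
        # patterns into causal insights injected into future decision prompts.
        if (t + 1) % reflect_every == 0:
            insights.append(reflect(memory))
    return insights


# Dummy usage with stand-in components (no LLM), reusing the RetrievalBandit
# sketch above:
bandit = RetrievalBandit(["recency", "similarity", "outcome_weighted"])
insights = evolve_agent(
    run_episode=lambda policy, insights: (random.uniform(-1, 1), policy),
    reflect=lambda memory: "insight derived from recent failures",
    bandit=bandit,
)
```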
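The headline metric is the Sharpe ratio of the agent's returns over the 208-episode benchmark. For reference, a standard annualized computation is sketched below; the risk-free rate and annualization factor are generic defaults, since the paper's exact evaluation settings are not given in the abstract.

```python
import math

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Standard annualized Sharpe ratio over per-period returns.

    The risk-free rate and annualization factor are generic defaults,
    not the paper's exact evaluation settings.
    """
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((x - mean) ** 2 for x in excess) / (len(excess) - 1)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)


# Example: if the 208 episodes were weekly decisions, periods_per_year=52.
print(sharpe_ratio([0.01, -0.004, 0.008, 0.002], periods_per_year=52))
```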
