Recursive Agent Optimization
Apurva Gandhi, Satyaki Chakraborty, Xiangjun Wang, Aviral Kumar, Graham Neubig
TLDR
Recursive Agent Optimization (RAO) trains agents to recursively delegate sub-tasks to fresh instantiations of themselves, enabling them to scale to longer contexts and generalize to harder problems.
Key contributions
- Introduces Recursive Agent Optimization (RAO) for training agents that recursively delegate sub-tasks.
- Enables agents to scale to longer contexts and generalize to much harder problems via divide-and-conquer.
- Teaches agents optimal strategies for when and how to delegate and communicate effectively.
- Achieves better training efficiency, scalability, and generalization, along with reduced wall-clock time compared to single-agent systems.
Why it matters
This paper introduces a novel approach for training agents that can recursively break down and delegate tasks. This allows AI systems to tackle much more complex problems, scale beyond typical context window limits, and operate more efficiently than current single-agent methods.
Original Abstract
We introduce Recursive Agent Optimization (RAO), a reinforcement learning approach for training recursive agents: agents that can spawn and delegate sub-tasks to new instantiations of themselves recursively. Recursive agents implement an inference-time scaling algorithm that naturally allows agents to scale to longer contexts and generalize to more difficult problems via divide-and-conquer. RAO provides a method to train models to best take advantage of such recursive inference, teaching agents when and how to delegate and communicate. We find that recursive agents trained in this way enjoy better training efficiency, can scale to tasks that go beyond the model's context window, generalize to tasks much harder than the ones the agent was trained on, and can enjoy reduced wall-clock time compared to single-agent systems.
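The core recursive-agent idea from the abstract can be sketched with a toy example: an agent whose "context" fits only a few items delegates oversized tasks to new instantiations of itself and combines the sub-results. This is a minimal illustration of the divide-and-conquer pattern, not the paper's implementation; the `Agent` class, `CONTEXT_LIMIT`, and the list-summing task are all assumptions made for the sketch.

```python
# Hypothetical sketch of recursive sub-task delegation (divide-and-conquer).
# Agent, CONTEXT_LIMIT, and the summing task are illustrative assumptions,
# not RAO's actual training setup.

CONTEXT_LIMIT = 4  # max items a single agent can "fit in context"

class Agent:
    def solve(self, items):
        # Small enough: handle the task directly within this agent's context.
        if len(items) <= CONTEXT_LIMIT:
            return sum(items)
        # Too large: delegate each half to a fresh instantiation of the agent,
        # then combine the sub-results (the "communication" step).
        mid = len(items) // 2
        left = Agent().solve(items[:mid])
        right = Agent().solve(items[mid:])
        return left + right

print(Agent().solve(list(range(100))))  # input far larger than one agent's context limit
```

In this sketch, the recursion depth grows only logarithmically with input size, which is why such delegation can handle tasks well beyond any single agent's context window; RAO's contribution is training the model to decide when and how to delegate rather than hard-coding the split.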