ArXiv TLDR

Recursive Multi-Agent Systems

arXiv: 2604.25917

Xiyuan Yang, Jiaru Zou, Rui Pan, Ruizhong Qiu, Pan Lu + 7 more

cs.AI · cs.CL · cs.LG

TLDR

RecursiveMAS scales multi-agent collaboration by casting the system as a unified latent-space recursive computation, improving performance and efficiency.

Key contributions

  • Introduces RecursiveMAS, a framework for scaling multi-agent collaboration via latent-space recursion.
  • Connects heterogeneous agents using RecursiveLink for latent thought generation and state transfer.
  • Develops an inner-outer loop learning algorithm for whole-system co-optimization and credit assignment.
  • Achieves an average accuracy improvement of 8.3%, a 1.2–2.4× end-to-end inference speedup, and a 34.6–75.6% reduction in token usage.
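To make the collaboration-loop idea concrete, here is a minimal toy sketch of agents exchanging latent states rather than text, with a lightweight adapter bridging them between hops. All names (`make_agent`, `recursive_link`, `run_recursive_mas`) and every detail of the computation are illustrative assumptions, not the paper's actual API or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # latent dimension (assumed for the sketch)

def make_agent(seed):
    """An 'agent' here is just a fixed random linear map plus tanh,
    standing in for one heterogeneous model's latent computation."""
    w = np.random.default_rng(seed).normal(scale=0.3, size=(DIM, DIM))
    return lambda h: np.tanh(h @ w)

def recursive_link(h_src, proj):
    """A lightweight linear adapter transferring a latent state between
    agents (a stand-in for the paper's RecursiveLink module)."""
    return h_src @ proj

def run_recursive_mas(agents, projs, h0, rounds=3):
    """One collaboration loop: each recursion round passes the latent
    state through every agent, bridging with the adapter in between,
    instead of serializing intermediate thoughts into text."""
    h = h0
    for _ in range(rounds):
        for agent, proj in zip(agents, projs):
            h = agent(recursive_link(h, proj))
    return h

agents = [make_agent(s) for s in range(3)]
projs = [np.eye(DIM) for _ in agents]  # identity adapters for the toy
h_final = run_recursive_mas(agents, projs, rng.normal(size=DIM))
print(h_final.shape)  # (16,)
```

The point of the sketch is the data flow: the state that circulates between agents stays in latent space for all recursion rounds, which is where the claimed speed and token savings would come from.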

Why it matters

This paper introduces a novel way to scale multi-agent systems by applying recursion to collaboration itself. RecursiveMAS significantly boosts performance and efficiency across diverse tasks, offering a more effective and resource-friendly approach to complex AI problem-solving. It represents a new direction for deepening multi-agent reasoning.

Original Abstract

Recursive or looped language models have recently emerged as a new scaling axis by iteratively refining the same model computation over latent states to deepen reasoning. We extend this scaling principle from a single model to multi-agent systems, and ask: Can agent collaboration itself be scaled through recursion? To this end, we introduce RecursiveMAS, a recursive multi-agent framework that casts the entire system as a unified latent-space recursive computation. RecursiveMAS connects heterogeneous agents as a collaboration loop through the lightweight RecursiveLink module, enabling in-distribution latent thought generation and cross-agent latent state transfer. To optimize our framework, we develop an inner-outer loop learning algorithm for iterative whole-system co-optimization through shared gradient-based credit assignment across recursion rounds. Theoretical analyses of runtime complexity and learning dynamics establish that RecursiveMAS is more efficient than standard text-based MAS and maintains stable gradients during recursive training. Empirically, we instantiate RecursiveMAS under 4 representative agent collaboration patterns and evaluate across 9 benchmarks spanning mathematics, science, medicine, search, and code generation. In comparison with advanced single/multi-agent and recursive computation baselines, RecursiveMAS consistently delivers an average accuracy improvement of 8.3%, together with 1.2$\times$-2.4$\times$ end-to-end inference speedup, and 34.6%-75.6% token usage reduction. Code and data are provided at https://recursivemas.github.io.
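The "inner-outer loop" optimization the abstract describes can be sketched on a scalar toy: the inner loop runs the same parameterized step for several recursion rounds, and the outer loop applies one gradient update in which every round's contribution is credited to the same shared parameter. This is an assumption-laden illustration of shared credit assignment across rounds, not the paper's actual algorithm.

```python
def inner_loop(w, h0, rounds):
    """Forward pass: apply the same parameterized step 'rounds' times,
    mimicking recursion over a shared computation."""
    hs = [h0]
    for _ in range(rounds):
        hs.append(w * hs[-1])
    return hs

def outer_step(w, h0, target, rounds, lr=0.01):
    """One outer update: gradient of the final loss (h_T - target)^2
    w.r.t. the shared w, summed over every recursion round, so each
    round is credited through the same parameter."""
    hs = inner_loop(w, h0, rounds)
    err = hs[-1] - target
    # h_T = w**rounds * h0, so d h_T / d w sums one term per round:
    grad = 2 * err * sum(w ** (rounds - 1) * h0 for _ in range(rounds))
    return w - lr * grad

w = 0.5
for _ in range(200):
    w = outer_step(w, h0=1.0, target=1.0, rounds=3)
# After training, w**3 * h0 matches the target, i.e. w converges toward 1.0
```

Because a single loss gradient flows through all rounds of the recursion, the setup also makes the abstract's stability claim testable in miniature: the update stays well-behaved as long as the per-round Jacobian keeps the accumulated gradient bounded.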
