ArXiv TLDR

Reinforcement Learning for LLM-based Multi-Agent Systems through Orchestration Traces

arXiv: 2605.02801

Chenchen Zhang

cs.CL

TLDR

This paper surveys reinforcement learning for LLM-based multi-agent systems through the lens of orchestration traces, analyzing reward design, credit assignment, and orchestration learning decisions.

Key contributions

  • Analyzes RL for LLM multi-agent systems via "orchestration traces," temporal interaction graphs of agent events.
  • Identifies three technical axes: 8 reward design families, 8 credit/signal units, and 5 orchestration learning sub-decisions.
  • Connects academic RL methods to industrial evidence from Kimi Agent Swarm, OpenAI Codex, and Anthropic Claude Code.
  • Releases a curated artifact including an 84-entry tagged paper pool, a 32-record exclusion log, scripted corpus statistics, and a minimal JSON schema for replayable orchestration traces (a hypothetical trace record is sketched after this list).
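
To give a flavor of what a replayable orchestration trace could look like, here is a minimal sketch in Python. The event vocabulary (spawn, delegate, tool use, return, aggregate, stop) comes from the paper; all field names are hypothetical, and the authoritative schema is the one shipped in the linked artifact.

```python
import json

# Hypothetical orchestration-trace record. The event types mirror the
# paper's vocabulary; the field layout is illustrative, not the
# artifact's actual JSON schema.
trace = {
    "trace_id": "example-001",
    "events": [
        {"t": 0, "type": "spawn",    "actor": "orchestrator", "target": "worker_1"},
        {"t": 1, "type": "delegate", "actor": "orchestrator", "target": "worker_1",
         "payload": {"subtask": "survey reward-design papers"}},
        {"t": 2, "type": "tool_use", "actor": "worker_1",
         "payload": {"tool": "search", "query": "credit assignment LLM agents"}},
        {"t": 3, "type": "return",   "actor": "worker_1", "target": "orchestrator"},
        {"t": 4, "type": "aggregate", "actor": "orchestrator"},
        {"t": 5, "type": "stop",     "actor": "orchestrator"},
    ],
}

print(json.dumps(trace, indent=2))
```

Because every event carries an actor, a target, and a timestamp, a trace like this can be replayed as the temporal interaction graph the paper analyzes.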

Why it matters

This paper provides a structured framework for advancing RL in complex LLM multi-agent systems. It identifies key challenges in reward design, credit assignment, and orchestration learning, bridging academic research with industrial applications. The released artifact is a valuable resource for future work.
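
To make the reward-design axis concrete, here is a minimal sketch of one of the orchestration-reward ideas the abstract names: scoring parallelism speedup against a sequential baseline. The function and its exact form are assumptions for illustration; the paper catalogs eight reward families rather than prescribing a formula.

```python
def parallelism_speedup_reward(sequential_time: float, parallel_time: float) -> float:
    """Hypothetical orchestration reward: positive when delegating to
    parallel sub-agents beats a single-agent sequential baseline,
    zero when parallelism buys nothing. Illustrative only."""
    if sequential_time <= 0 or parallel_time <= 0:
        raise ValueError("wall-clock times must be positive")
    return sequential_time / parallel_time - 1.0

# Example: a task that takes 90 s sequentially but 30 s when split
# across sub-agents earns a reward of 2.0.
print(parallelism_speedup_reward(90.0, 30.0))  # 2.0
```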

Original Abstract

As large language model (LLM) agents evolve from isolated tool users into coordinated teams, reinforcement learning (RL) must optimize not only individual actions but also how work is spawned, delegated, communicated, aggregated, and stopped. This paper studies RL for LLM-based multi-agent systems through orchestration traces: temporal interaction graphs whose events include sub-agent spawning, delegation, communication, tool use, return, aggregation, and stopping decisions. Using this lens, we identify three technical axes. First, reward design spans eight families, including orchestration rewards for parallelism speedup, split correctness, and aggregation quality. Second, reward and credit signals attach to eight credit- or signal-bearing units from token to team; explicit counterfactual message-level credit remains especially sparse in our curated pool. Third, orchestration learning decomposes into five sub-decisions: when to spawn, whom to delegate to, how to communicate, how to aggregate, and when to stop. In our curated pool as of May 4, 2026, we found no explicit RL training method for the stopping decision. We connect academic methods to public industrial evidence from Kimi Agent Swarm, OpenAI Codex, and Anthropic Claude Code. The resulting scale gap is a gap between publicly reported deployment envelopes and open academic evaluation regimes, not independent verification of industrial training traces. We release the artifact at https://github.com/xxzcc/awesome-llm-mas-rl, including an 84-entry tagged paper pool, a 32-record exclusion log, scripted corpus statistics, and a minimal JSON schema for replayable orchestration traces.
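
The abstract flags explicit counterfactual message-level credit as especially sparse in the surveyed pool. Here is a minimal sketch of what such a signal could look like, assuming access to a re-scorable trace: credit a message by the drop in team reward when it is ablated. This is a generic counterfactual ablation, not a method from the paper.

```python
from typing import Callable, List

def counterfactual_message_credit(
    evaluate: Callable[[List[str]], float],
    messages: List[str],
    i: int,
) -> float:
    """Hypothetical message-level credit: team reward with message i
    present minus team reward with it removed. `evaluate` is an assumed
    scorer that replays a message sequence and returns a scalar reward."""
    full = evaluate(messages)
    ablated = evaluate(messages[:i] + messages[i + 1:])
    return full - ablated
```

A positive value means the message helped the team outcome; the cost is one extra replay per ablated message, which is one reason such signals remain rare in practice.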

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.