ArXiv TLDR

Learning to Communicate: Toward End-to-End Optimization of Multi-Agent Language Systems

arXiv: 2604.21794

Ye Yu, Heming Liu, Haibo Jin, Xiaopeng Yuan, Peng Kuang + 1 more

cs.AI · cs.CL · cs.MA

TLDR

DiffMAS optimizes multi-agent latent communication end-to-end, improving reasoning accuracy and decoding stability across complex reasoning tasks.

Key contributions

  • Proposes DiffMAS, a framework for end-to-end optimization of latent communication in multi-agent systems.
  • Enables agents to jointly learn information encoding and interpretation via parameter-efficient supervised training.
  • Significantly improves reasoning accuracy and decoding stability over single-agent and text-based systems.
  • Reports 26.7% on AIME24 (mathematical reasoning) and 20.2% on GPQA-Diamond (scientific QA), with consistent gains across reasoning benchmarks.

Why it matters

DiffMAS treats inter-agent communication as a learnable part of the multi-agent system rather than a fixed text interface, so agents jointly learn how to encode and interpret the information they share. The reported gains in reasoning accuracy and decoding stability point toward more capable and robust collaborative AI systems.

Original Abstract

Multi-agent systems built on large language models have shown strong performance on complex reasoning tasks, yet most work focuses on agent roles and orchestration while treating inter-agent communication as a fixed interface. Latent communication through internal representations such as key-value caches offers a promising alternative to text-based protocols, but existing approaches do not jointly optimize communication with multi-agent reasoning. Therefore we propose DiffMAS, a training framework that treats latent communication as a learnable component of multi-agent systems. DiffMAS performs parameter-efficient supervised training over multi-agent latent trajectories, enabling agents to jointly learn how information should be encoded and interpreted across interactions. Experiments on mathematical reasoning, scientific QA, code generation, and commonsense benchmarks show that DiffMAS consistently improves reasoning accuracy and decoding stability over single-agent inference, text-based multi-agent systems, and prior latent communication methods, achieving 26.7% on AIME24, 20.2% on GPQA-Diamond, and consistent gains across reasoning benchmarks.
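
The abstract does not spell out the training mechanics, but the core idea it describes, a learnable latent channel between otherwise frozen agents trained with a supervised loss, can be illustrated with a toy sketch. The snippet below is an assumption-laden illustration, not the DiffMAS implementation: the `ToyAgent` modules, the `comm_adapter` projection, and the "message" of recent hidden states are hypothetical stand-ins for the paper's key-value-cache communication and parameter-efficient supervised training.

```python
# Illustrative sketch only: NOT the DiffMAS implementation, just the general
# idea of trainable latent communication between two frozen agent models,
# where only a small "communication adapter" is updated.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
HIDDEN_A, HIDDEN_B, VOCAB = 64, 48, 100  # toy sizes, chosen arbitrarily

class ToyAgent(nn.Module):
    """Stand-in for a frozen LLM: embeds tokens and returns hidden states."""
    def __init__(self, hidden):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, VOCAB)

    def forward(self, tokens, extra_latents=None):
        h = self.embed(tokens)
        if extra_latents is not None:
            # Prepend latents received from the other agent to the input sequence.
            h = torch.cat([extra_latents, h], dim=1)
        out, _ = self.encoder(h)
        return out, self.lm_head(out[:, -1])  # hidden states + next-token logits

# Frozen "agents"; only the communication adapter below is trained.
agent_a, agent_b = ToyAgent(HIDDEN_A), ToyAgent(HIDDEN_B)
for p in list(agent_a.parameters()) + list(agent_b.parameters()):
    p.requires_grad_(False)

# Parameter-efficient piece: a small projection mapping agent A's latent
# states into agent B's representation space (the learnable "channel").
comm_adapter = nn.Linear(HIDDEN_A, HIDDEN_B)
optimizer = torch.optim.Adam(comm_adapter.parameters(), lr=1e-3)

# One supervised step over a (toy) multi-agent trajectory:
prompt = torch.randint(0, VOCAB, (2, 10))   # batch of 2 token sequences
target = torch.randint(0, VOCAB, (2,))      # supervised final answers

latents_a, _ = agent_a(prompt)              # agent A "thinks" in latent space
message = comm_adapter(latents_a[:, -4:])   # send its last few latent states
_, logits_b = agent_b(prompt, extra_latents=message)  # agent B reads them

loss = F.cross_entropy(logits_b, target)
loss.backward()                             # gradients flow through the channel
optimizer.step()
print(f"toy loss: {loss.item():.3f}")
```

The design point the sketch tries to capture is that gradients from the downstream supervised loss flow back through the receiving agent into the communication adapter, so the latent "protocol" itself is optimized end-to-end while the base models stay frozen.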
