ArXiv TLDR

Learning to Act and Cooperate for Distributed Black-Box Consensus Optimization

arXiv: 2605.00691

Zi-Bo Qin, Feng-Feng Wei, Tai-You Chen, Wei-Neng Chen

cs.MA cs.NE

TLDR

This paper introduces LACMAS, a trajectory-driven framework in which LLMs guide the self-design of agent actions and cooperation patterns for distributed black-box consensus optimization.

Key contributions

  • Proposes LACMAS, a trajectory-driven framework for distributed black-box consensus optimization.
  • Utilizes LLMs to guide agent actions and cooperation patterns based on historical optimization trajectories.
  • Redesigns agent swarm dynamics with an adaptive internal mechanism for better exploration and convergence.
  • Introduces a phased cognitive scheduling strategy for resource-aware adaptation.

Why it matters

Existing distributed optimization methods struggle with complex, heterogeneous environments due to static rules. This work pioneers self-designing multi-agent systems using LLMs and historical data. It offers a practical path to significantly improve solution quality and efficiency in real-world distributed tasks.

Original Abstract

Distributed black-box consensus optimization is a fundamental problem in multi-agent systems, where agents must improve a global objective using only local objective queries and limited neighbor communication. Existing methods largely rely on handcrafted update rules and static cooperation patterns, which often struggle to balance local adaptation, global coordination, and communication efficiency in heterogeneous nonconvex environments. In this paper, we take an initial step toward trajectory-driven self-design for distributed black-box consensus optimization. We first redesign the agent-level swarm dynamics with an adaptive internal mechanism tailored to decentralized consensus settings, improving the balance between exploration, convergence, and local escape. Built on top of this adaptive execution layer, we propose Learning to Act and Cooperate (LACMAS), a trajectory-driven framework in which large language models provide sparse high-level guidance for shaping both agent-internal action behaviors and agent-external cooperation patterns from historical optimization trajectories. We further introduce a phased cognitive scheduling strategy to activate different forms of adaptation in a resource-aware manner. Experiments on standard distributed black-box benchmarks and real-world distributed tasks show that LACMAS consistently improves solution quality, convergence efficiency, and communication efficiency over strong baselines, suggesting a practical route from handcrafted distributed coordination toward self-designing multi-agent optimization systems.
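To make the problem setting concrete, here is a minimal toy sketch of distributed black-box consensus optimization: each agent only queries the value of its own local objective (no gradients) and only communicates with its two neighbors on a ring. All function names, parameters, and topology choices here are illustrative assumptions; this is a generic baseline of the setting the paper addresses, not the LACMAS method itself.

```python
import random

def distributed_consensus_opt(local_fs, n_iters=300, step0=0.5, mix=0.5, seed=0):
    """Toy distributed black-box consensus optimization on a ring.
    Each agent sees only its own objective's values and its two ring
    neighbors' estimates. Illustrative sketch, not the paper's algorithm."""
    rng = random.Random(seed)
    n = len(local_fs)
    x = [rng.uniform(-5.0, 5.0) for _ in range(n)]  # one scalar estimate per agent
    for t in range(n_iters):
        step = step0 / (1.0 + 0.05 * t)  # decaying local search step
        # Local black-box step: accept a random perturbation only if it
        # improves the agent's OWN objective (function-value queries only).
        for i in range(n):
            cand = x[i] + rng.gauss(0.0, step)
            if local_fs[i](cand) < local_fs[i](x[i]):
                x[i] = cand
        # Consensus step: average with ring neighbors (limited communication).
        x = [(1.0 - mix) * x[i] + mix * 0.5 * (x[(i - 1) % n] + x[(i + 1) % n])
             for i in range(n)]
    return x

# Global objective = sum of local quadratics with different minima;
# its minimizer is the mean of the centers (2.5 here).
centers = [1.0, 2.0, 3.0, 4.0]
local_fs = [lambda x, c=c: (x - c) ** 2 for c in centers]
xs = distributed_consensus_opt(local_fs)
```

The handcrafted parts of this sketch (the fixed acceptance rule and the static ring averaging) are exactly the kind of static update rules and cooperation patterns that the paper proposes to replace with trajectory-driven, LLM-guided self-design.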
