From Soliloquy to Agora: Memory-Enhanced LLM Agents with Decentralized Debate for Optimization Modeling
Jianghao Lin, Zi Ling, Chenyu Zhou, Tianyi Xu, Ruoqing Jiang + 2 more
TLDR
Agora-Opt is a new LLM agent framework for optimization modeling that uses decentralized debate and memory to achieve superior performance.
Key contributions
- Introduces Agora-Opt, an LLM agent framework for robust optimization modeling from natural language.
- Uses decentralized debate among multiple agent teams to produce and reconcile end-to-end optimization solutions.
- Incorporates a read-write memory bank for continuous, training-free improvement via stored artifacts and resolutions.
- Achieves state-of-the-art performance on public benchmarks, outperforming strong LLM and training-centric methods.
Why it matters
Optimization modeling is central to real-world decision-making but remains difficult for LLMs to perform reliably. Agora-Opt addresses this by combining collaborative cross-checking with reusable experience, yielding measurable gains in accuracy and robustness while staying extensible across model backbones and existing pipelines.
Original Abstract
Optimization modeling underpins real-world decision-making in logistics, manufacturing, energy, and public services, but reliably solving such problems from natural-language requirements remains challenging for current large language models (LLMs). In this paper, we propose *Agora-Opt*, a modular agentic framework for optimization modeling that combines decentralized debate with a read-write memory bank. Agora-Opt allows multiple agent teams to independently produce end-to-end solutions and reconcile them through an outcome-grounded debate protocol, while memory stores solver-verified artifacts and past disagreement resolutions to support training-free improvement over time. This design is flexible across both backbones and methods: it reduces base-model lock-in, transfers across different LLM families, and can be layered onto existing pipelines with minimal coupling. Across public benchmarks, Agora-Opt achieves the strongest overall performance among all compared methods, outperforming strong zero-shot LLMs, training-centric approaches, and prior agentic baselines. Further analyses show robust gains across backbone choices and component variants, and demonstrate that decentralized debate offers a structural advantage over centralized selection by enabling agents to refine candidate solutions through interaction and even recover correct formulations when all initial candidates are flawed. These results suggest that reliable optimization modeling benefits from combining collaborative cross-checking with reusable experience, and position Agora-Opt as a practical and extensible foundation for trustworthy optimization modeling assistance. Our code and data are available at https://github.com/CHIANGEL/Agora-Opt.