ArXiv TLDR

BEAM: Bi-level Memory-adaptive Algorithmic Evolution for LLM-Powered Heuristic Design

2604.12898

Chuyang Xiang, Yichen Wei, Jiale Ma, Handing Wang, Junchi Yan

cs.AI, math.CO

TLDR

BEAM introduces a bi-level, memory-adaptive algorithmic evolution framework for LLM-powered heuristic design and significantly outperforms existing LLM-based hyper-heuristics.

Key contributions

  • BEAM reformulates LLM-powered heuristic design as bi-level optimization, overcoming the limitations of single-layer LHHs.
  • Employs a genetic algorithm (GA) to evolve high-level algorithmic structures and Monte Carlo Tree Search (MCTS) to realize their function placeholders.
  • Features an Adaptive Memory module and a Knowledge Augmentation Pipeline to support complex code generation.
  • Outperforms existing LHHs, reducing the CVRP optimality gap by 37.84% on aggregate and beating the SOTA Maximum Independent Set solver KaMIS.
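The bi-level loop described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: the outer level is a tiny GA evolving "skeletons" (sequences of placeholder steps), and the inner level, which stands in for MCTS with a simple greedy search, picks a concrete implementation for each placeholder. All names (`STEPS`, `IMPLS`, the hidden `SCORE` table standing in for benchmark evaluation) are invented for the example.

```python
import random

random.seed(0)

# Hypothetical placeholder steps and candidate implementations.
STEPS = ["construct", "perturb", "repair", "local_search"]
IMPLS = ["v0", "v1", "v2"]

# Hidden, position-dependent quality table standing in for a real
# benchmark evaluation of the assembled solver.
SCORE = {(p, s, i): random.random()
         for p in range(len(STEPS)) for s in STEPS for i in IMPLS}

def realize(skeleton):
    """Inner level: for each placeholder, pick the best implementation.

    A greedy stand-in for the interior MCTS search.
    """
    return [(p, s, max(IMPLS, key=lambda i: SCORE[(p, s, i)]))
            for p, s in enumerate(skeleton)]

def fitness(skeleton):
    """Evaluate a skeleton via its best inner-level realization."""
    return sum(SCORE[triple] for triple in realize(skeleton))

def mutate(skeleton):
    """Outer-level GA mutation: resample one step of the skeleton."""
    child = skeleton[:]
    child[random.randrange(len(child))] = random.choice(STEPS)
    return child

def evolve(generations=20, pop_size=8):
    """Outer level: evolve skeletons, keeping the fitter half each round."""
    pop = [[random.choice(STEPS) for _ in STEPS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```

The separation of concerns is the point: the outer GA only decides *what* steps appear and in what order, while the inner search decides *how* each step is implemented, mirroring BEAM's exterior/interior split at toy scale.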

Why it matters

Existing LLM-based heuristic design struggles with complex, complete solvers. BEAM's novel bi-level approach significantly improves automatic heuristic design efficiency and effectiveness. This advancement enables more powerful and generalizable AI-designed optimization algorithms.

Original Abstract

Large Language Model-based Hyper Heuristic (LHH) has recently emerged as an efficient way for automatic heuristic design. However, most existing LHHs just perform well in optimizing a single function within a pre-defined solver. Their single-layer evolution makes them not effective enough to write a competent complete solver. While some variants incorporate hyperparameter tuning or attempt to generate complex code through iterative local modifications, they still lack a high-level algorithmic modeling, leading to limited exploration efficiency. To address this, we reformulate heuristic design as a Bi-level Optimization problem and propose **BEAM** (Bi-level Memory-adaptive Algorithmic Evolution). BEAM's exterior layer evolves high-level algorithmic structures with function placeholders through genetic algorithm (GA), while the interior layer realizes these placeholders via Monte Carlo Tree Search (MCTS). We further introduce an Adaptive Memory module to facilitate complex code generation. To support the evaluation for complex code generation, we point out the limitations of starting LHHs from scratch or from code templates and introduce a Knowledge Augmentation (KA) Pipeline. Experimental results on several optimization problems demonstrate that BEAM significantly outperforms existing LHHs, notably reducing the optimality gap by 37.84% on aggregate in CVRP hybrid algorithm design. BEAM also designs a heuristic that outperforms the SOTA Maximum Independent Set (MIS) solver KaMIS.
