ArXiv TLDR

Graph-Grounded Optimization: Rao-Family Metaheuristics, Classical OR, and SLM-Driven Formulation over Knowledge Graphs

arXiv: 2605.12204

Madhulatha Mandarapu, Sandeep Kunkunuru

cs.DB cs.NE

TLDR

Proposes graph-grounded optimization, sourcing problem variables and constraints from knowledge graphs, and evaluates it on diverse real-world problems.

Key contributions

  • Introduces graph-grounded optimization, sourcing problem variables and constraints directly from knowledge graphs.
  • Evaluates the approach on seven diverse real-world problems using large-scale knowledge graphs.
  • Compares Rao-family metaheuristics against Google OR-Tools reference solvers (CP-SAT, GLOP), which cannot encode the non-linear objectives arising in several of the real-world settings.
  • Demonstrates that graph-grounded formulations expose data-quality issues that text-based methods would silently mask.
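The graph-grounded idea can be sketched in a few lines: decision variables and objective coefficients are read off node properties, and absent properties are reported rather than silently defaulted. This is a minimal illustrative sketch, assuming an in-memory dict as a stand-in for the property KG (the paper itself queries a samyama-graph database via Cypher; the node schema and `ground_problem` helper here are hypothetical).

```python
# Hypothetical mini property-graph: node id -> property map.
nodes = {
    "drug:A": {"label": "Drug", "cost": 2.0, "efficacy": 0.7},
    "drug:B": {"label": "Drug", "cost": 1.5, "efficacy": 0.4},
    "drug:C": {"label": "Drug", "cost": 3.0},  # 'efficacy' property missing
}

def ground_problem(nodes, label, coeff_key):
    """Source decision variables and objective coefficients from the graph,
    surfacing data-quality gaps (missing properties) instead of masking them."""
    variables, coeffs, missing = [], {}, []
    for node_id, props in nodes.items():
        if props.get("label") != label:
            continue
        variables.append(node_id)          # one decision variable per node
        if coeff_key in props:
            coeffs[node_id] = props[coeff_key]
        else:
            missing.append(node_id)        # flagged, not silently defaulted
    return variables, coeffs, missing
```

A text-formulated version of the same problem would typically bake in a default coefficient for `drug:C`; sourcing from the graph makes the gap explicit, which is the data-quality effect the paper reports.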

Why it matters

This paper introduces a novel optimization paradigm that sources problem formulations directly from knowledge graphs rather than from text or tabular inputs. The structured approach makes complex problems easier to define precisely, uncovers hidden data-quality issues, and offers insights into solver performance.

Original Abstract

We propose graph-grounded optimization: a paradigm in which the decision variables, constraints, and objective coefficients of a real-world optimization problem are sourced from a property knowledge graph (KG) via Cypher queries, rather than supplied as free-form natural-language text or static tabular input. We motivate the paradigm by surveying recent LLM/SLM-driven optimization systems -- OptiMUS, Chain-of-Experts, LLMOPT, OPRO, FunSearch, Eureka -- none of which consume property graphs as the primary input modality. We instantiate the paradigm in the open-source samyama-graph database and evaluate seven real-world public-domain KG-backed problems spanning drug repurposing (245K-node biomedical KG), clinical-trial site selection (7.78M-node trial registry), Indian supply-chain rerouting (5.34M-node OSM road graph), healthcare equity allocation (WHO/GAVI/IHME KG), economic-environmental grid dispatch, antimicrobial-resistance stewardship (NCBI AMRFinderPlus, 10.4K resistance genes), and wildfire evacuation routing (OSM Paradise, CA). We compare a portfolio of Rao-family metaheuristics (BMWR, Jaya, SAMP-Jaya, EHR-Jaya, Rao-1) against Google OR-tools (CP-SAT and GLOP) reference solvers. We find that (i) no single Rao variant dominates: BMWR wins on discrete-with-tradeoff and high-dim-with-hard-constraint problems while Rao-1 wins on continuous low-/mid-dim problems, empirically supporting a portfolio approach; (ii) OR-tools dominates on small linear/MILP-friendly sub-problems but cannot encode the non-linear objectives that emerge in several of the real-world settings; (iii) graph-grounded formulations surface data-quality issues (missing properties, degenerate aggregates) that purely text-formulated optimizations would silently mask.
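For readers unfamiliar with the Rao family, the solvers compared here share a very simple population-update structure: each solution moves using only the current best (and, for Jaya, the current worst) member, with no algorithm-specific tuning parameters. The sketch below shows the published Jaya and Rao-1 update rules for minimization; the function names and greedy replacement loop are this sketch's own scaffolding, not the paper's implementation.

```python
import random

def jaya_step(pop, fitness):
    """One Jaya iteration: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    moving toward the best solution and away from the worst."""
    scores = [fitness(x) for x in pop]
    best = pop[scores.index(min(scores))]
    worst = pop[scores.index(max(scores))]
    new_pop = []
    for x in pop:
        cand = [xi + random.random() * (bi - abs(xi))
                   - random.random() * (wi - abs(xi))
                for xi, bi, wi in zip(x, best, worst)]
        # Greedy selection: keep the candidate only if it improves.
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop

def rao1_step(pop, fitness):
    """One Rao-1 iteration: x' = x + r*(best - worst); unlike Jaya it
    uses a single random factor and no absolute values."""
    scores = [fitness(x) for x in pop]
    best = pop[scores.index(min(scores))]
    worst = pop[scores.index(max(scores))]
    new_pop = []
    for x in pop:
        cand = [xi + random.random() * (bi - wi)
                for xi, bi, wi in zip(x, best, worst)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop
```

Because both updates are parameter-free apart from population size and iteration budget, running several variants as a portfolio (as the paper does) is cheap, and the greedy selection guarantees the best fitness never worsens between iterations.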
