ArXiv TLDR

AAC: Admissible-by-Architecture Differentiable Landmark Compression for ALT

arXiv: 2604.20744

An T. Le, Vien Ngo

cs.AI, cs.LG, cs.RO

TLDR

AAC introduces an admissible-by-architecture differentiable landmark compression module for ALT (A*, Landmarks, Triangle inequality) shortest-path heuristics, achieving near-optimal coverage and faster queries than FPS-ALT at matched memory.

Key contributions

  • Introduces AAC, a differentiable landmark-selection module for ALT shortest-path heuristics.
  • Guarantees admissibility by construction through a row-stochastic mixture of triangle-inequality bounds.
  • Achieves near-optimal coverage, within 0.9-3.9 percentage points of the theoretical coverage ceiling on road networks.
  • Runs 1.2-1.5x faster than FPS-ALT at the median query on road networks at matched memory.
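The admissibility-by-construction idea in the second bullet can be sketched concretely: each per-landmark triangle-inequality bound is individually a lower bound on the true distance, and any row-stochastic (convex) combination of lower bounds is still a lower bound. A minimal NumPy illustration, with function names and shapes of my own choosing rather than the paper's:

```python
import numpy as np

def alt_bounds(d_lm_v, d_lm_t):
    """Per-landmark ALT lower bounds on dist(v, t).

    The triangle inequality gives |d(lm, v) - d(lm, t)| <= dist(v, t)
    for every landmark lm, so each entry is an admissible bound.
    """
    return np.abs(np.asarray(d_lm_v) - np.asarray(d_lm_t))

def mixture_heuristic(bounds, logits):
    """Row-stochastic mixture of the per-landmark bounds.

    Softmax weights are nonnegative and sum to 1, so the weighted sum
    is a convex combination of lower bounds and therefore itself a
    lower bound -- admissible for *every* logit setting, trained or not.
    """
    w = np.exp(logits - np.max(logits))  # numerically stable softmax
    w /= w.sum()
    return float(w @ bounds)

# Example: three landmarks, arbitrary (untrained) logits.
b = alt_bounds([3.0, 7.0, 2.0], [6.0, 4.0, 2.0])     # -> [3., 3., 0.]
h = mixture_heuristic(b, np.array([0.5, -1.0, 2.0]))
# h <= max(b) <= dist(v, t): admissible regardless of the weights.
```

The key design point is that admissibility needs no convergence, calibration, or projection step: it holds for any parameters, because the architecture can only output convex combinations of valid lower bounds.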

Why it matters

AAC is the first differentiable landmark-compression method for classical heuristic search that guarantees admissibility by construction. It composes end-to-end with neural encoders while reducing to classical ALT at deployment, offering faster pathfinding without sacrificing optimality guarantees and bridging deep learning with established search algorithms.

Original Abstract

We introduce \textbf{AAC} (Architecturally Admissible Compressor), a differentiable landmark-selection module for ALT (A*, Landmarks, and Triangle inequality) shortest-path heuristics whose outputs are admissible by construction: each forward pass is a row-stochastic mixture of triangle-inequality lower bounds, so the heuristic is admissible for \emph{every} parameter setting without requiring convergence, calibration, or projection. At deployment, the module reduces to classical ALT on a learned subset, composing end-to-end with neural encoders while preserving the classical toolchain. The construction is the first differentiable instance of the compress-while-preserving-admissibility tradition in classical heuristic search. Under a matched per-vertex memory protocol, we establish that ALT with farthest-point-sampling landmarks (FPS-ALT) has provably near-optimal coverage on metric graphs, leaving at most a few percentage points of headroom for \emph{any} selector. AAC operates near this ceiling: the gap is $0.9$--$3.9$ percentage points on 9 road networks and ${\leq}1.3$ percentage points on synthetic graphs, with zero admissibility violations across $1{,}500+$ queries and all logged runs. At matched memory, AAC is also $1.2$--$1.5{\times}$ faster than FPS-ALT at the median query on DIMACS road networks, amortizing its offline cost within $170$--$1{,}924$ queries. A controlled ablation isolates the binding constraint: training-objective drift under default initialization, not architectural capacity; identity-on-first-$m$ initialization closes the expansion-count gap entirely. We release the module, a reusable matched-memory benchmarking protocol with paired two-one-sided-test (TOST) equivalence and pre-registration, and a reference compressed-differential-heuristics baseline.
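The FPS-ALT baseline named in the abstract selects landmarks by farthest-point sampling: greedily add the vertex farthest from the current landmark set. A minimal sketch assuming a dense pairwise shortest-path distance matrix is available (names are illustrative, not from the paper's released code):

```python
import numpy as np

def farthest_point_sampling(dist, m, start=0):
    """Greedy FPS landmark selection.

    dist: (n, n) matrix of pairwise shortest-path distances.
    m: number of landmarks to select.
    Repeatedly picks the vertex that maximizes the distance to its
    nearest already-chosen landmark (the standard greedy
    2-approximation for k-center on metric spaces).
    """
    chosen = [start]
    nearest = dist[start].copy()       # distance to nearest landmark so far
    for _ in range(m - 1):
        nxt = int(np.argmax(nearest))  # farthest still-uncovered vertex
        chosen.append(nxt)
        nearest = np.minimum(nearest, dist[nxt])
    return chosen

# Example: five vertices on a path graph 0-1-2-3-4 with unit edges.
D = np.abs(np.subtract.outer(np.arange(5), np.arange(5))).astype(float)
print(farthest_point_sampling(D, 3))   # -> [0, 4, 2]
```

On a path graph the greedy rule picks the two endpoints and then the midpoint, which matches the intuition of spreading landmarks to maximize coverage; the paper's matched-memory protocol compares AAC against this selector at equal per-vertex storage.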
