A Closed-Form Adaptive-Landmark Kernel for Certified Point-Cloud and Graph Classification
Sushovan Majhi, Atish Mitra, Žiga Virk, Pramita Bagchi
TLDR
PALACE is a data-adaptive kernel for point-cloud and graph classification that comes with four closed-form theoretical guarantees and strong empirical performance.
Key contributions
- Introduces PALACE, a data-adaptive kernel for point-cloud and graph classification whose only tuning is a small cross-validation over three hyperparameters (budget, radii, bandwidth; at most 5 choices each).
- Provides four closed-form theoretical guarantees, including a structural distortion bound and a classification rate.
- Derives the optimal weights ($w_k = K^{-1/2}$) and landmark positions (via farthest-point sampling) in closed form from training labels alone, with no gradient-based training.
- Offers a per-prediction certificate with no calibration split, and achieves the strongest diagram-based results on several benchmarks (Orbit5k, COX2, MUTAG).
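The farthest-point-sampling placement in the contributions above is the classic greedy k-center heuristic (Gonzalez's algorithm), which gives exactly the 2-approximation of the optimal covering radius that the paper cites. A minimal sketch, assuming landmarks are chosen from among the training points themselves:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest-point sampling (Gonzalez's algorithm).

    Returns indices of k landmarks whose covering radius is at most
    twice the optimal k-center radius -- the 2-approximation cited
    for PALACE's landmark placement.
    """
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(points)
    landmarks = [int(rng.integers(n))]  # arbitrary first center
    # squared distance from every point to its nearest chosen landmark
    d2 = np.sum((points - points[landmarks[0]]) ** 2, axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(d2))        # current farthest point
        landmarks.append(nxt)
        d2 = np.minimum(d2, np.sum((points - points[nxt]) ** 2, axis=1))
    return landmarks
```

Each iteration costs O(n), so placing K landmarks is O(nK); the greedy choice of the farthest remaining point is what yields the 2-approximation guarantee.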
Why it matters
This paper introduces PALACE, an adaptive-landmark kernel for certified point-cloud and graph classification. Its four closed-form guarantees and data-adaptive landmark placement pair solid theoretical backing with strong empirical performance: PALACE leads every diagram-based competitor on several benchmarks and, unlike a uniform grid, remains accurate under domain inflation. Its per-prediction certificates add trustworthiness, which matters for safety-critical applications.
Original Abstract
We introduce PALACE (Persistence Adaptive-Landmark Analytic Classification Engine), the data-adaptive companion to PLACE, paying a small cross-validation tier on three knobs (budget, radii, bandwidth; $\leq 5$ choices each). A cover-theoretic core (Lebesgue-number criterion on the landmark cover) yields four closed-form guarantees. (i) A structural lower distortion bound $\lambda(\tau;\nu)$ on $\mathcal{D}_n$ under cross-diagram non-interference, with a $(D/L)^2$ budget reduction over the uniform grid when diagrams concentrate. (ii) Equal weights $w_k = K^{-1/2}$ maximizing $\lambda$, and farthest-point-sampling positions $2$-approximating the optimal $k$-center covering radius; both derived from training labels alone, no gradient training. (iii) A kernel-RKHS classification rate $O((k-1)\sqrt{K}/(\gamma\sqrt{m_{\min}}))$ with binary necessity threshold $m = \Omega(\sqrt{K}/\gamma)$ from a matching Le Cam lower bound, and a closed-form filtration-selection rule. The kernel-Mahalanobis margin $\hat\rho_{\mathrm{Mah}}$ is the strongest closed-form ranker across the chemical-graph pool (mean Spearman $\rho \approx +0.60$); the isotropic surrogate $\hat\gamma/\sqrt{K}$ admits a selection-consistency rate, and $\widehat\lambda$ from (i) provides an independent data-level signal (positive on COX2 and PTC). (iv) A per-prediction certificate, in non-asymptotic Pinelis and asymptotic Gaussian forms, with no calibration split. Empirically, PALACE is the strongest closed-form diagram-based method on Orbit5k ($91.3 \pm 1.0\%$, matching Persformer), leads every diagram-based competitor on COX2 and MUTAG, and is competitive on DHFR (within 1 pp of ECP). At $8\times$ domain inflation, adaptive placement maintains $94\%$ while the uniform grid collapses to chance ($25\%$ on 4-class data).
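The abstract does not spell out the featurization itself. One plausible reading, assuming Gaussian bumps of the tuned bandwidth centered at the $K$ landmarks and the closed-form equal weights $w_k = K^{-1/2}$ from guarantee (ii), is sketched below; the function names are illustrative, not the paper's API, and the exact map may differ:

```python
import numpy as np

def landmark_features(diagram, landmarks, bandwidth):
    """Embed a persistence diagram (array of (birth, death) points)
    into R^K via Gaussian bumps at K landmarks, weighted by the
    equal weights w_k = K^{-1/2}. Hypothetical featurization."""
    diagram = np.atleast_2d(np.asarray(diagram, dtype=float))    # (n, 2)
    landmarks = np.atleast_2d(np.asarray(landmarks, dtype=float))  # (K, 2)
    K = len(landmarks)
    # pairwise squared distances between diagram points and landmarks: (n, K)
    d2 = ((diagram[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    bumps = np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=0)
    return bumps / np.sqrt(K)  # apply w_k = K^{-1/2}

def landmark_kernel(diag_a, diag_b, landmarks, bandwidth):
    """Linear kernel between the two landmark feature vectors."""
    fa = landmark_features(diag_a, landmarks, bandwidth)
    fb = landmark_features(diag_b, landmarks, bandwidth)
    return float(fa @ fb)
```

Because the kernel is an inner product of explicit finite-dimensional features, it is positive semidefinite by construction, and any off-the-shelf SVM or kernel ridge classifier can consume it directly.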