ArXiv TLDR

GRAPHLCP: Structure-Aware Localized Conformal Prediction on Graphs

arXiv: 2605.08074

Peyman Baghershahi, Fangxin Wang, Debmalya Mandal, Sourav Medya

cs.LG

TLDR

GRAPHLCP introduces a structure-aware localized conformal prediction framework for graphs, yielding better-calibrated and more efficient uncertainty estimates for GNNs.

Key contributions

  • Introduces GRAPHLCP, a localized conformal prediction framework that explicitly uses graph topology.
  • Incorporates inter-node dependencies into localization and weighting for more reliable predictions.
  • Uses feature-aware densification and Personalized PageRank to model structural proximity in graphs.
  • Guarantees marginal coverage and achieves efficient conditional coverage across various graph scenarios.
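The contributions above lean on Personalized PageRank (PPR) as a measure of structural proximity. The paper's exact kernel construction is not reproduced here; the following is a minimal, generic sketch of computing a dense PPR proximity matrix by power iteration (the function name, restart probability, and dense-matrix setup are illustrative choices, not the authors' implementation):

```python
import numpy as np

def ppr_kernel(A, alpha=0.15, iters=50):
    """Personalized PageRank proximity matrix via power iteration.

    A: (n, n) adjacency matrix (dense, for illustration only).
    alpha: teleport (restart) probability.
    Returns K where K[i, j] is the PPR mass of node j seeded at node i.
    """
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                 # guard isolated nodes against division by zero
    P = A / deg                         # row-stochastic transition matrix
    K = np.full((n, n), 1.0 / n)        # initialize every row as a uniform distribution
    E = np.eye(n)                       # personalization: row i restarts at node i
    for _ in range(iters):
        K = alpha * E + (1 - alpha) * K @ P
    return K

# Toy 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = ppr_kernel(A)
# Structurally nearby nodes receive more mass than distant ones
assert K[0, 1] > K[0, 3]
```

Each row of `K` is a probability distribution over nodes, so it can serve directly as a proximity weighting that mixes local and long-range structure, with `alpha` controlling how quickly mass decays with graph distance.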

Why it matters

Applying conformal prediction to graphs is challenging: inter-node dependencies and sparse, combinatorial structure can make embedding-based localization unreliable, producing poorly calibrated uncertainty estimates. GRAPHLCP addresses this by explicitly leveraging graph topology, providing more accurate and efficient uncertainty quantification for GNNs and advancing reliable AI on graph-structured data.

Original Abstract

Conformal prediction (CP) provides a distribution-free approach to uncertainty quantification with finite-sample guarantees. However, applying CP to graph neural networks (GNNs) remains challenging as the combinatorial nature of graphs often leads to insufficiently certain predictions and indiscriminative embeddings. Existing methods primarily rely on embedding-space proximity for localization, which can be unreliable for graphs and yield inefficient prediction sets. We propose GRAPHLCP, a proximity-based localized CP framework that explicitly incorporates graph topology and inter-node dependencies into localization and weighting. Our approach introduces a feature-aware densification step to mitigate locality bias in sparse graphs, followed by a Personalized PageRank-based kernel computation to model structural proximity. This enables topology-dependent anchor sampling and calibration weighting that captures both local and long-range dependencies. Extensive experiments on several regression and classification datasets demonstrate that GRAPHLCP guarantees marginal coverage with finite samples while efficiently attaining favorable test conditional coverage across various conditioning scenarios.
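The abstract's core mechanism, localized conformal prediction with proximity weights, can be illustrated generically. This is not the paper's algorithm: it is a minimal sketch of weighted split CP for regression, assuming calibration residuals and hypothetical proximity weights (e.g. PPR values toward the test node) are already in hand; strict finite-sample guarantees additionally require the conservative weight normalization of weighted CP, omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_quantile(scores, weights, q):
    """q-quantile of `scores` under the step CDF induced by normalized `weights`."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    idx = min(int(np.searchsorted(cum, q, side="left")), len(s) - 1)
    return s[idx]

def localized_interval(mu_test, cal_residuals, cal_weights, alpha=0.1):
    """Localized split-CP interval: the uniform empirical quantile of residuals
    is replaced by a proximity-weighted quantile, so nearby calibration
    nodes dominate the interval width."""
    qhat = weighted_quantile(cal_residuals, cal_weights, 1 - alpha)
    return mu_test - qhat, mu_test + qhat

# Toy calibration set: absolute residuals plus stand-in proximity weights
residuals = rng.exponential(scale=1.0, size=200)
weights = rng.uniform(0.1, 1.0, size=200)   # hypothetical PPR-style proximities
lo, hi = localized_interval(mu_test=0.0, cal_residuals=residuals,
                            cal_weights=weights)
assert lo < 0.0 < hi
```

With uniform weights this reduces to ordinary split conformal prediction; the localization enters entirely through how the weights are chosen, which is where GRAPHLCP's topology-aware kernel and anchor sampling come in.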
