ArXiv TLDR

F²LP-AP: Fast & Flexible Label Propagation with Adaptive Propagation Kernel

2604.20736

Yutong Shen, Ruizhe Xia, Jingyi Liu, Yinqi Liu

cs.LG

TLDR

F²LP-AP is a fast, training-free label propagation method that adapts to local graph topology, effectively handling both homophilous and heterophilous graphs.

Key contributions

  • Introduces F²LP-AP, a training-free and computationally efficient label propagation framework.
  • Adapts to local graph topology, effectively modeling both homophilous and heterophilous graphs.
  • Constructs robust class prototypes using the geometric median for improved accuracy.
  • Dynamically adjusts propagation parameters based on the Local Clustering Coefficient (LCC).
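To make the prototype step concrete, here is a minimal NumPy sketch of geometric-median class prototypes computed with Weiszfeld's algorithm. The function names, the mean initialization, and the iteration budget are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of geometric-median class prototypes (Weiszfeld's
# algorithm). Names and defaults are illustrative, not from the paper.
import numpy as np

def geometric_median(X, n_iter=100, eps=1e-8):
    """The geometric median minimizes the sum of Euclidean distances
    to the rows of X (shape: n_points x dim); unlike the mean, it is
    robust to outlying feature vectors."""
    y = X.mean(axis=0)  # initialize at the arithmetic mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.maximum(d, eps)  # avoid division by zero at data points
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def class_prototypes(features, labels):
    """One geometric-median prototype per labeled class."""
    return {c: geometric_median(features[labels == c])
            for c in np.unique(labels)}
```

The robustness motivation is easy to see: with a few mislabeled or outlying nodes in a class, the arithmetic mean drifts toward the outliers, while the geometric median stays near the bulk of the class.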

Why it matters

Traditional GNNs are computationally expensive and struggle with heterophilous graphs. F²LP-AP provides a training-free, efficient alternative that adapts to diverse graph structures. This makes it a practical solution for semi-supervised node classification, offering competitive accuracy with superior speed.

Original Abstract

Semi-supervised node classification is a foundational task in graph machine learning, yet state-of-the-art Graph Neural Networks (GNNs) are hindered by significant computational overhead and reliance on strong homophily assumptions. Traditional GNNs require expensive iterative training and multi-layer message passing, while existing training-free methods, such as Label Propagation, lack adaptability to heterophilous graph structures. This paper presents F²LP-AP (Fast and Flexible Label Propagation with Adaptive Propagation Kernel), a training-free, computationally efficient framework that adapts to local graph topology. Our method constructs robust class prototypes via the geometric median and dynamically adjusts propagation parameters based on the Local Clustering Coefficient (LCC), enabling effective modeling of both homophilous and heterophilous graphs without gradient-based training. Extensive experiments across diverse benchmark datasets demonstrate that F²LP-AP achieves competitive or superior accuracy compared to trained GNNs, while significantly outperforming existing baselines in computational efficiency.
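The LCC-driven adaptation described in the abstract can be sketched roughly as follows. The mapping from a node's clustering coefficient to its mixing weight alpha is an illustrative assumption for this sketch; the paper's actual adaptive kernel may differ.

```python
# Hypothetical sketch: label propagation whose per-node mixing weight is
# driven by the Local Clustering Coefficient (LCC). The alpha formula is
# an illustrative assumption, not the paper's exact kernel.
import numpy as np

def local_clustering(adj):
    """LCC per node, given an adjacency dict of neighbor sets:
    fraction of neighbor pairs that are themselves connected."""
    lcc = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            lcc[v] = 0.0
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        lcc[v] = 2.0 * links / (k * (k - 1))
    return lcc

def propagate(adj, labels, n_classes, n_iter=20):
    """Training-free propagation: each node mixes its own label
    distribution with its neighbor average, weighted by its LCC
    (high LCC -> trust the neighborhood more, a homophily-style cue)."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    F = np.zeros((len(nodes), n_classes))
    for v, c in labels.items():
        F[idx[v], c] = 1.0
    lcc = local_clustering(adj)
    for _ in range(n_iter):
        F_new = np.zeros_like(F)
        for v in nodes:
            i = idx[v]
            nbrs = adj[v]
            nbr_avg = (np.mean([F[idx[u]] for u in nbrs], axis=0)
                       if nbrs else F[i])
            alpha = 0.5 * (1.0 + lcc[v])  # illustrative adaptive weight
            F_new[i] = alpha * nbr_avg + (1 - alpha) * F[i]
        # clamp labeled nodes back to their ground truth each round
        for v, c in labels.items():
            F_new[idx[v]] = 0.0
            F_new[idx[v], c] = 1.0
        F = F_new
    return {v: int(F[idx[v]].argmax()) for v in nodes}
```

No gradients or training loop are involved: the only per-graph computation is the LCC pass and a fixed number of sparse averaging rounds, which is the source of the efficiency claim.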
