ArXiv TLDR

Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning

arXiv: 2604.08980

Yi Luo, Xu Sun, Guangchun Luo, Aiguo Chen

cs.LG cs.AI

TLDR

Neighbourhood Transformer (NT) introduces switchable attention within local graph neighbourhoods to overcome GNN homophily limitations, achieving SOTA performance.

Key contributions

  • Proposes Neighbourhood Transformer (NT) for monophily-aware graph learning.
  • Applies self-attention within local neighbourhoods, addressing GNN homophily limits.
  • Optimized with a neighbourhood partitioning strategy and switchable attentions, cutting space consumption by over 95% and time by up to 92.67%.
  • Outperforms SOTA methods on 10 diverse real-world datasets for node classification.

Why it matters

This paper tackles the fundamental homophily limitation of GNNs with a novel, monophily-aware architecture. Its efficiency optimizations make it practical for large graphs, significantly expanding GNN applicability. By demonstrating superior performance on both heterophilic and homophilic datasets, it marks a substantial step toward robust, general-purpose graph learning.

Original Abstract

Graph neural networks (GNNs) have been widely adopted in engineering applications such as social network analysis, chemical research and computer vision. However, their efficacy is severely compromised by the inherent homophily assumption, which fails to hold for heterophilic graphs where dissimilar nodes are frequently connected. To address this fundamental limitation in graph learning, we first draw inspiration from the recently discovered monophily property of real-world graphs, and propose Neighbourhood Transformers (NT), a novel paradigm that applies self-attention within every local neighbourhood instead of aggregating messages to the central node as in conventional message-passing GNNs. This design makes NT inherently monophily-aware and theoretically guarantees its expressiveness is no weaker than traditional message-passing frameworks. For practical engineering deployment, we further develop a neighbourhood partitioning strategy equipped with switchable attentions, which reduces the space consumption of NT by over 95% and time consumption by up to 92.67%, significantly expanding its applicability to larger graphs. Extensive experiments on 10 real-world datasets (5 heterophilic and 5 homophilic graphs) show that NT outperforms all current state-of-the-art methods on node classification tasks, demonstrating its superior performance and cross-domain adaptability. The full implementation code of this work is publicly available at https://github.com/cf020031308/MoNT to facilitate reproducibility and industrial adoption.
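The abstract's core idea can be illustrated with a toy sketch: instead of aggregating neighbour messages into the central node, self-attention runs over the whole set N(v) ∪ {v}, so every member of a neighbourhood gets an update from every other member. The function below is a minimal illustrative sketch under assumed names (`Wq`, `Wk`, `Wv`, averaging across neighbourhoods), not the paper's actual NT architecture or its switchable-attention optimization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighbourhood_self_attention(X, adj, Wq, Wk, Wv):
    """Toy sketch: self-attention within each local neighbourhood.

    X   : (n, d) node features
    adj : (n, n) symmetric 0/1 adjacency matrix
    Wq, Wk, Wv : (d, d_out) assumed projection matrices
    Unlike message passing, updates are produced for every member of
    the neighbourhood, not only for the central node.
    """
    n, _ = X.shape
    out = np.zeros((n, Wv.shape[1]))
    counts = np.zeros(n)
    for v in range(n):
        hood = [v] + [u for u in range(n) if adj[v, u]]
        H = X[hood]                                   # neighbourhood features
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        A = softmax(Q @ K.T / np.sqrt(Wk.shape[1]))   # (k, k) attention weights
        Z = A @ V                                     # updates for all members
        for i, u in enumerate(hood):                  # a node collects one update
            out[u] += Z[i]                            # per neighbourhood it joins
            counts[u] += 1
    return out / counts[:, None]                      # average across neighbourhoods

# Tiny 4-node example graph
X = np.random.default_rng(0).standard_normal((4, 3))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]])
W = np.eye(3)
H_new = neighbourhood_self_attention(X, adj, W, W, W)  # shape (4, 3)
```

Because dissimilar neighbours attend to each other directly, the update for a node can depend on the similarity structure of its whole neighbourhood rather than on similarity to the centre alone, which is what makes the design monophily-aware rather than homophily-reliant.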
