Empowering Heterogeneous Graph Foundation Models via Decoupled Relation Alignment
Ziyu Zheng, Yaming Yang, Zhe Wang, Ziyu Guan, Wei Zhao
TLDR
DRSA enhances heterogeneous graph foundation models by decoupling feature semantics from relation structures, preventing "Type Collapse" and "Relation Confusion."
Key contributions
- Proposes Decoupled Relation Subspace Alignment (DRSA) for heterogeneous graph foundation models.
- Decouples feature semantics from relation structures to address "Type Collapse" and "Relation Confusion."
- Utilizes dual-relation subspace projection and feature-structure decoupled representation for alignment.
- Enhances cross-domain and few-shot knowledge transfer capabilities of existing GFMs.
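To make the "dual-relation subspace projection" idea above concrete, here is a minimal, hypothetical sketch: it aligns two node types of different feature dimensions along one relation by taking an SVD of their edge-weighted cross-covariance. The function name, the cross-covariance construction, and the rank parameter are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def relation_subspace_projections(X_s, X_t, A, rank):
    """Illustrative sketch (not DRSA's exact formulation): project source
    features X_s (n_s x d_s) and target features X_t (n_t x d_t), linked by a
    relation's adjacency A (n_s x n_t), into a shared rank-r relation subspace.
    """
    # Edge-weighted cross-covariance between the two feature spaces (d_s x d_t)
    C = X_s.T @ A @ X_t
    # Top singular directions give orthonormal projection bases for each type
    U, _, Vt = np.linalg.svd(C, full_matrices=False)
    P_s, P_t = U[:, :rank], Vt[:rank].T
    # Coordinates of both node types in the shared low-rank relation subspace
    return X_s @ P_s, X_t @ P_t
```

Both projected matrices have `rank` columns, so features of incompatible dimensions become directly comparable along that relation.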
Why it matters
Heterogeneous Graph Foundation Models face cross-type feature shifts and intra-domain relation gaps, which lead to "Type Collapse" and "Relation Confusion." DRSA offers a novel, plug-and-play solution that decouples feature semantics from relation structures, significantly improving the cross-domain and few-shot knowledge transfer of existing GFMs.
Original Abstract
While Graph Foundation Models (GFMs) have achieved remarkable success in homogeneous graphs, extending them to multi-domain heterogeneous graphs (MDHGs) remains a formidable challenge due to cross-type feature shifts and intra-domain relation gaps. Existing global feature alignment methods (e.g., PCA or SVD) blindly enforce a shared feature space, which distorts type-specific semantics and disrupts original topologies, inevitably leading to "Type Collapse" and "Relation Confusion". To address these fundamental limitations, we propose Decoupled Relation Subspace Alignment (DRSA), a novel, plug-and-play relation-driven alignment framework. DRSA fundamentally shifts the paradigm by decoupling feature semantics from relation structures. Specifically, it introduces a dual-relation subspace projection mechanism to explicitly coordinate cross-type interactions within a shared low-rank relation subspace. Furthermore, a feature-structure decoupled representation is designed to decompose aligned features into a semantic projection component and a structural residual term, adaptively absorbing intra-domain variations. Optimized via a stable alternating minimization strategy based on Block Coordinate Descent, DRSA constructs a well-calibrated, structure-aware latent space. Extensive experiments on multiple real-world benchmark datasets demonstrate that DRSA can be seamlessly integrated as a universal preprocessing module, significantly and consistently enhancing the cross-domain and few-shot knowledge transfer capabilities of state-of-the-art GFMs. The code is available at: https://github.com/zhengziyu77/DSRA.
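The abstract's feature-structure decoupling (features split into a semantic projection component plus a structural residual, optimized by Block Coordinate Descent) can be illustrated with a generic low-rank-plus-residual sketch. The rank-r truncated-SVD semantic part, the ridge-shrunken residual, and the closed-form block updates below are illustrative assumptions for exposition, not DRSA's actual objective.

```python
import numpy as np

def decoupled_align(X, rank=8, lam=1.0, n_iters=30):
    """Illustrative block-coordinate-descent sketch (not DRSA itself):
    decompose X into a rank-r "semantic" component L and a penalized
    "structural" residual R, minimizing ||X - L - R||^2 + lam * ||R||^2.
    """
    R = np.zeros_like(X)
    for _ in range(n_iters):
        # Block 1: with R fixed, the best rank-r L is a truncated SVD of X - R
        U, s, Vt = np.linalg.svd(X - R, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Block 2: with L fixed, the ridge penalty gives a closed-form shrinkage
        R = (X - L) / (1.0 + lam)
    return L, R
```

Each block update has a closed-form minimizer, so the objective decreases monotonically, which mirrors why BCD-style alternating minimization is a stable choice for this kind of decomposition.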