ArXiv TLDR

Multi-Faceted Continual Knowledge Graph Embedding for Semantic-Aware Link Prediction

2604.10947

Jing Qi, Yuxiang Wang, Zhiyuan Yu, Xiaoliang Xu, Yuanshi Zheng + 1 more

cs.IR

TLDR

MF-CKGE improves continual knowledge graph embedding by separating old and new knowledge and adaptively identifying relevant semantics for better link prediction.

Key contributions

  • Separates temporal old and new knowledge into distinct embedding spaces to prevent knowledge entanglement.
  • Uses semantic decoupling to reduce redundancy and improve space efficiency during offline learning.
  • Adaptively identifies semantically query-relevant entity embeddings to reduce noise in online inference.
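The idea behind the first and third contributions can be sketched as follows: keep one embedding table per snapshot (so old facets are never overwritten) and, at query time, weight each facet by its relevance to the query. This is a minimal illustrative sketch, not the authors' actual parameterization — the table layout, the dot-product scoring, and the softmax combination are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_snapshots = 8, 5, 3

# Hypothetical layout: one embedding table per snapshot, so
# earlier (old-knowledge) facets stay frozen while only the
# newest table would be trained.
facets = rng.normal(size=(n_snapshots, n_entities, dim))

def query_aware_embedding(entity_id, query_vec):
    """Combine an entity's per-snapshot facets, weighting each
    facet by its semantic relevance to the query (softmax over
    dot-product scores) -- a sketch of adaptive facet selection."""
    E = facets[:, entity_id, :]            # (n_snapshots, dim)
    scores = E @ query_vec                 # relevance score per facet
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over facets
    return weights @ E                     # query-aware embedding, (dim,)

q = rng.normal(size=dim)
emb = query_aware_embedding(2, q)
```

Facets with low relevance to the query receive near-zero weight, which is the intuition behind "reducing interference from query-irrelevant noise."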

Why it matters

Existing continual knowledge graph embedding methods struggle with evolving entity semantics, leading to inaccurate link predictions. MF-CKGE solves this by separating old and new knowledge and adaptively identifying relevant semantics, boosting prediction accuracy. This is crucial for robust, lifelong knowledge graph systems.

Original Abstract

Continual Knowledge Graph Embedding (CKGE) aims to continually learn embeddings for new knowledge, i.e., entities and relations, while retaining previously acquired knowledge. Most existing CKGE methods mitigate catastrophic forgetting via regularization or replaying old knowledge. They conflate new and old knowledge of an entity within the same embedding space to seek a balance between them. However, entities inherently exhibit multi-faceted semantics that evolve dynamically as their relational contexts change over time. A shared embedding fails to capture and distinguish these temporal semantic variations, degrading lifelong link prediction accuracy across snapshots. To address this, we propose a Multi-Faceted CKGE framework (MF-CKGE) for semantic-aware link prediction. During offline learning, MF-CKGE separates temporal old and new knowledge into distinct embedding spaces to prevent knowledge entanglement and employs semantic decoupling to reduce semantic redundancy, thereby improving space efficiency. During online inference, MF-CKGE adaptively identifies semantically query-relevant entity embeddings by quantifying their semantic importance, reducing interference from query-irrelevant noise. Experiments on eight datasets show that MF-CKGE achieves an average (maximum) improvement of 1.7% (2.7%) and 1.4% (3.8%) in MRR and Hits@10, respectively, over the best baseline. Our source code and datasets are available at: https://anonymous.4open.science/r/MF-CKGE-04E5.
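The abstract reports gains in MRR and Hits@10, the standard link-prediction metrics for knowledge graph embedding. Both are computed from the (1-based) rank of the true entity among all candidate entities for each test query — a short sketch:

```python
import numpy as np

def mrr_and_hits(ranks, k=10):
    """Mean Reciprocal Rank and Hits@k from 1-based ranks of the
    true entity among all candidates, one rank per test query."""
    ranks = np.asarray(ranks, dtype=float)
    mrr = float(np.mean(1.0 / ranks))       # average of 1/rank
    hits = float(np.mean(ranks <= k))       # fraction ranked in top k
    return mrr, hits

# Toy example: true entities ranked 1st, 3rd, 12th, and 2nd.
mrr, hits10 = mrr_and_hits([1, 3, 12, 2], k=10)
# mrr = (1 + 1/3 + 1/12 + 1/2) / 4 ≈ 0.479; hits10 = 3/4 = 0.75
```

A 1.7% average MRR improvement therefore means the true entity is, on average, ranked noticeably closer to the top across test queries.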
