STK-Adapter: Incorporating Evolving Graph and Event Chain for Temporal Knowledge Graph Extrapolation
Shuyuan Zhao, Wei Chen, Weijie Zhang, Xinrui Hou, Junfeng Shen, and 4 others
TLDR
STK-Adapter integrates evolving graphs and event chains into LLMs for temporal knowledge graph extrapolation, addressing information loss and feature dilution.
Key contributions
- Proposes STK-Adapter for TKG extrapolation by flexibly integrating graph encoders and LLMs.
- Uses a Spatial-Temporal MoE to capture TKG spatial structures and temporal patterns.
- Employs an Event-Aware MoE to model temporal semantic dependencies in event chains.
- Introduces a Cross-Modality Alignment MoE for deep TKG-guided cross-modality alignment.
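The paper does not include implementation details in this summary, but the adapter design above builds on the standard mixture-of-experts pattern: a gating network routes each input representation across a set of small expert networks, and their outputs are combined by the gate weights. The sketch below is a minimal, generic MoE layer in PyTorch illustrating that routing mechanism; the class and parameter names are illustrative and not taken from the STK-Adapter codebase.

```python
import torch
import torch.nn as nn


class MoELayer(nn.Module):
    """Minimal mixture-of-experts layer (illustrative, not the paper's code).

    A gating network produces soft routing weights over a set of small
    feed-forward experts; the layer output is the weight-averaged expert output.
    """

    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # routing scores per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, seq, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, seq, dim, E)
        # Combine expert outputs using the gate's soft routing weights.
        return torch.einsum("bse,bsde->bsd", weights, outs)


if __name__ == "__main__":
    layer = MoELayer(dim=32)
    y = layer(torch.randn(2, 5, 32))
    print(tuple(y.shape))  # (2, 5, 32)
```

In the STK-Adapter, three such MoE modules are specialized for different signals (spatial-temporal graph structure, event-chain semantics, and cross-modality alignment); the sketch shows only the shared gating-and-experts skeleton they presumably build on.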
Why it matters
This paper addresses critical challenges in temporal knowledge graph extrapolation by preventing information loss during LLM integration. The STK-Adapter significantly improves future event prediction, offering a robust solution for evolving knowledge graphs. Its strong generalization capabilities are valuable for real-world applications.
Original Abstract
Temporal Knowledge Graph (TKG) extrapolation aims to predict future events based on historical facts. Recent studies have attempted to enhance TKG extrapolation by integrating TKG's evolving structural representations and textual event chains into Large Language Models (LLMs). Yet, two main challenges limit these approaches: (1) the loss of essential spatial-temporal information due to shallow alignment between the TKG's evolving graph structural representation and the LLM's semantic space, and (2) the progressive dilution of the TKG's evolving structural features during LLM fine-tuning. To address these challenges, we propose the Spatial-Temporal Knowledge Adapter (STK-Adapter), which flexibly integrates the evolving graph encoder and the LLM to facilitate TKG reasoning. In STK-Adapter, a Spatial-Temporal MoE is designed to capture the spatial structures and temporal patterns inherent in TKGs. An Event-Aware MoE is employed to model intricate temporal semantic dependencies within event chains. In addition, a Cross-Modality Alignment MoE is proposed to facilitate deep cross-modality alignment via TKG-guided attention experts. Extensive experiments on benchmark datasets demonstrate that STK-Adapter significantly outperforms state-of-the-art methods and exhibits strong generalization capabilities in cross-dataset tasks. The code is available at https://github.com/Zhaoshuyuan0246/STK-Adapter.