ArXiv TLDR

DCGL: Dual-Channel Graph Learning with Large Language Models for Knowledge-Aware Recommendation

arXiv:2605.07314

Xinchi Zou, Tongzhenzhi Su, Jianjun Li, Yuan Fu, Chang Liu + 2 more

cs.IR · cs.AI

TLDR

DCGL uses dual-channel graph learning with LLMs for knowledge-aware recommendation, improving performance by decoupling semantics and behavior.

Key contributions

  • Introduces a dual-channel architecture to decouple semantic information from user behavioral patterns.
  • Employs multi-level contrastive learning to improve robustness against KG noise and to bridge the semantic gap between the two channels.
  • Utilizes a dynamic fusion mechanism to adaptively balance semantic generalization and behavioral specificity.
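The dual-channel decoupling and dynamic fusion described above can be illustrated with a minimal sketch. This is not DCGL's actual implementation; the gate parameters `k` and `c` and the sigmoid gating form are hypothetical, chosen only to show how a frequency-dependent weight can trade off semantic generalization against behavioral specificity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_fusion(sem_emb, beh_emb, interaction_count, k=0.1, c=20.0):
    """Blend the semantic-channel and behavior-channel embeddings per user.

    The gate weights the behavioral channel more heavily for active users
    (many interactions) and falls back on semantic generalization for
    sparse users. k (slope) and c (frequency midpoint) are hypothetical
    gate parameters, not values from the paper.
    """
    w_beh = sigmoid(k * (interaction_count - c))  # in (0, 1)
    return w_beh * beh_emb + (1.0 - w_beh) * sem_emb

# toy one-hot embeddings for a single user
sem = np.array([1.0, 0.0])   # LLM-derived semantic embedding
beh = np.array([0.0, 1.0])   # ID/behavioral embedding

cold = dynamic_fusion(sem, beh, interaction_count=2)     # sparse user
active = dynamic_fusion(sem, beh, interaction_count=80)  # active user
```

For the sparse user the semantic component dominates the fused vector, while for the active user the behavioral component does, which is the qualitative behavior the third contribution targets.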

Why it matters

This paper addresses critical limitations of KG-and-LLM-based recommendation by preventing signal interference between ID and LLM embeddings and by adapting to user interaction frequency. DCGL significantly improves performance in sparse-data scenarios while preserving precision for active users.

Original Abstract

Knowledge Graphs (KGs) have proven highly effective for recommendation systems by capturing latent item relationships, while recent integration of Large Language Models (LLMs) has further enhanced semantic understanding and addressed knowledge sparsity issues. Nevertheless, current KG-and-LLM-based methods still face three main limitations: 1) inadequate modeling of implicit semantic relationships beyond explicit KG links; 2) suboptimal single-channel fusion of ID and LLM embeddings, which often leads to signal interference and blurred representations; and 3) insufficient consideration of user-item interaction frequency variations in recommendation strategies. To address these challenges, we propose the Dual-Channel Graph Learning (DCGL) framework, featuring three key innovations: 1) a dual-channel architecture that structurally decouples rich semantic information from user behavioral patterns, preventing early interference; 2) a multi-level contrastive learning mechanism that enhances robustness against KG noise through intra-view contrasts and bridges semantic gaps between channels via inter-view alignment; and 3) a dynamic fusion mechanism that adaptively balances semantic generalization and behavioral specificity based on interaction frequency, resolving the cascading limitation. Extensive experiments on four real-world datasets show that DCGL consistently outperforms state-of-the-art methods, yielding substantial improvements in sparse scenarios while maintaining precision for active users. Our code is available at https://github.com/XinchiZou/DCGL.
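The abstract's second innovation, multi-level contrastive learning, rests on a contrastive objective that pulls matching representations together and pushes mismatched ones apart. The sketch below uses the standard InfoNCE loss as a stand-in; DCGL's exact losses, temperatures, and augmentation scheme may differ, and the embeddings here are synthetic.

```python
import numpy as np

def info_nce(anchor, positives, temperature=0.2):
    """InfoNCE loss over L2-normalized embeddings: row i of `anchor` is
    pulled toward row i of `positives` and pushed away from all other
    rows. The same objective shape underlies both intra-view contrasts
    (robustness to KG noise) and inter-view alignment (bridging the
    semantic gap between channels)."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

rng = np.random.default_rng(0)
sem_view = rng.normal(size=(8, 16))                    # semantic-channel embeddings
beh_view = sem_view + 0.1 * rng.normal(size=(8, 16))   # behavior channel, roughly aligned
loss_aligned = info_nce(sem_view, beh_view)
loss_random = info_nce(sem_view, rng.normal(size=(8, 16)))
```

When the two views are roughly aligned the loss is small; for unrelated embeddings it approaches log of the batch size, so minimizing it drives the channels toward agreement without fusing them early.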
