Intent Propagation Contrastive Collaborative Filtering
Haojie Li, Junwei Du, Guanfeng Liu, Feng Jiang, Yan Wang, et al.
TLDR
IPCCF improves collaborative filtering with a double-helix message propagation framework and contrastive learning for more accurate intent disentanglement.
Key contributions
- Employs a double-helix message propagation framework to extract deep semantic node information.
- Integrates comprehensive graph structure into disentanglement via intent message propagation.
- Uses contrastive learning for direct supervision, reducing bias and overfitting in disentanglement.
- Achieves superior recommendation performance on real-world datasets.
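The paper does not spell out the propagation rule in this digest, but the idea of folding graph structure into disentanglement can be illustrated in the style of disentangled GNNs: split each node embedding into K intent chunks and softly route every edge message over K learnable intent prototypes before aggregation. A minimal sketch (function and parameter names, the chunked-embedding design, and the softmax routing are all assumptions, not the authors' exact method):

```python
import torch
import torch.nn.functional as F

def intent_message_propagation(x, edge_index, intent_protos, temperature=1.0):
    """One layer of intent-weighted neighbor aggregation on an interaction graph.

    x:             (N, d) node embeddings, with d divisible by K
    edge_index:    (2, E) source/target node indices of the edges
    intent_protos: (K, d) learnable intent prototype vectors
    Each edge message is softly assigned to the K latent intents by its
    affinity to the prototypes, so graph structure (not just local features)
    shapes the disentangled representation.
    """
    N, d = x.shape
    K = intent_protos.size(0)
    src, dst = edge_index
    chunks = x.view(N, K, d // K)        # split embeddings into K intent chunks
    msgs = chunks[src]                   # (E, K, d/K) per-edge messages
    # soft intent assignment of each edge, from message/prototype affinity
    affinity = (x[src] @ intent_protos.t()) / temperature   # (E, K)
    weights = F.softmax(affinity, dim=1)
    # aggregate intent-weighted messages at each target node
    out = torch.zeros_like(chunks)
    out.index_add_(0, dst, weights.unsqueeze(-1) * msgs)
    return out.reshape(N, d)
```

Stacking such layers lets multi-hop structure inform each intent channel, which is the sense in which disentanglement moves beyond direct interactions.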
Why it matters
Existing disentanglement methods in collaborative filtering struggle with limited structural understanding and lack direct supervision, leading to inaccuracies and overfitting. IPCCF addresses these issues by integrating comprehensive graph structure and providing direct supervision through contrastive learning, which significantly enhances recommendation performance and model robustness.
Original Abstract
Disentanglement techniques used in collaborative filtering uncover interaction intents between nodes, improving the interpretability of node representations and enhancing recommendation performance. However, existing disentanglement methods still face two problems. First, they focus on local structural features derived from direct node interactions and overlook the comprehensive graph structure, which limits disentanglement accuracy. Second, the disentanglement process depends on backpropagation signals derived from recommendation tasks and lacks direct supervision, which may lead to biases and overfitting. To address these issues, we propose the Intent Propagation Contrastive Collaborative Filtering (IPCCF) algorithm. Specifically, we design a double helix message propagation framework to more effectively extract the deep semantic information of nodes, thereby improving the model's understanding of interactions between nodes. We also develop an intent message propagation method that incorporates graph structure information into the disentanglement process, thereby expanding the consideration scope of disentanglement. In addition, contrastive learning techniques are employed to align node representations derived from structure and intents, providing direct supervision for the disentanglement process, mitigating biases, and enhancing the model's robustness to overfitting. Experiments on three real data graphs illustrate the superiority of the proposed approach.
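The abstract's third component, aligning structure-derived and intent-derived node representations to supervise disentanglement directly, is a standard contrastive-alignment setup. A minimal sketch using a symmetric InfoNCE objective (the specific loss form and temperature are assumptions; the paper may use a different formulation):

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(z_struct, z_intent, temperature=0.2):
    """Symmetric InfoNCE loss aligning two views of the same nodes.

    z_struct: (N, d) structure-based node embeddings
    z_intent: (N, d) intent-based node embeddings
    The two views of each node form a positive pair; all other nodes in the
    batch act as negatives, giving the disentanglement a direct training
    signal instead of relying only on the recommendation loss.
    """
    z_struct = F.normalize(z_struct, dim=1)
    z_intent = F.normalize(z_intent, dim=1)
    logits = z_struct @ z_intent.t() / temperature   # (N, N) similarity matrix
    labels = torch.arange(z_struct.size(0), device=z_struct.device)
    # each view must identify its counterpart among all nodes in the batch
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In training, this term would be added to the recommendation objective with a weighting coefficient, which is how contrastive supervision mitigates bias and overfitting in the disentangled representations.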