Quality-Aware Collaborative Multi-Positive Contrastive Learning for Sequential Recommendation
TLDR
QCMP-CL introduces quality-aware collaborative multi-positive contrastive learning for sequential recommendation, improving view diversity and consistency.
Key contributions
- Introduces a learnable collaborative sequence augmentation module for diverse, intent-consistent views.
- Generates two augmented views using complementary same-target and similar-sequence collaborative contexts (see the sketch after this list).
- Designs a quality-aware mechanism to adaptively weight views based on augmentation operation confidence.
- Outperforms state-of-the-art contrastive learning baselines on three real-world datasets.
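To make the collaborative augmentation idea concrete, here is a minimal sketch, assuming an encoder `encode` that maps an item-ID sequence to a d-dimensional embedding. The retrieval of same-target and similar sequences and the interpolation step are illustrative assumptions, and names such as `collaborative_views` are hypothetical, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of building two collaborative views for one anchor sequence.
# `encode` is assumed to map an item-ID sequence to a 1-D embedding of size d;
# the retrieval and mixing details below are illustrative, not the paper's design.

def collaborative_views(seq, target_item, all_seqs, all_targets, encode, alpha=0.5):
    """Return (same-target view, similar-sequence view) for one anchor sequence."""
    h = encode(seq)  # anchor representation, shape (d,)

    # View 1: blend with a sequence whose user converged on the same target item.
    same_target = [s for s, t in zip(all_seqs, all_targets) if t == target_item]
    h_ctx = encode(same_target[0]) if same_target else h
    view_same_target = alpha * h + (1.0 - alpha) * h_ctx

    # View 2: blend with the most similar sequence in embedding space.
    cand = torch.stack([encode(s) for s in all_seqs])          # (N, d)
    sims = F.cosine_similarity(cand, h.unsqueeze(0), dim=-1)   # (N,)
    view_similar = alpha * h + (1.0 - alpha) * cand[sims.argmax()]

    return view_same_target, view_similar
```

The fixed interpolation coefficient `alpha` and the nearest-neighbour retrieval are stand-ins for the paper's learnable augmentation module, which is described as generating the views end to end.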
Why it matters
Existing contrastive learning methods for sequential recommendation suffer from low-quality, low-diversity views, which lead to semantic drift and false positives. QCMP-CL generates diverse yet intent-consistent views and adaptively weights them by estimated quality, improving both the effectiveness and robustness of sequential recommendation.
Original Abstract
The effectiveness of contrastive learning in sequential recommendation hinges on the construction of contrastive views, which ideally should be both semantically consistent and diverse. However, most existing CL-based methods rely on heuristic augmentations that are prone to removing crucial items or disrupting transition patterns, leading to semantic drift. While a few studies have explored learnable augmentations to improve view quality, they often suffer from limited diversity and still necessitate heuristic aids. Furthermore, the quality differences across views are rarely modeled explicitly and adaptively, aggravating the false-positive issue. To address these issues, we propose Quality-aware Collaborative Multi-Positive Contrastive Learning (QCMP-CL) for sequential recommendation. First, we introduce a learnable collaborative sequence augmentation module that generates two augmented views under two complementary collaborative contexts, one based on same-target sequences and the other on similar sequences, thereby enhancing view diversity while preserving intent consistency. Second, we design a quality-aware mechanism, tightly integrated into the model representations, which estimates each view's quality from the confidence of its augmentation operations and assigns adaptive weights to ensure that high-confidence views contribute more supervision while low-confidence ones contribute less. Extensive experiments on three real-world datasets demonstrate that QCMP-CL outperforms state-of-the-art CL-based sequential recommendation baselines.
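As a rough sketch of how the quality-aware weighting could interact with a multi-positive contrastive objective, the snippet below computes an InfoNCE-style loss over several augmented views and scales each view's term by a confidence weight. The tensor shapes, the softmax over confidences, and the name `quality_weighted_multi_positive_loss` are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def quality_weighted_multi_positive_loss(anchor, views, confidences, temperature=0.1):
    """
    anchor:      (B, d)    sequence representations
    views:       (V, B, d) V augmented views per sequence (positives)
    confidences: (V, B)    estimated quality of each view's augmentation
    """
    anchor = F.normalize(anchor, dim=-1)
    views = F.normalize(views, dim=-1)

    # Adaptive weights: higher-confidence views supervise more (softmax is an assumption).
    weights = torch.softmax(confidences, dim=0)                       # (V, B)

    loss = 0.0
    for v in range(views.size(0)):
        logits = anchor @ views[v].T / temperature                    # (B, B)
        labels = torch.arange(anchor.size(0), device=anchor.device)
        per_seq = F.cross_entropy(logits, labels, reduction="none")   # (B,)
        loss = loss + (weights[v] * per_seq).mean()
    return loss
```

In this sketch the off-diagonal entries of each `logits` matrix act as in-batch negatives, so a low-confidence view both supervises less as a positive and contributes a smaller share of the overall gradient.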