A Gated Hybrid Contrastive Collaborative Filtering Recommendation
Eduardo Ferreira da Silva, Mayki dos Santos Oliveira, Joel Machado Pires, Denis Dantas Boaventura, Maycon Maciel Peixoto + 4 more
TLDR
This paper introduces a Gated Hybrid Contrastive Collaborative Filtering framework that uses review text and contrastive learning to improve top-N recommendation ranking.
Key contributions
- Proposes Gated Hybrid Contrastive Collaborative Filtering for top-N recommendations.
- Integrates review semantics layer-wise using an adaptive gating mechanism.
- Employs contrastive learning to align semantic and collaborative representations.
- Optimizes ranking with a pairwise Bayesian personalized ranking objective.
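The adaptive gating idea in the contributions above can be illustrated with a minimal sketch: a learned per-dimension gate blends the collaborative hidden state with review-derived (topic/text) features at a given encoder layer. This is an assumption-laden illustration, not the paper's exact architecture; `gated_fusion`, `W_g`, and `b_g` are hypothetical names.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_cf, h_text, W_g, b_g):
    """Adaptive gate (sketch): compute a per-dimension gate g in (0, 1)
    from both signals, then interpolate between the collaborative hidden
    state and the review-derived features."""
    g = sigmoid(np.concatenate([h_cf, h_text]) @ W_g + b_g)
    return g * h_cf + (1.0 - g) * h_text

# Toy example with random parameters (hypothetical dimensions).
rng = np.random.default_rng(0)
d = 8
h_cf = rng.standard_normal(d)           # collaborative embedding at this layer
h_text = rng.standard_normal(d)         # topic/text feature vector
W_g = rng.standard_normal((2 * d, d)) * 0.1
b_g = np.zeros(d)
fused = gated_fusion(h_cf, h_text, W_g, b_g)
print(fused.shape)
```

Because the gate is a convex combination per dimension, the fused vector always lies between the two input signals, which is what lets the model dynamically balance collaborative and semantic evidence.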
Why it matters
Existing review-aware recommenders often prioritize rating prediction over ranking quality, which limits their effectiveness in top-N scenarios. This work closes that gap by optimizing the ranking objective directly, yielding more relevant top-N recommendations and demonstrating the value of controlled semantic fusion in review-aware models.
Original Abstract
Recommender systems increasingly incorporate textual reviews to enrich user and item representations. However, most review-aware models remain optimized for rating prediction rather than ranking quality. This misalignment limits their effectiveness in top-N recommendation scenarios, where discriminative ranking is essential. To address this gap, we propose a Gated Hybrid Collaborative Filtering framework that integrates review-derived representations into an autoencoder-based collaborative model. The architecture injects semantic signals layer-wise through an adaptive gating mechanism that dynamically balances collaborative embeddings and topic-based features during encoding. To further refine the latent space, we introduce a contrastive learning module that aligns semantic and collaborative signals. We evaluate the framework across five distinct configurations: Pure collaborative; Topic and Gated; Text and Gated; and the addition of contrastive objectives (Contrastive and Topic, and Contrastive and Text). To explicitly optimize ranking behavior, the model is trained with a pairwise Bayesian personalized ranking objective, which promotes separation between relevant and non-relevant items in the latent space. Experiments on Amazon Movies & TV, IMDb, and Rotten Tomatoes demonstrate consistent improvements in hit rate @10 and normalized discounted cumulative gain @10 over state-of-the-art review-aware baselines. Results highlight the importance of controlled semantic fusion for ranking-driven recommendation.
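The two training objectives named in the abstract, the pairwise Bayesian personalized ranking (BPR) loss and the contrastive alignment between semantic and collaborative representations, can be sketched as follows. This is a generic illustration under standard formulations (BPR as `-log sigmoid` of the score gap; alignment as an InfoNCE-style loss), not the paper's exact implementation; function names and the temperature `tau` are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """Pairwise BPR objective: push score(u, pos) above score(u, neg),
    promoting separation between relevant and non-relevant items."""
    gap = user_emb @ pos_item_emb - user_emb @ neg_item_emb
    return -np.log(sigmoid(gap) + 1e-12)

def contrastive_align(z_sem, z_cf, tau=0.2):
    """InfoNCE-style alignment (sketch): rows of z_sem and z_cf for the
    same user/item are positives; all other rows act as negatives."""
    a = z_sem / np.linalg.norm(z_sem, axis=1, keepdims=True)
    b = z_cf / np.linalg.norm(z_cf, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                        # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # cross-entropy on diagonal

# Toy batch with random embeddings (hypothetical dimensions).
rng = np.random.default_rng(1)
u, pos, neg = rng.standard_normal((3, 16))
loss_rank = bpr_loss(u, pos, neg)
z_sem, z_cf = rng.standard_normal((2, 4, 16))
loss_align = contrastive_align(z_sem, z_cf)
print(float(loss_rank), float(loss_align))
```

In a combined setup, the two terms would typically be summed with a weighting coefficient, so the ranking loss shapes item separation while the contrastive term keeps the semantic and collaborative views of the same entity close in the latent space.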