CAST: Modeling Semantic-Level Transitions for Complementary-Aware Sequential Recommendation
Qian Zhang, Lech Szymanski, Haibo Zhang, Jeremiah D. Deng
TLDR
CAST models semantic-level transitions and injects LLM-verified priors to improve sequential recommendation by capturing true complementary relations.
Key contributions
- Introduces CAST, a framework for sequential recommendation using semantic-level transitions.
- Models dynamic transitions directly in discrete semantic code space to capture fine-grained dependencies.
- Injects LLM-verified complementary priors into attention, prioritizing true complementarity.
- Achieves significant performance gains (up to 17.6% in Recall and 16.0% in NDCG) with 65x training acceleration.
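To make the first two contributions concrete, here is a minimal sketch of what "modeling transitions in discrete semantic code space" could look like. The item names and the `code_book` mapping are invented for illustration; the paper's actual tokenizer and transition module are not specified here.

```python
# Hedged sketch: represent a user's item sequence at the semantic-code level
# instead of as aggregated item embeddings. The code_book below is a made-up
# stand-in for a learned quantizer that maps each item to discrete codes.
code_book = {
    "phone":   (12, 7, 3),
    "case":    (12, 9, 1),
    "charger": (5, 9, 3),
}

def to_code_sequence(item_sequence):
    """Flatten an item-level sequence into a code-level sequence, so a
    sequence model can attend to individual semantic codes rather than a
    single coarse representation per item."""
    return [code for item in item_sequence for code in code_book[item]]

print(to_code_sequence(["phone", "case"]))  # [12, 7, 3, 12, 9, 1]
```

A downstream transformer run over this longer code-level sequence can then learn transitions between specific codes (e.g. shared attribute codes across a phone and its case), which an item-level model would collapse away.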
Why it matters
This paper addresses a key limitation in sequential recommendation by accurately identifying true complementary item relations. By leveraging fine-grained semantic transitions and LLM-verified priors, CAST significantly improves prediction accuracy and efficiency. This advancement could lead to more relevant and personalized recommendations in e-commerce.
Original Abstract
Sequential Recommendation (SR) aims to predict the next interaction of a user based on their behavior sequence, where complementary relations often provide essential signals for predicting the next item. However, mainstream models relying on sparse co-purchase statistics often mistake spurious correlations (e.g., due to popularity bias) for true complementary relations. Identifying true complementary relations requires capturing the fine-grained item semantics (e.g., specifications) that simple co-occurrence statistics would be unable to model. While recent semantics-based methods utilize discrete semantic codes to represent items, they typically aggregate semantic codes into coarse item representations. This aggregation process blurs specific semantic details required to identify complementarity. To address these critical limitations and effectively leverage semantics for capturing reliable complementary relations, we propose a Complementary-Aware Semantic Transition (CAST) framework that introduces a new modeling paradigm built upon semantic-level transitions. Specifically, a semantic-level transition module is designed to model dynamic transitions directly in the discrete semantic code space, effectively capturing fine-grained semantic dependencies often lost in aggregated item representations. Then, a complementary prior injection module is designed to incorporate LLM-verified complementary priors into the attention mechanism, thereby prioritizing complementary patterns over co-occurrence statistics. Experiments on multiple e-commerce datasets demonstrate that CAST consistently outperforms the state-of-the-art approaches, achieving up to 17.6% Recall and 16.0% NDCG gains with 65x training acceleration. This validates its effectiveness and efficiency in uncovering latent item complementarity beyond statistics. The code will be released upon acceptance.
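The abstract's "complementary prior injection" can be read as adding a prior score to the attention logits before the softmax. The sketch below shows that generic mechanism; the `prior` matrix here is random-plus-handcrafted for illustration, whereas in CAST it would come from LLM-verified complementary relations, and the exact injection design is the paper's, not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prior_biased_attention(Q, K, V, prior):
    """Scaled dot-product attention with an additive prior bias.
    prior[i, j] is a (hypothetical) complementarity score between
    positions i and j; adding it before the softmax shifts attention
    mass toward verified complementary pairs rather than pairs that
    merely co-occur often."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + prior
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
L, d = 4, 8
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
prior = np.zeros((L, L))
prior[0, 2] = 2.0  # pretend positions 0 and 2 are verified complements
out = prior_biased_attention(Q, K, V, prior)
print(out.shape)  # (4, 8)
```

Because the bias is additive in logit space, a zero prior recovers standard attention, so the injection degrades gracefully when no verified relations are available.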