ArXiv TLDR

RoTE: Coarse-to-Fine Multi-Level Rotary Time Embedding for Sequential Recommendation

arXiv: 2604.13389

Haolin Zhang, Longtao Xiao, Guohao Cai, Ruixuan Li, Xiu Li

cs.IR

TLDR

RoTE enhances sequential recommendation by explicitly modeling multi-level time spans between interactions, improving how models capture users' temporal dynamics.

Key contributions

  • Models multi-level time spans in sequential recommendation, addressing a key limitation of order-only timestamp handling.
  • Decomposes interaction timestamps into coarse-to-fine temporal granularities.
  • Integrates these multi-level temporal representations into item embeddings (see the sketch after this list).
  • Provides a lightweight, plug-and-play module for Transformer-based models, improving NDCG@5 by up to 20.11%.
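The decomposition described above can be pictured with a short sketch. The digest does not give implementation details, so the granularity periods, the per-level embedding slicing, and the RoPE-style frequency scaling below are illustrative assumptions, and MultiLevelRotaryTimeEmbedding is a hypothetical name; only the coarse-to-fine timestamp decomposition and its injection into item embeddings come from the summary above.

    import torch
    import torch.nn as nn

    class MultiLevelRotaryTimeEmbedding(nn.Module):
        """Hypothetical sketch: each timestamp is decomposed into several
        granularities (coarse -> fine), and each granularity rotates its own
        slice of the item embedding, RoPE-style. Periods are illustrative,
        not the paper's exact choices."""

        def __init__(self, dim: int, periods=(30 * 86400.0, 86400.0, 3600.0)):
            super().__init__()
            assert dim % (2 * len(periods)) == 0, "need an even slice per level"
            self.periods = periods
            self.slice = dim // len(periods)  # embedding slice per granularity
            # One bank of inverse frequencies per level, scaled by its period,
            # so coarse levels rotate slowly and fine levels rotate quickly.
            half = torch.arange(0, self.slice, 2).float() / self.slice
            for i, p in enumerate(periods):
                self.register_buffer(f"inv_freq_{i}", (10000.0 ** -half) / p)

        def forward(self, item_emb, timestamps):
            # item_emb: (batch, seq_len, dim); timestamps: (batch, seq_len), seconds.
            # Real code might offset timestamps (e.g., by the sequence start)
            # to keep float32 angles precise; kept raw here for brevity.
            t = timestamps[..., None].float()
            out = []
            for i in range(len(self.periods)):
                x = item_emb[..., i * self.slice:(i + 1) * self.slice]
                angles = t * getattr(self, f"inv_freq_{i}")
                cos, sin = angles.cos(), angles.sin()
                x1, x2 = x[..., 0::2], x[..., 1::2]
                # Pairwise 2D rotation of the slice by the level's time angle.
                rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
                out.append(rotated.flatten(-2))
            return torch.cat(out, dim=-1)  # same shape as item_emb

    # Usage: drop-in before the backbone; output shape matches the input.
    rote = MultiLevelRotaryTimeEmbedding(dim=96)
    items = torch.randn(2, 50, 96)                            # item embeddings
    ts = torch.randint(1_600_000_000, 1_700_000_000, (2, 50)) # unix timestamps
    out = rote(items, ts)                                     # (2, 50, 96)

Because the output has the same shape as the input, a backbone consumes it exactly as it consumes ordinary item embeddings, which is what makes this kind of module plug-and-play.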

Why it matters

Existing sequential recommenders consider only the order of interactions and overlook the actual time spans between them, hindering accurate modeling of user interest evolution. RoTE explicitly models multi-level temporal dynamics, significantly boosting recommendation accuracy while remaining a plug-and-play addition to existing backbones.

Original Abstract

Sequential recommendation models have been widely adopted for modeling user behavior. Existing approaches typically construct user interaction sequences by sorting items according to timestamps and then model user preferences from historical behaviors. While effective, such a process only considers the order of temporal information but overlooks the actual time spans between interactions, resulting in a coarse representation of users' temporal dynamics and limiting the model's ability to capture long-term and short-term interest evolution. To address this limitation, we propose RoTE, a novel multi-level temporal embedding module that explicitly models time span information in sequential recommendation. RoTE decomposes each interaction timestamp into multiple temporal granularities, ranging from coarse to fine, and incorporates the resulting temporal representations into item embeddings. This design enables models to capture heterogeneous temporal patterns and better perceive temporal distances among user interactions during sequence modeling. RoTE is a lightweight, plug-and-play module that can be seamlessly integrated into existing Transformer-based sequential recommendation models without modifying their backbone architectures. We apply RoTE to several representative models and conduct extensive experiments on three public benchmarks. Experimental results demonstrate that RoTE consistently enhances the corresponding backbone models, achieving up to a 20.11% improvement in NDCG@5, which confirms the effectiveness and generality of the proposed approach. Our code is available at https://github.com/XiaoLongtaoo/RoTE.
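The abstract stresses that RoTE integrates into existing Transformer-based recommenders without modifying their backbone architectures. A minimal sketch of what that integration could look like, assuming the time-embedding module sketched earlier and a hypothetical backbone interface that consumes a (batch, seq_len, dim) embedding sequence:

    import torch.nn as nn

    class RoTEWrapped(nn.Module):
        """Hypothetical wrapper: rotate item embeddings by their timestamps,
        then call the unmodified backbone (any Transformer-based sequential
        recommender with a compatible forward signature)."""

        def __init__(self, backbone: nn.Module, dim: int):
            super().__init__()
            self.backbone = backbone                        # untouched architecture
            self.rote = MultiLevelRotaryTimeEmbedding(dim)  # sketch from above

        def forward(self, item_emb, timestamps, **backbone_kwargs):
            # The backbone sees a tensor of the usual shape; the time
            # embedding changes only the values, not the interface.
            return self.backbone(self.rote(item_emb, timestamps), **backbone_kwargs)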
