Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling
Hailing Cheng, Daqi Sun, Xinyu Lu
TLDR
This paper introduces SIREN-RoPE, a novel approach that treats the rotation manifold in Rotary Positional Embeddings as a learnable, signal-conditioned space, improving sequential modeling.
Key contributions
- Proposes treating RoPE's rotation manifold as a learnable, signal-conditioned space for attention mechanisms.
- Introduces SIREN-RoPE, using a dual-branch SIREN to encode diverse temporal and semantic signals.
- Demonstrates consistent improvements in ranking and calibration on a production-scale news feed dataset.
- Highlights the rotation space as an untapped dimension for attention architectures.
Why it matters
This paper reframes Rotary Positional Embeddings by making the rotation space itself learnable and signal-conditioned, unlocking a second dimension of expressivity in Transformer attention. The concrete instantiation, SIREN-RoPE, yields consistent gains in ranking and calibration on a production-scale sequential-modeling task with negligible computational overhead, offering a fresh perspective on attention mechanisms.
Original Abstract
Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted upon by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure, populated only by discrete ordinal indices. We argue that this rotation space is a largely overlooked second dimension of expressivity in the attention mechanism, one whose systematic exploration may open a new door for attention-based architectures. The analogy to complex numbers is instructive: just as introducing the imaginary axis -- orthogonal to and independent of the real line -- unlocked new algebraic structure once believed impossible, treating the rotation manifold as a learnable, signal-conditioned space opens an orthogonal degree of freedom in attention. In this framing, the token embedding encodes the semantic (real) component of a representation -- what a token means -- while the rotation encodes its dynamic (imaginary) component -- how it relates to every other token across time, position, and context. We introduce SIREN-RoPE, a concrete instantiation of this idea, which populates the rotation dimension with heterogeneous signals -- continuous timestamps, cyclical temporal patterns, and categorical metadata -- via a dual-branch Sinusoidal Representation Network (SIREN). As a proof of concept, we evaluate on a production-scale news feed dataset from a major social network using a generative recommender as the ranking model, demonstrating that activating this hidden dimension yields consistent improvements across calibration and ranking objectives with negligible computational overhead. We invite the community to view the rotation space not as a solved positional-encoding detail, but as an untapped axis whose rich structure may prove as consequential for attention as the imaginary unit proved for algebra.
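The core mechanism described in the abstract, a sine-activated network (SIREN) that maps heterogeneous signals to rotation angles, which are then applied to query/key vectors RoPE-style, can be sketched compactly. The sketch below is an illustration under stated assumptions, not the paper's implementation: the input featurization (`signal`), layer sizes, and the single concatenated branch standing in for the paper's dual-branch design are all hypothetical choices made for brevity.

```python
import numpy as np

def siren_mlp(x, weights, omega_0=30.0):
    """Sine-activated MLP (SIREN): hidden layers use sin(omega_0 * (W h + b))."""
    h = x
    for W, b in weights[:-1]:
        h = np.sin(omega_0 * (h @ W + b))
    W, b = weights[-1]
    return h @ W + b  # linear output layer: per-pair rotation angles

def apply_rotation(x, theta):
    """Rotate consecutive dimension pairs of x by angles theta (RoPE-style)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(theta), np.sin(theta)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
d_model, d_signal, hidden = 8, 3, 16

def init(fan_in, fan_out):
    # SIREN-style uniform init scaled by fan-in
    return (rng.uniform(-1, 1, (fan_in, fan_out)) / fan_in, np.zeros(fan_out))

weights = [init(d_signal, hidden), init(hidden, d_model // 2)]

# Hypothetical signal vector: a scaled continuous timestamp, a cyclical
# (time-of-day) feature, and a categorical indicator, concatenated.
t = 1234.5
signal = np.array([t / 1e4, np.sin(2 * np.pi * (t % 24.0) / 24.0), 1.0])

theta = siren_mlp(signal, weights)   # (d_model/2,) learned rotation angles
q = rng.standard_normal(d_model)
q_rot = apply_rotation(q, theta)     # rotated query, as in standard RoPE
```

Because each angle drives a plain 2D rotation, the transform is norm-preserving just like vanilla RoPE; the only change is that the angles come from a learned, signal-conditioned network rather than a fixed function of the ordinal position.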