ArXiv TLDR

CapsID: Soft-Routed Variable-Length Semantic IDs for Generative Recommendation

arXiv: 2605.05096

Wenzhuo Cheng, Menghang Gong, Qixin Guo, Hang Zheng, Zhaobin Yang + 2 more

cs.IR

TLDR

CapsID introduces soft-routed, variable-length Semantic IDs for generative recommendation, significantly improving recall and efficiency over existing methods.

Key contributions

  • Replaces hard residual quantization with capsule routing for more robust item semantics.
  • Introduces variable-length SIDs that terminate based on active capsule confidence.
  • SemanticBPE composes SID tokens into subwords using co-occurrence and embedding compatibility.
  • Achieves a 9.6% average Recall@10 improvement over the strongest single-representation baseline and runs at 51% of a sparse-dense system's inference latency.
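The first two contributions can be illustrated with a minimal sketch. This is not the paper's implementation; the capsule layout, softmax routing, and threshold value are all assumptions made for illustration. The key ideas it shows are that each layer routes the residual to *all* capsules probabilistically, subtracts the weighted (routed) reconstruction instead of a single winning code, and stops emitting SID tokens once the dominant routing weight is confident enough:

```python
import numpy as np

def soft_route_sid(item_vec, capsules, tau=0.9):
    """Sketch of soft-routed, variable-length SID assignment.

    item_vec: (d,) item embedding.
    capsules: (num_layers, K, d) capsule centroids per layer
              (hypothetical layout, not from the paper).
    tau: confidence threshold for early termination (assumed).
    """
    residual = item_vec.astype(float).copy()
    sid = []
    for layer_caps in capsules:                    # layer_caps: (K, d)
        logits = layer_caps @ residual             # routing logits per capsule
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()                   # softmax routing weights
        sid.append(int(weights.argmax()))          # emit dominant capsule id
        residual -= weights @ layer_caps           # subtract routed reconstruction,
                                                   # not just the winning code
        if weights.max() >= tau:                   # confidence-driven length:
            break                                  # stop early when routing is sure
    return sid
```

Because the residual update blends several capsules, an item sitting near a cluster boundary is not forced into a single hard cell, which is the failure mode the paper attributes to hard residual quantization.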

Why it matters

This paper tackles a core bottleneck in generative recommendation by enhancing the tokenizer. CapsID uses soft-routed, variable-length Semantic IDs for more nuanced and efficient item representation. This improves recommendation quality and significantly cuts inference costs, making generative recommenders more practical.

Original Abstract

Generative recommendation maps each item to a sequence of Semantic IDs (SIDs) and recasts retrieval as autoregressive token generation. In this paradigm the main bottleneck is the tokenizer rather than the Transformer: residual vector quantization with a hard nearest-neighbor assignment at every layer collapses multi-faceted item semantics at cluster boundaries and propagates early errors to later SID positions. A common workaround is to append a dense vector or attribute prefix to the SID, but this dual-representation design inflates inference cost and gives up the simplicity of a generative interface. We address the bottleneck at the tokenizer itself. CAPSID replaces hard residual quantization with capsule routing: at each layer an item probabilistically routes to several semantic capsules, the residual is updated by the routed reconstruction rather than by a single winning code, and the SID terminates once the active capsule's confidence is high enough. On top of CAPSID, SEMANTICBPE composes adjacent SID tokens into reusable subwords by combining their co-occurrence with their embedding compatibility. On Amazon Beauty, Sports, Toys, and a 35M-item proprietary industrial catalog, CAPSID+SEMANTICBPE improves Recall at 10 by 9.6% on average over ReSID, the strongest single-representation baseline, and matches or exceeds a COBRA-style sparse-dense system on every public benchmark while running at 51% of its inference latency. Ablations show that soft routing, iterative agreement, and confidence-driven length each contribute independently, and the gains are largest on tail items where boundary semantics dominate.
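The SemanticBPE step can be sketched as a BPE-style merge loop whose pair score combines co-occurrence with embedding compatibility. The specific scoring formula, the `alpha` mixing weight, and the mean-embedding rule for merged tokens are assumptions for illustration, not the paper's definitions:

```python
import numpy as np
from collections import Counter

def semantic_bpe_merge(corpus, token_emb, alpha=0.5, num_merges=10):
    """Sketch of a SemanticBPE-style merge loop.

    corpus: list of SID-token sequences.
    token_emb: dict mapping token -> embedding (np.ndarray).
    Scores each adjacent pair by a weighted sum of its normalized
    co-occurrence frequency and the cosine similarity of the two
    token embeddings (assumed combination rule), then greedily
    merges the best-scoring pair into a reusable subword.
    """
    vocab = dict(token_emb)
    seqs = [list(s) for s in corpus]
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for s in seqs:
            pair_counts.update(zip(s, s[1:]))
        if not pair_counts:
            break
        total = sum(pair_counts.values())

        def score(pair):
            a, b = pair
            cooc = pair_counts[pair] / total                 # co-occurrence term
            ea, eb = vocab[a], vocab[b]
            compat = float(ea @ eb) / (np.linalg.norm(ea) *  # embedding
                                       np.linalg.norm(eb) + 1e-8)  # compatibility
            return alpha * cooc + (1 - alpha) * compat

        best = max(pair_counts, key=score)
        merged = best[0] + "+" + best[1]
        vocab[merged] = (vocab[best[0]] + vocab[best[1]]) / 2  # assumed rule
        merges.append(best)
        # Rewrite every sequence with the new subword token.
        new_seqs = []
        for s in seqs:
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and (s[i], s[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges, seqs
```

Relative to plain BPE, the embedding-compatibility term prevents merging pairs that merely co-occur by chance but are semantically unrelated, which is the stated motivation for combining the two signals.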
