Every Preference Has Its Strength: Injecting Ordinal Semantics into LLM-Based Recommenders
Jiwon Jeong, Donghee Han, Sungrae Hong, Woosung Kang, Mun Yong Yi
TLDR
OSA is a new LLM-based recommender framework that injects ordinal preference strength into collaborative filtering signals, improving fine-grained recommendations.
Key contributions
- Identifies that existing CF-LLM models collapse explicit ratings into implicit or positive-only feedback, discarding ordinal preference strength.
- Introduces Ordinal Semantic Anchoring (OSA) to explicitly model preference strength.
- Uses numeric textual tokens as semantic anchors to align LLM latent space representations.
- Outperforms existing baselines across multiple real-world datasets, particularly in pairwise preference evaluation.
Why it matters
Current LLM recommenders often ignore the strength of user preferences, leading to less accurate suggestions. This paper introduces a novel framework that effectively integrates fine-grained preference strength, significantly improving recommendation quality and user experience.
Original Abstract
Recent work has shown that large language models (LLMs) can enhance recommender systems by integrating collaborative filtering (CF) signals through hybrid prompting. However, most existing CF-LLM frameworks collapse explicit ratings into implicit or positive-only feedback, discarding the ordinal structure that conveys fine-grained preference strength. As a result, these models struggle to exploit graded semantics and nuanced preference distinctions. We propose Ordinal Semantic Anchoring (OSA), a hybrid CF-LLM framework that explicitly incorporates preference strength by modeling interaction-level user feedback. OSA represents ordinal preference levels as numeric textual tokens and uses their token embeddings as semantic anchors to align user-item interaction representations in the LLM latent space. Through strength-aware alignment across ordinal levels, OSA preserves preference semantics when integrating collaborative signals with LLMs. Experiments on multiple real-world datasets demonstrate that OSA consistently outperforms existing baselines, particularly in pairwise preference evaluation, highlighting its effectiveness in modeling fine-grained user preferences over prior CF-LLM methods.
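The abstract describes the core mechanism at a high level: ordinal rating levels are rendered as numeric textual tokens, their token embeddings serve as semantic anchors, and user-item interaction representations are aligned to the anchor of their observed rating. The sketch below illustrates this idea with a generic contrastive alignment objective. It is not the authors' implementation: the anchor embeddings, dimensionality, and softmax-over-cosine loss are all assumptions made for illustration (in OSA the anchors would come from the LLM's token embedding table).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size; real LLM embeddings are far larger

# Hypothetical anchors: one embedding per ordinal rating token "1".."5".
# In OSA these would be the LLM's token embeddings; here they are random.
anchors = {r: rng.normal(size=DIM) for r in range(1, 6)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def anchoring_loss(interaction_vec, rating):
    """Cross-entropy over cosine similarities to all five anchors:
    pulls the interaction representation toward the anchor of its
    observed rating and away from the other ordinal levels."""
    sims = np.array([cosine(interaction_vec, anchors[r]) for r in range(1, 6)])
    logits = np.exp(sims - sims.max())  # stable softmax
    probs = logits / logits.sum()
    return -np.log(probs[rating - 1])

# A representation already at its own anchor incurs a lower loss
# than one sitting at a different ordinal level's anchor.
matched = anchoring_loss(anchors[4], rating=4)
mismatched = anchoring_loss(anchors[2], rating=4)
print(matched < mismatched)
```

Training on this kind of objective across all ordinal levels is what the abstract calls strength-aware alignment: interactions with rating 4 cluster near the "4" token's embedding rather than being merged into a single "positive" region.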