ArXiv TLDR

PRISM: Refracting the Entangled User Behavior Space for E-Commerce Search

arXiv: 2605.07296

Haoqian Zhang, Ziyuan Yang, Yi Zhang

cs.IR

TLDR

PRISM disentangles user preference and item relevance in e-commerce search by explicitly modeling their interaction, improving robustness and semantic consistency.

Key contributions

  • Explicitly models interaction between user preference and item relevance.
  • Introduces a preference rectification module for robust preference estimation.
  • Uses LLM-driven semantic anchoring to calibrate relevance representations.
  • Adaptive evidence routing aggregates multi-source signals for context-aware relevance.
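The preference rectification idea can be sketched as an iterative, relevance-gated refinement of a user preference vector. This is a hypothetical illustration only: the function name, update rule, and learning-rate parameter below are assumptions, since the summary does not specify the paper's actual formulation.

```python
import numpy as np

def rectify_preference(pref, item_embs, relevance, steps=3, lr=0.5):
    """Iteratively refine a user preference vector under relevance-aware
    constraints: pull the preference toward a relevance-weighted centroid
    of interacted items, damping noisy behavioral evidence.

    Hypothetical sketch; PRISM's real module is a learned component.
    """
    for _ in range(steps):
        # Relevance-weighted centroid of the items the user interacted with.
        w = relevance / (relevance.sum() + 1e-8)
        target = w @ item_embs
        # Gated update: move the preference toward the trusted centroid.
        pref = (1 - lr) * pref + lr * target
    # Return a unit-norm preference representation.
    return pref / (np.linalg.norm(pref) + 1e-8)
```

With a strongly relevant item, repeated updates pull the preference vector toward that item's embedding while discounting interactions the relevance signal distrusts.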

Why it matters

E-commerce search suffers from entangled user behavior signals, which introduce confounding effects and semantic misalignment. PRISM addresses this by robustly modeling preference-relevance interactions, yielding more accurate and semantically consistent search results and improving the reliability of downstream ranking models.

Original Abstract

E-commerce search systems rely on modeling user behavior to estimate item relevance and user preference, which are typically assumed to be stable and independently learnable signals. However, in practice, user interactions are jointly shaped by exposure mechanisms, feedback loops, and semantic matching, leading to entangled and dynamically drifting behavioral signals. As a result, both preference estimation and relevance modeling suffer from confounding effects and semantic misalignment, which limits the robustness of downstream ranking models. To address this issue, we propose PRISM, a Preference-Relevance Interaction Semantic Modeling framework for e-commerce search behavior prediction. PRISM explicitly models the interaction between user preference and item relevance rather than treating them as independent components. Specifically, it introduces a preference rectification module to iteratively refine user preference under relevance-aware constraints, improving robustness against behavioral confounding. To ensure semantic consistency, we further incorporate a large language model (LLM)-driven semantic anchoring mechanism that leverages positive and negative prototypes to calibrate relevance representations. Finally, a preference-conditioned evidence routing module adaptively aggregates multi-source behavioral signals, enabling context-aware and preference-aligned relevance estimation. Extensive experiments on two public e-commerce benchmarks demonstrate that PRISM consistently outperforms strong baselines, validating the effectiveness of explicitly modeling preference-relevance interaction for robust and semantically grounded search behavior modeling.
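The preference-conditioned evidence routing step described in the abstract can be illustrated as a gate that weights each behavioral signal source by its affinity with the user preference before aggregating. This is a simplified stand-in: the dot-product gate and function names below are assumptions, whereas the paper's routing module is a learned network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def route_evidence(pref, sources):
    """Preference-conditioned evidence routing (hypothetical sketch):
    score each multi-source behavioral signal by its dot-product affinity
    with the preference vector, gate the scores with a softmax, and
    aggregate the sources into one relevance representation.
    """
    affinities = np.array([pref @ s for s in sources])
    gates = softmax(affinities)
    aggregated = sum(g * s for g, s in zip(gates, sources))
    return aggregated, gates
```

Sources aligned with the user's preference receive larger gate weights, so the aggregated representation is context-aware rather than a fixed average of all behavioral signals.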
