Rethinking Semantic Collaborative Integration: Why Alignment Is Not Enough
Maolin Wang, Dongze Wu, Jianing Zhou, Hongyu Chen, Beining Bao, et al.
TLDR
This paper argues that aligning semantic and collaborative representations in recommenders is insufficient, proposing a complementarity-aware fusion approach instead.
Key contributions
- Critiques the prevailing "global low-complexity alignment hypothesis" for LLM-enhanced recommenders.
- Proposes a "shared-plus-private" latent structure that treats the two views as partially shared yet heterogeneous, each carrying view-specific factors.
- Develops diagnostics to quantify view overlap, unique contributions, and theoretical fusion gains.
- Empirical analysis reveals low item-level agreement and strong complementarity between the views, challenging alignment-centric integration.
Why it matters
This paper redefines how LLM-derived semantics should integrate with collaborative filtering. By highlighting the limitations of simple alignment and advocating for complementarity, it provides a principled foundation for building more effective and robust next-generation recommender systems that leverage both shared and unique signals.
Original Abstract
Large language models (LLMs) have become an important semantic infrastructure for modern recommender systems. A prevailing paradigm integrates LLM-derived semantic embeddings with collaborative representations via representation alignment, implicitly assuming that the two views encode a shared latent entity and that stronger alignment yields better results. We formalize this assumption as the global low-complexity alignment hypothesis and argue that it is stronger than necessary and often structurally mismatched with real-world recommendation settings. We propose a complementary perspective in which semantic and collaborative representations are treated as partially shared yet fundamentally heterogeneous views, each containing both shared and view-specific factors. Under this shared-plus-private latent structure, enforcing global geometric alignment may distort local structure, suppress view-specific signals, and reduce informational diversity. To support this perspective, we develop complementarity-aware diagnostics that quantify overlap, unique-hit contribution, and theoretical fusion upper bounds. Empirical analyses on sparse recommendation benchmarks reveal low item-level agreement between semantic and collaborative views and substantial oracle fusion gains, indicating strong complementarity. Furthermore, controlled alignment probes show that low-capacity mappings capture only shared components and fail to recover full collaborative geometry, especially under distribution shift. These findings suggest that alignment should not be treated as the default integration principle. We advocate a shift from alignment-centric modeling to fusion-centric, complementarity-aware design, where shared factors are selectively integrated while private signals are preserved. This reframing provides a principled foundation for the next generation of LLM-enhanced recommender systems.
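To make the diagnostics concrete, here is a minimal sketch of how item-level agreement, per-view unique-hit contribution, and an oracle fusion upper bound could be computed from top-k recommendation lists. All function and variable names are illustrative assumptions, not the paper's released code; the oracle here simply counts a user as hit if either view retrieves a relevant item.

```python
# Hypothetical complementarity diagnostics over top-k lists (illustrative only).

def jaccard(a, b):
    """Item-level agreement between two top-k recommendation lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def diagnostics(sem_topk, collab_topk, ground_truth):
    """
    sem_topk, collab_topk: dict user -> top-k item ids from each view
    ground_truth: dict user -> set of held-out relevant item ids
    Returns mean list overlap, per-view hit and unique-hit rates, and the
    oracle fusion hit rate (an upper bound: hit if either view succeeds).
    """
    users = list(ground_truth)
    overlap = sum(jaccard(sem_topk[u], collab_topk[u]) for u in users) / len(users)

    sem_hits = collab_hits = sem_unique = collab_unique = oracle_hits = 0
    for u in users:
        s, c, gt = set(sem_topk[u]), set(collab_topk[u]), ground_truth[u]
        sh, ch = s & gt, c & gt
        sem_hits += bool(sh)
        collab_hits += bool(ch)
        sem_unique += bool(sh - c)     # relevant items only the semantic view finds
        collab_unique += bool(ch - s)  # relevant items only the collaborative view finds
        oracle_hits += bool(sh | ch)   # oracle fusion: either view suffices
    n = len(users)
    return {
        "mean_overlap": overlap,
        "semantic_hit_rate": sem_hits / n,
        "collab_hit_rate": collab_hits / n,
        "semantic_unique_hit_rate": sem_unique / n,
        "collab_unique_hit_rate": collab_unique / n,
        "oracle_fusion_hit_rate": oracle_hits / n,
    }
```

Under this reading, low `mean_overlap` together with a large gap between each single-view hit rate and `oracle_fusion_hit_rate` is the signature of complementarity the paper reports.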