ArXiv TLDR

Modular Representation Compression: Adapting LLMs for Efficient and Effective Recommendations

arXiv: 2604.18146

Yunjia Xi, Menghui Zhu, Jianghao Lin, Bo Chen, Ruiming Tang + 2 more

cs.IR · cs.AI · cs.CL

TLDR

MARC compresses LLM representations for recommendation systems by exploiting the Mid-layer Representation Advantage, improving both efficiency and effectiveness.

Key contributions

  • Identified Mid-layer Representation Advantage (MRA) in LLMs, where mid-layers outperform final layers for recommendations.
  • Proposed Modular Representation Compression (MARC) to address MRA by explicitly controlling the LLM's internal modularity.
  • MARC employs Modular Adjustment and Modular Task Decoupling to create efficient, task-specific representation modules.
  • Achieved a 2.82% eCPM lift in an online A/B test for a large-scale commercial search advertising scenario.
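The core idea behind MRA can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the "LLM" here is a stack of random layers standing in for a real transformer, and the projection matrix stands in for MARC's learned compression module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a 12-layer LLM: each "layer" is a random linear map
# followed by a nonlinearity, yielding one hidden state per layer.
HIDDEN, N_LAYERS = 64, 12
layers = [rng.normal(0, HIDDEN ** -0.5, (HIDDEN, HIDDEN)) for _ in range(N_LAYERS)]

def hidden_states(x):
    """Return the hidden state after every layer (as with output_hidden_states=True)."""
    states = []
    for w in layers:
        x = np.tanh(x @ w)
        states.append(x)
    return states

# Exploit the Mid-layer Representation Advantage: take a middle layer's
# representation instead of the final one, then compress it to a small
# dimension for cheap offline caching. The random projection here is
# hypothetical; MARC learns its compression module jointly.
MID_LAYER, COMPRESSED_DIM = N_LAYERS // 2, 8
proj = rng.normal(0, HIDDEN ** -0.5, (HIDDEN, COMPRESSED_DIM))

item_embedding = rng.normal(size=HIDDEN)
mid_repr = hidden_states(item_embedding)[MID_LAYER - 1]
compressed = mid_repr @ proj  # 8-dim vector cached for the recommender
```

The point of the sketch: caching `compressed` (8 floats) instead of the full final-layer state (64 floats here, thousands in a real LLM) is what makes offline pre-caching affordable, and MRA says the mid-layer source loses less recommendation signal than the final layer would.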

Why it matters

LLMs enhance recommendation systems, but their high-dimensional representations incur substantial storage and computational costs. This paper identifies the Mid-layer Representation Advantage and proposes MARC to compress LLM representations efficiently. MARC makes LLM-based recommenders practical and effective, as validated by significant online performance gains.

Original Abstract

Recently, large language models (LLMs) have advanced recommendation systems (RSs), and recent works have begun to explore how to integrate LLMs into industrial RSs. While most approaches deploy LLMs offline to generate and pre-cache augmented representations for RSs, high-dimensional representations from LLMs introduce substantial storage and computational costs. Thus, it is crucial to compress LLM representations effectively. However, we identify a counterintuitive phenomenon during representation compression: Mid-layer Representation Advantage (MRA), where representations from middle layers of LLMs outperform those from final layers in recommendation tasks. This degraded final layer renders existing compression methods, which typically compress on the final layer, suboptimal. We interpret this based on modularity theory that LLMs develop spontaneous internal functional modularity and force the final layer to specialize in the proxy training task. Thus, we propose Modular Representation Compression (MARC) to explicitly control the modularity of LLMs. First, Modular Adjustment explicitly introduces compression and task adaptation modules, enabling the LLM to operate strictly as a representation-learning module. Next, to ground each module to its specific task, Modular Task Decoupling uses information constraints and different network structures to decouple tasks. Extensive experiments validate that MARC addresses MRA and produces efficient representations. Notably, MARC achieved a 2.82% eCPM lift in an online A/B test within a large-scale commercial search advertising scenario.
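The modular architecture the abstract describes (a frozen LLM as representation learner, plus explicit compression and task adaptation modules) can be sketched structurally as follows. This is a hypothetical illustration under toy dimensions; the module names mirror the paper's terminology, but the linear maps and frozen random "LLM" are stand-ins, not the actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
D_LLM, D_COMP, D_TASK = 64, 8, 1  # toy dimensions, not the paper's

# Stand-in for the frozen LLM: fixed weights, never updated, so the LLM
# operates strictly as a representation-learning module.
W_LLM = rng.normal(0, D_LLM ** -0.5, (D_LLM, D_LLM))

def frozen_llm_repr(x):
    return np.tanh(x @ W_LLM)

class CompressionModule:
    """Maps the LLM representation to the low-dim vector that gets cached."""
    def __init__(self):
        self.w = rng.normal(0, D_LLM ** -0.5, (D_LLM, D_COMP))
    def __call__(self, h):
        return h @ self.w

class TaskAdaptationModule:
    """Absorbs proxy-task specialization so the compressed vector stays generic."""
    def __init__(self):
        self.w = rng.normal(0, D_COMP ** -0.5, (D_COMP, D_TASK))
    def __call__(self, z):
        return z @ self.w

compress, adapt = CompressionModule(), TaskAdaptationModule()
h = frozen_llm_repr(rng.normal(size=D_LLM))
z = compress(h)    # low-dim representation stored offline for the recommender
score = adapt(z)   # task head output, used only while training the modules
```

The design choice the sketch highlights: because the task adaptation module sits after compression, proxy-task specialization is pushed into a throwaway head instead of degrading the cached representation, which is how MARC avoids the final-layer degradation behind MRA.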
