ArXiv TLDR

AFMRL: Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning in E-commerce

2604.20135

Biao Zhang, Lixin Chen, Bin Zhang, Zongwei Wang, Tong Liu + 1 more

cs.CL · cs.IR

TLDR

AFMRL uses MLLMs to extract attributes for fine-grained multimodal representation learning, achieving SOTA in e-commerce retrieval.

Key contributions

  • Proposes AFMRL for fine-grained multimodal representation learning in e-commerce.
  • Leverages MLLMs to extract key attributes from product images and text.
  • Introduces Attribute-Guided Contrastive Learning (AGCL) for robust training.
  • Employs Retrieval-aware Attribute Reinforcement (RAR) to refine attribute generation.

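The AGCL step in the list above can be sketched in code. This is a hypothetical simplification, not the paper's implementation: the function name `agcl_loss`, the Jaccard overlap measure, and the `fn_threshold` cutoff are all assumptions. The idea it illustrates is the one the paper states: MLLM-generated attributes flag off-diagonal pairs that are likely the same product (false negatives), and those pairs are masked out of the InfoNCE denominator.

```python
import numpy as np

def agcl_loss(sim, attrs_a, attrs_b, tau=0.07, fn_threshold=0.9):
    """Toy sketch of Attribute-Guided Contrastive Learning.

    sim      : (n, n) image-text similarity matrix, diagonal = true pairs
    attrs_a  : list of n attribute sets for the image side
    attrs_b  : list of n attribute sets for the text side
    Off-diagonal pairs whose attribute overlap exceeds fn_threshold are
    treated as false negatives and removed from the contrastive denominator.
    (All thresholds and the overlap measure are illustrative assumptions.)
    """
    n = sim.shape[0]
    # Jaccard overlap between the attribute sets of every cross-modal pair
    overlap = np.array([[len(a & b) / max(len(a | b), 1)
                         for b in attrs_b] for a in attrs_a])
    # Keep the diagonal (true positives); drop high-overlap off-diagonal
    # pairs, which are likely the same product listed twice
    mask = (overlap < fn_threshold) | np.eye(n, dtype=bool)
    logits = np.where(mask, sim / tau, -np.inf)  # exp(-inf) = 0 in the sum
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With masking enabled, a near-duplicate item with identical attributes no longer contributes a large term to the denominator, so the model is not penalized for scoring it highly.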
Why it matters

AFMRL tackles a core e-commerce challenge: distinguishing highly similar products. By using MLLMs to generate fine-grained attributes that guide multimodal representation learning, it achieves state-of-the-art retrieval performance, directly benefiting product search on online platforms.

Original Abstract

Multimodal representation is crucial for E-commerce tasks such as identical product retrieval. Large representation models (e.g., VLM2Vec) demonstrate strong multimodal understanding capabilities, yet they struggle with fine-grained semantic comprehension, which is essential for distinguishing highly similar items. To address this, we propose Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning (AFMRL), which defines product fine-grained understanding as an attribute generation task. It leverages the generative power of Multimodal Large Language Models (MLLMs) to extract key attributes from product images and text, and enhances representation learning through a two-stage training framework: 1) Attribute-Guided Contrastive Learning (AGCL), where the key attributes generated by the MLLM are used in the image-text contrastive learning training process to identify hard samples and filter out noisy false negatives. 2) Retrieval-aware Attribute Reinforcement (RAR), where the improved retrieval performance of the representation model post-attribute integration serves as a reward signal to enhance MLLM's attribute generation during multimodal fine-tuning. Extensive experiments on large-scale E-commerce datasets demonstrate that our method achieves state-of-the-art performance on multiple downstream retrieval tasks, validating the effectiveness of harnessing generative models to advance fine-grained representation learning.
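The RAR stage in the abstract uses the representation model's retrieval gain after attribute integration as a reward for the MLLM's attribute generation. A minimal sketch of that reward signal, under stated assumptions: the function names `recall_at_k` and `rar_reward` are hypothetical, and a real system would feed this scalar into an RL fine-tuning loop rather than use it directly.

```python
import numpy as np

def recall_at_k(sim, k=1):
    """Fraction of queries whose true match (the diagonal) is in the top-k."""
    topk = np.argsort(-sim, axis=1)[:, :k]
    return float(np.mean([i in topk[i] for i in range(sim.shape[0])]))

def rar_reward(sim_base, sim_with_attrs, k=1):
    """Hypothetical RAR-style reward: the recall gain after folding the
    MLLM's generated attributes into the representation. A positive value
    would reinforce the generation policy during multimodal fine-tuning."""
    return recall_at_k(sim_with_attrs, k) - recall_at_k(sim_base, k)
```

For example, if adding attributes promotes a query's true match from rank 2 to rank 1, recall@1 rises and the reward is positive; attributes that do not help retrieval yield zero or negative reward.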
