ArXiv TLDR

Simultaneous Long-tailed Recognition and Multi-modal Fusion for Highly Imbalanced Multi-modal Data

arXiv: 2605.10498

Heegeon Yoon, Heeyoung Kim

cs.CV · cs.AI · stat.ML

TLDR

This paper introduces a confidence-guided multi-modal fusion framework for long-tailed recognition on class-imbalanced data, outperforming existing single-modal methods.

Key contributions

  • Proposes a multi-modal fusion framework for long-tailed recognition.
  • Extends multi-expert architectures to dynamically fuse heterogeneous data.
  • Uses confidence-guided weights to prioritize more informative modalities.
  • Designs specialized training and test procedures for diverse modality combinations.

Why it matters

Deep learning models struggle with long-tailed, class-imbalanced data, especially when multi-modal inputs are involved. By effectively integrating diverse data sources, this work offers a robust solution that improves recognition performance in challenging real-world scenarios.

Original Abstract

Long-tailed distributions in class-imbalanced data present a fundamental challenge for deep learning models, which tend to be biased toward majority classes. While recent methods for long-tailed recognition have mitigated this issue, they are largely restricted to single-modal inputs and cannot fully exploit complementary information from diverse data sources. In this work, we introduce a new framework for long-tailed recognition that explicitly handles multi-modal inputs. Our approach extends multi-expert architectures to the multi-modal setting by fusing heterogeneous data into a unified representation while leveraging modality-specific networks to estimate the informativeness of each modality. These confidence-guided weights dynamically modulate the fusion process, ensuring that more informative modalities contribute more strongly to the final decision. To further enhance performance, we design specialized training and test procedures that accommodate diverse modality combinations, including images and tabular data. Extensive experiments on benchmark and real-world datasets demonstrate that the proposed approach not only effectively integrates multi-modal information but also outperforms existing methods in handling long-tailed, class-imbalanced scenarios, highlighting its robustness and generalization capability.
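The core idea of confidence-guided fusion described in the abstract can be sketched in a few lines: each modality-specific network emits a feature vector and a scalar confidence (informativeness) score, and the fused representation is a softmax-weighted sum so that more informative modalities contribute more. The function names, feature values, and confidence scores below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def confidence_guided_fusion(features, confidences):
    """Fuse per-modality feature vectors with softmax-normalized
    confidence weights, so more informative modalities dominate.

    features:    list of equal-length feature vectors, one per modality
    confidences: list of scalar informativeness scores, one per modality
    """
    weights = softmax(np.asarray(confidences, dtype=float))
    stacked = np.stack([np.asarray(f, dtype=float) for f in features])
    return weights @ stacked  # weighted sum over the modality axis

# Hypothetical example: an image modality judged more informative
# than a tabular modality for this sample.
image_feat = [0.9, 0.1, 0.0]
tabular_feat = [0.2, 0.5, 0.3]
fused = confidence_guided_fusion([image_feat, tabular_feat],
                                 confidences=[2.0, 0.5])
```

With these toy scores, the image modality receives roughly 82% of the weight, so the fused vector lies much closer to `image_feat` than to `tabular_feat`.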
