Feature-Label Modal Alignment for Robust Partial Multi-Label Learning
Yu Chen, Weijun Lv, Yue Huang, Xiaozhao Fang, Jie Wen, et al.
TLDR
PML-MA aligns features and labels as modalities, using pseudo-labels and prototype learning to robustly handle noisy data in partial multi-label learning.
Key contributions
- Introduces PML-MA, a robust partial multi-label learning method using feature-label alignment.
- Generates pseudo-labels via low-rank decomposition to filter noisy candidate labels.
- Aligns features and pseudo-labels in a common subspace, preserving local neighborhood structures.
- Employs multi-peak class prototype learning for discriminability using soft pseudo-label weights.
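The pseudo-label generation step can be illustrated with a minimal sketch: approximate the 0/1 candidate-label matrix by a low-rank reconstruction, so that candidate entries poorly supported by the dominant label structure receive low scores. Note this uses a plain truncated SVD for illustration; the paper's actual low-rank *orthogonal* decomposition is not specified in this summary.

```python
import numpy as np

def low_rank_pseudo_labels(Y, rank=2):
    """Illustrative pseudo-label generation via truncated SVD.

    Y: (n, q) binary candidate-label matrix.
    Returns soft pseudo-labels in [0, 1], nonzero only on
    candidate labels; low scores flag likely noisy candidates.
    (Sketch only -- not the paper's exact decomposition.)
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Reconstruct Y from its top `rank` singular components.
    Y_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    # Clamp to [0, 1] and mask out non-candidate labels.
    return np.clip(Y_hat, 0.0, 1.0) * Y

# Toy example: 4 instances, 3 candidate labels.
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)
P = low_rank_pseudo_labels(Y, rank=2)
```

Thresholding or renormalizing `P` would then yield the filtered pseudo-labels that the alignment stage consumes.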
Why it matters
Noisy candidate labels severely degrade partial multi-label learning by breaking the correspondence between features and labels. By treating features and labels as two complementary modalities and aligning them, PML-MA generates clean pseudo-labels and substantially improves both classification accuracy and robustness to label noise.
Original Abstract
In partial multi-label learning (PML), each instance is associated with a set of candidate labels containing both ground-truth and noisy labels. The presence of noisy labels disrupts the correspondence between features and labels, degrading classification performance. To address this challenge, we propose a novel PML method based on feature-label modal alignment (PML-MA), which treats features and labels as two complementary modalities and restores their consistency through systematic alignment. Specifically, PML-MA first employs low-rank orthogonal decomposition to generate pseudo-labels that approximate the true label distribution by filtering noisy labels. It then aligns features and pseudo-labels through both global projection into a common subspace and local preservation of neighborhood structures. Finally, a multi-peak class prototype learning mechanism leverages the multi-label nature where instances simultaneously belong to multiple categories, using pseudo-labels as soft membership weights to enhance discriminability. By integrating modal alignment with prototype-guided refinement, PML-MA ensures pseudo-labels better reflect the true distribution while maintaining robustness against label noise. Extensive experiments on both real-world and synthetic datasets demonstrate that PML-MA significantly outperforms state-of-the-art methods, achieving superior classification accuracy and noise robustness.
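To make the prototype step concrete, here is a minimal single-peak sketch of how soft pseudo-labels act as membership weights: each class prototype is the pseudo-label-weighted mean of the features. The paper's multi-peak mechanism learns *several* prototypes per class; this simplified version (with hypothetical names `X`, `P`) only shows how soft weights enter the average.

```python
import numpy as np

def class_prototypes(X, P):
    """One prototype per class as the pseudo-label-weighted
    feature mean (single-peak simplification of the paper's
    multi-peak prototype learning).

    X: (n, d) instance features.
    P: (n, q) soft pseudo-labels in [0, 1].
    Returns: (q, d) prototype matrix.
    """
    # Normalize weights per class so each prototype is a convex
    # combination of the instances belonging (softly) to it.
    W = P / (P.sum(axis=0, keepdims=True) + 1e-12)
    return W.T @ X

# Toy example: 3 instances (2-D features), 2 classes.
X = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.9, 0.1]])
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.2, 0.8]])
protos = class_prototypes(X, P)
```

Because an instance can carry weight for several classes at once, it pulls multiple prototypes toward itself, which is how the multi-label nature of the data is exploited.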