ArXiv TLDR

FB-NLL: A Feature-Based Approach to Tackle Noisy Labels in Personalized Federated Learning

arXiv: 2604.19729

Abdulmoneam Ali, Ahmed Arafa

cs.LG cs.IT eess.SP

TLDR

FB-NLL is a feature-based personalized federated learning framework that robustly clusters users and corrects noisy labels by leveraging feature space geometry.

Key contributions

  • Decouples user clustering from iterative training dynamics by exploiting the spectral structure of local feature spaces.
  • Performs one-shot, label-agnostic user grouping based on subspace similarity, reducing communication and computation overhead (sketched below).
  • Mitigates noisy labels within clusters via feature-consistency checks and class-specific feature subspaces.
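
To make the grouping step concrete, here is a minimal, hedged sketch of subspace-similarity clustering in NumPy. It assumes each user is summarized by the top eigenvectors of its local feature covariance; the function names, the greedy grouping rule, and the k and threshold knobs are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def user_subspace(features: np.ndarray, k: int = 8) -> np.ndarray:
    """Top-k eigenvectors of a user's feature covariance (d x k, orthonormal)."""
    cov = np.cov(features, rowvar=False)      # features: (n_samples, d)
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return eigvecs[:, -k:]                    # principal subspace basis

def subspace_similarity(U: np.ndarray, V: np.ndarray) -> float:
    """Mean cosine of the principal angles between two subspaces, in [0, 1]."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return float(np.mean(np.clip(s, 0.0, 1.0)))

def one_shot_grouping(user_features: list[np.ndarray],
                      k: int = 8, threshold: float = 0.9) -> list[int]:
    """Greedy one-shot grouping: each user joins the first cluster whose
    representative subspace it aligns with above `threshold`."""
    bases = [user_subspace(f, k) for f in user_features]
    reps, assignments = [], []
    for U in bases:
        for c, R in enumerate(reps):
            if subspace_similarity(U, R) >= threshold:
                assignments.append(c)
                break
        else:                                 # no cluster matched: open a new one
            reps.append(U)
            assignments.append(len(reps) - 1)
    return assignments
```

Because the grouping needs only one covariance summary per user, it can run once before federated training starts, which is where the claimed savings over trajectory-based iterative clustering would come from.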

Why it matters

Personalized Federated Learning (PFL) is crucial for learning tailored models on decentralized data, but noisy labels hinder its effectiveness. FB-NLL offers a robust, efficient solution by leveraging feature geometry for clustering and noise correction. This improves personalization accuracy and stability, making PFL more practical for real-world applications.

Original Abstract

Personalized Federated Learning (PFL) aims to learn multiple task-specific models rather than a single global model across heterogeneous data distributions. Existing PFL approaches typically rely on iterative optimization, such as model update trajectories, to cluster users that need to accomplish the same tasks together. However, these learning-dynamics-based methods are inherently vulnerable to low-quality data and noisy labels, as corrupted updates distort clustering decisions and degrade personalization performance. To tackle this, we propose FB-NLL, a feature-centric framework that decouples user clustering from iterative training dynamics. By exploiting the intrinsic heterogeneity of local feature spaces, FB-NLL characterizes each user through the spectral structure of the covariances of their feature representations and leverages subspace similarity to identify task-consistent user groupings. This geometry-aware clustering is label-agnostic and is performed in a one-shot manner prior to training, significantly reducing communication overhead and computational costs compared to iterative baselines. Complementing this, we introduce a feature-consistency-based detection and correction strategy to address noisy labels within clusters. By leveraging directional alignment in the learned feature space and assigning labels based on class-specific feature subspaces, our method mitigates corrupted supervision without requiring estimation of stochastic noise transition matrices. In addition, FB-NLL is model-independent and integrates seamlessly with existing noise-robust training techniques. Extensive experiments across diverse datasets and noise regimes demonstrate that our framework consistently outperforms state-of-the-art baselines in terms of average accuracy and performance stability.
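
As a complement, here is a hedged sketch of the feature-consistency correction idea described in the abstract: build a low-dimensional subspace per class from the features of samples currently carrying that label, then relabel a sample only when another class's subspace aligns with its feature vector noticeably better than its current one. The names `class_subspaces`, `correct_labels`, and the `margin` parameter are assumptions for illustration; the paper's exact detection and assignment rules may differ.

```python
import numpy as np

def class_subspaces(features: np.ndarray, labels: np.ndarray,
                    num_classes: int, k: int = 4) -> list[np.ndarray]:
    """Top-k principal directions of each class's (centered) feature cloud.
    Assumes every class has more than k samples."""
    bases = []
    for c in range(num_classes):
        X = features[labels == c]
        X = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases.append(Vt[:k].T)                # (d, k), orthonormal columns
    return bases

def correct_labels(features: np.ndarray, labels: np.ndarray,
                   bases: list[np.ndarray], margin: float = 0.1) -> np.ndarray:
    """Relabel a sample when another class subspace fits its (unit-normalized)
    feature vector better than the current label's subspace by `margin`."""
    corrected = labels.copy()
    for i, x in enumerate(features):
        x = x / (np.linalg.norm(x) + 1e-12)
        # alignment score = length of the projection onto each class subspace
        scores = np.array([np.linalg.norm(B.T @ x) for B in bases])
        best = int(np.argmax(scores))
        if best != labels[i] and scores[best] - scores[labels[i]] > margin:
            corrected[i] = best
    return corrected
```

The directional (cosine-style) scores sidestep estimating a noise transition matrix: a sample is judged by where its feature points, not by a model of how its label may have been flipped.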
