Data-Free Contribution Estimation in Federated Learning using Gradient von Neumann Entropy
Asim Ukaye, Mubarak Abdu-Aguye, Nurbek Tastan, Karthik Nandakumar
TLDR
This paper introduces a data-free method for estimating client contributions in Federated Learning using gradient von Neumann entropy, improving fairness and privacy.
Key contributions
- Introduces a data-free signal for client contribution using gradient von Neumann entropy of final-layer updates.
- Presents SpectralFed, which uses normalized entropy to weight client contributions during aggregation.
- Develops SpectralFuse, combining entropy with class-specific alignment via a rank-adaptive Kalman filter.
- Demonstrates consistently high correlation between entropy scores and standalone client accuracy on CIFAR-10/100, FEMNIST, and FedISIC under diverse non-IID regimes.
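The core signal above is the matrix von Neumann (spectral) entropy of a client's final-layer update: the squared singular values of the update matrix, normalized to a probability distribution, give the eigenvalue spectrum of its trace-normalized Gram matrix, and their Shannon entropy measures how many directions the update meaningfully spans. The paper does not publish code here, so the following is a minimal sketch of that computation (the `eps` threshold and natural-log base are illustrative assumptions):

```python
import numpy as np

def spectral_entropy(update, eps=1e-12):
    """Von Neumann (spectral) entropy of a weight-update matrix.

    Squared singular values, normalized to sum to one, are the
    eigenvalues of the trace-normalized Gram matrix; their Shannon
    entropy is the von Neumann entropy. High entropy = the update
    spreads information across many directions (diverse); low
    entropy = the update is concentrated in a few directions.
    """
    s = np.linalg.svd(update, compute_uv=False)
    p = s**2 / max(np.sum(s**2), eps)  # normalized eigenvalue spectrum
    p = p[p > eps]                     # drop numerical zeros before log
    return float(-np.sum(p * np.log(p)))

# A near-isotropic (diverse) update scores higher than a rank-1 update.
rng = np.random.default_rng(0)
diverse = rng.standard_normal((10, 64))
lowrank = np.outer(rng.standard_normal(10), rng.standard_normal(64))
print(spectral_entropy(diverse) > spectral_entropy(lowrank))  # True
```

A rank-1 update has a single nonzero singular value, so its entropy is zero, which matches the intuition that such an update carries information about only one direction in parameter space.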
Why it matters
Existing FL contribution-estimation methods rely on server-side validation data or self-reported client information, which compromises privacy or invites manipulation. This paper offers a data-free alternative based on spectral entropy, enabling fair rewards without exposing sensitive data and improving the robustness and trustworthiness of federated learning systems.
Original Abstract
Client contribution estimation in Federated Learning is necessary for identifying clients' importance and for providing fair rewards. Current methods often rely on server-side validation data or self-reported client information, which can compromise privacy or be susceptible to manipulation. We introduce a data-free signal based on the matrix von Neumann (spectral) entropy of the final-layer updates, which measures the diversity of the information contributed. We instantiate two practical schemes: (i) SpectralFed, which uses normalized entropy as aggregation weights, and (ii) SpectralFuse, which fuses entropy with class-specific alignment via a rank-adaptive Kalman filter for per-round stability. Across CIFAR-10/100 and the naturally partitioned FEMNIST and FedISIC benchmarks, entropy-derived scores show a consistently high correlation with standalone client accuracy under diverse non-IID regimes - without validation data or client metadata. We compare our results with data-free contribution estimation baselines and show that spectral entropy serves as a useful indicator of client contribution.
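The abstract describes SpectralFed as using normalized entropy as aggregation weights. A minimal sketch of that scheme, under the assumption that "normalized" means entropies rescaled to sum to one (the paper's exact normalization, and whether full models or only final layers are averaged, may differ):

```python
import numpy as np

def spectral_entropy(update, eps=1e-12):
    """Von Neumann entropy via the normalized squared singular values."""
    s = np.linalg.svd(update, compute_uv=False)
    p = s**2 / max(np.sum(s**2), eps)
    p = p[p > eps]
    return float(-np.sum(p * np.log(p)))

def spectralfed_aggregate(updates):
    """Entropy-weighted averaging of client updates (illustrative).

    Each client's weight is its spectral entropy divided by the sum
    of all clients' entropies, so diverse updates count for more.
    """
    ents = np.array([spectral_entropy(u) for u in updates])
    weights = ents / ents.sum()
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return aggregated, weights

# Four hypothetical clients' final-layer updates.
rng = np.random.default_rng(1)
updates = [rng.standard_normal((10, 64)) for _ in range(4)]
agg, w = spectralfed_aggregate(updates)
print(w.sum())  # ≈ 1.0
```

SpectralFuse additionally fuses this entropy signal with class-specific alignment through a rank-adaptive Kalman filter for per-round stability; that fusion step is not sketched here.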