ArXiv TLDR

Quantization Impact on the Accuracy and Communication Efficiency Trade-off in Federated Learning for Aerospace Predictive Maintenance

arXiv: 2604.08474

Abdelkarim Loukili

cs.LG

TLDR

This paper shows that INT4 quantization in federated learning for aerospace predictive maintenance achieves an 8× reduction in communication cost with no statistically significant accuracy loss.

Key contributions

  • INT4 matches FP32 accuracy (statistically indistinguishable on FD001 and FD002) while cutting per-round gradient communication 8× (37.88 KiB → 4.73 KiB).
  • A custom 1D CNN (AeroConv1D) was trained via FL on NASA C-MAPSS under realistic Non-IID client partitions.
  • Non-IID evaluation proves critical: naïve IID partitioning artificially suppresses variance, masking the instability of extreme quantization; INT2 is unsuitable due to catastrophic NASA-score variance.
  • Analytical FPGA resource projections on the Xilinx ZCU102 confirm that INT4 fits within hardware constraints (85.5% DSP utilization), enabling a complete FL pipeline on a single System-on-Chip.
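The 8× figure follows directly from the bit-width ratio. As a quick sanity check against the per-round payload sizes quoted in the abstract (the 9,697-parameter count is from the paper; the helper function below is our own illustration):

```python
def payload_kib(n_params: int, bits: int) -> float:
    """Per-round gradient payload in KiB for an n-parameter model at the given bit width."""
    return n_params * bits / 8 / 1024

N_PARAMS = 9_697  # AeroConv1D parameter count (from the paper)

fp32 = payload_kib(N_PARAMS, 32)  # ≈ 37.88 KiB
int4 = payload_kib(N_PARAMS, 4)   # ≈ 4.73 KiB
print(f"FP32: {fp32:.2f} KiB, INT4: {int4:.2f} KiB, reduction: {fp32 / int4:.0f}x")
```

The numbers reproduce the abstract's 37.88 KiB → 4.73 KiB per round, and the reduction is exactly 32/4 = 8×.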

Why it matters

This paper addresses the critical challenge of communication overhead in federated learning for aerospace predictive maintenance on bandwidth-limited IoT devices. By demonstrating that INT4 quantization maintains accuracy while drastically reducing communication, it enables practical, privacy-preserving FL deployments in real-world aerospace fleets, leading to more efficient and reliable maintenance.

Original Abstract

Federated learning (FL) enables privacy-preserving predictive maintenance across distributed aerospace fleets, but gradient communication overhead constrains deployment on bandwidth-limited IoT nodes. This paper investigates the impact of symmetric uniform quantization (b ∈ {32, 8, 4, 2} bits) on the accuracy–efficiency trade-off of a custom-designed lightweight 1-D convolutional model (AeroConv1D, 9,697 parameters) trained via FL on the NASA C-MAPSS benchmark under a realistic Non-IID client partition. Using a rigorous multi-seed evaluation (N = 10 seeds), we show that INT4 achieves accuracy statistically indistinguishable from FP32 on both FD001 (p = 0.341) and FD002 (p = 0.264 MAE, p = 0.534 NASA score) while delivering an 8× reduction in gradient communication cost (37.88 KiB → 4.73 KiB per round). A key methodological finding is that naïve IID client partitioning artificially suppresses variance; correct Non-IID evaluation reveals the true operational instability of extreme quantization, demonstrated via a direct empirical IID vs. Non-IID comparison. INT2 is empirically characterized as unsuitable: while it achieves lower MAE on FD002 through extreme quantization-induced over-regularization, this apparent gain is accompanied by catastrophic NASA score instability (CV = 45.8% vs. 22.3% for FP32), confirming non-reproducibility under heterogeneous operating conditions. Analytical FPGA resource projections on the Xilinx ZCU102 confirm that INT4 fits within hardware constraints (85.5% DSP utilization), potentially enabling a complete FL pipeline on a single SoC. The full simulation codebase and FPGA estimation scripts are publicly available at https://github.com/therealdeadbeef/aerospace-fl-quantization.
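The symmetric uniform quantization the abstract refers to can be sketched as follows. This is a minimal per-tensor illustration under our own assumptions (max-abs scale, round-to-nearest), not the paper's exact implementation:

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int):
    """Symmetric uniform quantization: map x to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for INT4
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # per-tensor max-abs scale (our assumption)
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Server-side reconstruction of the quantized tensor."""
    return q.astype(np.float32) * scale

g = np.array([0.12, -0.70, 0.03, 0.56], dtype=np.float32)  # a toy gradient
q, s = quantize_symmetric(g, bits=4)  # each entry now fits in 4 bits
g_hat = dequantize(q, s)              # approximate gradient after communication
```

In an FL round, clients would send `q` (packed at b bits per entry) plus the scalar `s` instead of the FP32 tensor, which is where the communication savings come from.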
