ArXiv TLDR

Lifecycle-Aware Federated Continual Learning in Mobile Autonomous Systems

2604.20745

Beining Wu, Jun Huang

cs.LG cs.CV

TLDR

A lifecycle-aware FCL framework for mobile autonomous systems uses dual-timescale strategies to prevent immediate forgetting and recover from long-term drift.

Key contributions

  • Proposes a dual-timescale FCL framework addressing both immediate forgetting and long-term cumulative drift.
  • Designs a layer-selective rehearsal strategy to mitigate immediate forgetting during local training.
  • Introduces a rapid knowledge recovery strategy to restore models after long-term degradation.
  • Achieves up to 8.3% mIoU improvement over the strongest federated baseline and is validated on a real-world rover testbed.

Why it matters

Current FCL methods struggle with layer-specific forgetting and long-term cumulative drift in mobile autonomous fleets. This paper introduces a robust dual-timescale framework that significantly improves model adaptation and reliability, and validates it on a real rover.

Original Abstract

Federated continual learning (FCL) allows distributed autonomous fleets to adapt collaboratively to evolving terrain types across extended mission lifecycles. However, current approaches face several key challenges: 1) they use uniform protection strategies that do not account for the varying sensitivities to forgetting on different network layers; 2) they focus primarily on preventing forgetting during training, without addressing the long-term effects of cumulative drift; and 3) they often depend on idealized simulations that fail to capture the real-world heterogeneity present in distributed fleets. In this paper, we propose a lifecycle-aware dual-timescale FCL framework that incorporates training-time (pre-forgetting) prevention and (post-forgetting) recovery. Under this framework, we design a layer-selective rehearsal strategy that mitigates immediate forgetting during local training, and a rapid knowledge recovery strategy that restores degraded models after long-term cumulative drift. We present a theoretical analysis that characterizes heterogeneous forgetting dynamics and establishes the inevitability of long-term degradation. Our experimental results show that this framework achieves up to 8.3% mIoU improvement over the strongest federated baseline and up to 31.7% over conventional fine-tuning. We also deploy the FCL framework on a real-world rover testbed to assess system-level robustness under realistic constraints; the testing results further confirm the effectiveness of our FCL design.
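To make the dual-timescale idea concrete, here is a minimal sketch of the two control loops the abstract describes: layer-selective rehearsal on the fast (per-round) timescale, and drift-triggered recovery on the slow (lifecycle) timescale. All names, sensitivity scores, and thresholds below are illustrative assumptions, not the paper's actual algorithm.

```python
def select_rehearsal_layers(sensitivity, k):
    """Fast timescale: pick the k layers most sensitive to forgetting.
    Only these layers would replay old-task data during local training."""
    ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
    return set(ranked[:k])


class DriftMonitor:
    """Slow timescale: accumulate per-round drift and trigger rapid
    knowledge recovery once cumulative drift crosses a budget."""

    def __init__(self, budget):
        self.budget = budget
        self.cumulative = 0.0

    def update(self, round_drift):
        self.cumulative += round_drift
        if self.cumulative >= self.budget:
            self.cumulative = 0.0  # recovery is assumed to restore the model
            return True            # signal: run rapid knowledge recovery
        return False


# Hypothetical example: deeper layers assumed more forgetting-prone.
sens = {"conv1": 0.1, "conv2": 0.3, "head": 0.9}
print(sorted(select_rehearsal_layers(sens, 2)))  # ['conv2', 'head']

mon = DriftMonitor(budget=1.0)
print([mon.update(0.4) for _ in range(4)])  # [False, False, True, False]
```

The sketch only captures the control structure: which layers to protect each round, and when the long-term recovery path fires. The paper's contribution is in how sensitivity and drift are characterized theoretically, which this toy code does not attempt.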
