ArXiv TLDR

HERCULES: Hardware-Efficient, Robust, Continual Learning Neural Architecture Search

arXiv:2605.04103

Matteo Gambella, Fabrizio Pittorino, Manuel Roveri

cs.LG, cs.AR, cs.CL, cs.CV, cs.NE

TLDR

HERCULES introduces a new framework and taxonomy for Neural Architecture Search, integrating hardware efficiency, robustness, and continual learning for deployable AI.

Key contributions

  • Proposes a taxonomy for NAS, integrating efficiency, robustness, and continual learning objectives.
  • Introduces HERCULES, a framework mapping current NAS methods through this triple lens.
  • Defines "twelve labours" (desiderata) for multi-objective NAS, balancing exploration and cost.
  • Identifies research gaps and outlines a roadmap for deployable, lifelong-learning AI systems.
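To make the "balancing exploration and cost" desideratum concrete, here is a minimal, hypothetical sketch of a multi-objective NAS scoring loop over the paper's triple lens (accuracy/efficiency, robustness, forgetting). The weights, candidate fields, and random-search loop are illustrative assumptions, not the paper's method:

```python
import random

def score(candidate, weights=(1.0, 0.3, 0.5, 0.5)):
    """Scalarize the triple objective into one fitness value.

    Higher is better: accuracy and robust accuracy are rewarded,
    while hardware cost (latency) and continual-learning forgetting
    are penalized. Weights here are arbitrary placeholders.
    """
    w_acc, w_cost, w_rob, w_forget = weights
    return (w_acc * candidate["accuracy"]
            + w_rob * candidate["robust_accuracy"]
            - w_cost * candidate["latency_ms"] / 100.0
            - w_forget * candidate["forgetting"])

def sample_candidate():
    # Stand-in for training/evaluating a sampled architecture;
    # a real NAS would measure these on hardware and task streams.
    return {
        "accuracy": random.uniform(0.6, 0.95),
        "robust_accuracy": random.uniform(0.4, 0.9),
        "latency_ms": random.uniform(5.0, 50.0),
        "forgetting": random.uniform(0.0, 0.2),
    }

def random_search(sample_fn, n=20):
    # Toy search loop: sample n candidates, keep the best score.
    return max((sample_fn() for _ in range(n)), key=score)

if __name__ == "__main__":
    random.seed(0)
    best = random_search(sample_candidate)
    print({k: round(v, 3) for k, v in best.items()})
```

Real methods surveyed under HERCULES replace the random loop with gradient-based, evolutionary, or predictor-guided search, and often keep the objectives as a Pareto front rather than a single weighted score.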

Why it matters

This survey matters because it pushes NAS beyond efficiency alone, addressing the need for robust, continually learning AI systems in real-world deployments. It offers a unified perspective and a roadmap for building truly deployable, lifelong-learning AI.

Original Abstract

Neural Architecture Search (NAS) has emerged as a powerful framework for automatically discovering neural architectures that balance accuracy and efficiency. However, as AI transitions from static benchmarks to real-world deployment, the traditional focus on hardware-aware efficiency is no longer sufficient. We observe that modern NAS methods, especially those that target edge AI, are evolving to address a triple objective: Efficiency, Robustness, and Continual Learning. While efficiency ensures feasibility in resource-constrained environments, robustness guarantees reliability under environmental variabilities, and continual learning enables adaptation to sequential tasks without catastrophic forgetting. We propose a taxonomy of NAS approaches through this triple lens, distinguishing between methods targeting resource optimization, environmental resilience, and architectural plasticity. This unified perspective reveals that these axes, though often studied in isolation, are mutually reinforcing. Building on this taxonomy, we map the current landscape of these NAS methods into a new framework called Hardware-Efficient, Robust, and ContinUal LEarning Search (HERCULES). We define the desiderata, the twelve labours of HERCULES, addressing the non-trivial challenge of balancing an adequate search-space exploration with the immense computational costs of a multi-objective NAS, accounting for these crucial objectives of current AI systems. By identifying critical gaps in existing research, this survey outlines a roadmap toward integrated algorithmic, architectural, and hardware-software co-design for truly deployable, lifelong-learning AI systems.
