When Switching Algorithms Helps: A Theoretical Study of Online Algorithm Selection
TLDR
This paper theoretically demonstrates that switching between two specific evolutionary algorithms can solve the OneMax problem asymptotically faster than using either one alone.
Key contributions
- Provides the first theoretical example of asymptotic speedup for online algorithm selection.
- Demonstrates that switching from the (1+λ) EA to the (1+(λ,λ)) GA solves OneMax in O(n log log n) expected time.
- Achieves a runtime faster than either algorithm alone, even when each is optimally tuned.
- Proposes a realistic switching strategy that matches the idealized optimal performance.
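To make the switching scheme concrete, here is a minimal sketch in Python. The population sizes (`lam1`, `lam2`), the switch point `switch_frac`, and the fitness-threshold switching rule are illustrative assumptions for exposition, not the tuned parameters or the exact switching strategy analyzed in the paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def one_plus_lambda_step(x, lam, n):
    """One generation of the (1+lambda) EA: create lambda offspring by
    standard bit mutation (flip each bit with prob. 1/n), keep the best
    if it is at least as good as the parent (elitist selection)."""
    best = x
    for _ in range(lam):
        y = [b ^ (random.random() < 1 / n) for b in x]
        if onemax(y) >= onemax(best):
            best = y
    return best

def one_plus_ll_step(x, lam, n):
    """One generation of the (1+(lambda,lambda)) GA: a mutation phase with
    the higher rate lambda/n, then a biased-crossover phase that takes each
    bit from the best mutant with prob. 1/lambda, then elitist selection."""
    # Mutation phase: lambda offspring with per-bit flip probability lam/n.
    mutants = [[b ^ (random.random() < lam / n) for b in x] for _ in range(lam)]
    x_prime = max(mutants, key=onemax)
    # Crossover phase: each bit comes from the mutant with prob. 1/lambda,
    # otherwise from the parent.
    offspring = [
        [xp if random.random() < 1 / lam else b for b, xp in zip(x, x_prime)]
        for _ in range(lam)
    ]
    y_best = max(offspring, key=onemax)
    return y_best if onemax(y_best) >= onemax(x) else x

def switched_run(n, lam1=2, lam2=4, switch_frac=0.9, seed=0):
    """Run the (1+lambda) EA until the current solution has at least
    switch_frac * n one-bits, then the (1+(lambda,lambda)) GA to the
    optimum. All parameter values here are placeholders."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    while onemax(x) < switch_frac * n:
        x = one_plus_lambda_step(x, lam1, n)
    while onemax(x) < n:
        x = one_plus_ll_step(x, lam2, n)
    return x
```

The sketch reflects the paper's core idea: the (1+λ) EA handles the early part of the run, while the (1+(λ,λ)) GA, whose strength lies near the optimum, finishes the run after the switch.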
Why it matters
This paper provides the first theoretical evidence for the long-held empirical belief that switching algorithms can accelerate optimization. It offers a crucial step towards understanding when and how to effectively combine algorithms, paving the way for more robust and efficient optimization strategies.
Original Abstract
Online algorithm selection (OAS) aims to adapt the optimization process to changes in the fitness landscape and is expected to outperform any single algorithm from a given portfolio. Although this expectation is supported by numerous empirical studies, there are currently no theoretical results proving that OAS can yield asymptotic speedups (apart from some artificial examples for hyper-heuristics). Moreover, theory-based guidelines for when and how to switch between algorithms are largely missing. In this paper, we present the first theoretical example in which switching between two algorithms -- the $(1+λ)$ EA and the $(1+(λ,λ))$ GA -- solves the OneMax problem asymptotically faster than either algorithm used in isolation. We show that an appropriate choice of population sizes for the two algorithms allows the optimum to be reached in $O(n\log\log n)$ expected time, faster than the $Θ(n\sqrt{\frac{\log n \log\log\log n}{\log\log n}})$ runtime of the best of these two algorithms with optimally tuned parameters. We first establish this bound under an idealized switching rule that changes from the $(1+λ)$ to the $(1+(λ,λ))$ GA at the optimal time. We then propose a realistic switching strategy that achieves the same performance. Our analysis combines fixed-start and fixed-target perspectives, illustrating how different algorithms dominate at different stages of the optimization process. This approach offers a promising path toward a deeper theoretical understanding of OAS.