Adaptive multi-fidelity optimization with fast learning rates
Côme Fiegel, Victor Gabillon, Michal Valko
TLDR
Kometo is a new adaptive multi-fidelity optimization algorithm that achieves fast learning rates without problem-specific knowledge, improving on previously proven guarantees.
Key contributions
- Proves lower bounds on the simple regret in multi-fidelity optimization, stated in terms of a cost-to-bias function that quantifies how much bias an approximation of a given cost can carry (formalized in the sketch after this list).
- Introduces Kometo, an algorithm that matches these rates up to logarithmic factors without prior knowledge of the function's smoothness or of the fidelity assumptions.
- Shows empirically that Kometo outperforms previous multi-fidelity methods, while its analysis improves on previously proven guarantees.
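For readers new to these terms, the display below sketches one standard formalization of the budgeted multi-fidelity setting; the notation is an assumption of ours and may differ from the paper's own symbols.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical notation for the setting; the paper's own symbols may differ.
% Querying the target at fidelity cost lambda returns a value whose bias is
% controlled by a non-increasing cost-to-bias function zeta:
\[
  \bigl| f_\lambda(x) - f(x) \bigr| \;\le\; \zeta(\lambda)
  \qquad \text{for every point } x \text{ and cost } \lambda .
\]
% The learner issues queries of costs lambda_1, ..., lambda_T subject to a
% total budget Lambda:
\[
  \sum_{t=1}^{T} \lambda_t \;\le\; \Lambda .
\]
% Performance is the simple regret of the single point \hat{x} returned
% once the budget is spent:
\[
  r_\Lambda \;=\; \sup_{x \in \mathcal{X}} f(x) \;-\; f(\hat{x}) .
\]
\end{document}
```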
Why it matters
Multi-fidelity optimization is crucial for efficiently optimizing functions that are expensive to evaluate, since cheaper biased approximations can stretch a limited budget. Because Kometo adapts without prior knowledge of the function's smoothness or of the fidelity parameters, it is more practical and more broadly applicable than previous methods, and its analysis improves on previously proven guarantees.
Original Abstract
In multi-fidelity optimization, biased approximations of varying costs of the target function are available. This paper studies the problem of optimizing a locally smooth function with a limited budget, where the learner has to make a tradeoff between the cost and the bias of these approximations. We first prove lower bounds for the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm which achieves, with additional logarithmic factors, the same rates without any knowledge of the function smoothness and fidelity assumptions, and improves previously proven guarantees. We finally empirically show that our algorithm outperforms previous multi-fidelity optimization methods without the knowledge of problem-dependent parameters.
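To make the cost-vs-bias tradeoff from the abstract concrete, here is a minimal toy sketch in Python. Everything in it is a made-up example, not the authors' code: the target `f`, the cost-to-bias function `zeta`, and the naive fixed-fidelity baseline are all assumptions; Kometo itself adapts the fidelity per query without knowing `zeta` or the smoothness of `f`.

```python
# Toy sketch of budgeted multi-fidelity optimization (not the authors' code).
# Assumptions: target f on [0, 1], a fidelity oracle whose bias is bounded by
# a cost-to-bias function zeta(cost) = 1 / cost, and a total budget Lambda.
import math
import random

def f(x):
    """Expensive target function (hypothetical example)."""
    return math.sin(5 * x) * (1 - x)

def zeta(cost):
    """Cost-to-bias function: spending more per query yields less bias."""
    return 1.0 / cost

def query(x, cost, rng):
    """Cheap approximation of f(x): true value plus a bias within +/- zeta(cost)."""
    return f(x) + rng.uniform(-zeta(cost), zeta(cost))

def uniform_fixed_fidelity_search(budget, cost_per_query, rng):
    """Naive baseline: spend the whole budget at one fidelity on a uniform grid.

    Kometo instead adapts the fidelity (cost) of each query; this baseline
    only illustrates the tradeoff the paper studies.
    """
    n_queries = int(budget / cost_per_query)
    xs = [i / max(n_queries - 1, 1) for i in range(n_queries)]
    return max(xs, key=lambda x: query(x, cost_per_query, rng))

rng = random.Random(0)
budget = 100.0
x_star = max((i / 10000 for i in range(10001)), key=f)  # near-optimal reference
for cost in (0.5, 2.0, 10.0):  # many biased queries vs. few accurate ones
    x_hat = uniform_fixed_fidelity_search(budget, cost, rng)
    print(f"cost/query={cost:5.1f}  simple regret={f(x_star) - f(x_hat):.4f}")
```

Running the loop with different per-query costs shows the tension the paper addresses: cheap queries yield many heavily biased observations, expensive queries yield a few accurate ones, and an adaptive method must balance the two without being told the cost-to-bias function.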