A Note on How to Remove the $\ln\ln T$ Term from the Squint Bound
TLDR
This note shows how to remove the $\ln\ln T$ term from the Squint algorithm's data-independent bound by changing the prior.
Key contributions
- Shifted KT potentials were previously introduced (Orabona and Pál, 2016) to remove the $\ln\ln T$ factor from the parameter-free learning-with-experts bound.
- This removal method is shown to be equivalent to changing the prior in the Krichevsky--Trofimov algorithm.
- The same technique is then applied to eliminate the $\ln\ln T$ factor from the Squint algorithm's data-independent bound.
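As background for the "changing the prior" idea (standard context, not detail taken from the paper itself), the Krichevsky--Trofimov estimator is the posterior-predictive probability under a $\mathrm{Beta}(1/2, 1/2)$ prior on the bias of a binary sequence; swapping in a different Beta prior simply shifts the counts:

```latex
\[
\underbrace{p\bigl(x_{t+1}=1 \mid x_1,\dots,x_t\bigr) = \frac{k + \tfrac{1}{2}}{t + 1}}_{\text{KT: } \mathrm{Beta}(1/2,\,1/2) \text{ prior}}
\qquad\longrightarrow\qquad
\underbrace{\frac{k + a}{t + a + b}}_{\mathrm{Beta}(a,\,b) \text{ prior}}
\]
```

where $k$ is the number of ones among the first $t$ binary symbols. The note's observation is that the shifted-potential trick corresponds to a modification of this prior, which is what transfers to Squint.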
Why it matters
This note refines the theoretical guarantees for the Squint algorithm by eliminating a spurious $\ln\ln T$ term from its data-independent regret bound, yielding cleaner and more precise performance guarantees for online learning.
Original Abstract
In Orabona and Pál [2016], we introduced the shifted KT potentials, to remove the $\ln \ln T$ factor in the parameter-free learning with expert bound. In this short technical note, I show that this is equivalent to changing the prior in the Krichevsky--Trofimov algorithm. Then, I show how to use the same idea to remove the $\ln \ln T$ factor in the data-independent bound for the Squint algorithm.