ArXiv TLDR

Online learning with Erdős-Rényi side-observation graphs

2604.25271

Tomáš Kocák, Gergely Neu, Michal Valko

stat.ML · cs.LG

TLDR

This paper introduces two algorithms for adversarial multi-armed bandits with probabilistic side observations, achieving near-optimal regret even when the observation probability is unknown.

Key contributions

  • Proposes two algorithms for adversarial multi-armed bandits in which every non-chosen arm reveals its loss with a fixed but unknown probability r, independently of the other arms and of the learner's action.
  • Achieves O(sqrt((T/r) log N)) expected regret whenever r ≥ (log T)/(2N).
  • Achieves O(sqrt((T/r) log (N+T))) expected regret for smaller values of r.
  • Includes a quick estimation procedure that decides which range r falls into, so the appropriate algorithm can be selected.
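The contributions above can be illustrated with a minimal, hypothetical sketch (not the paper's actual algorithms): an Exp3-style learner where, besides the chosen arm, each other arm's loss is revealed independently with probability r, so arm i is observed with total probability p_i + (1 - p_i)·r and can be importance-weighted accordingly. All names and parameter choices here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp3_side_obs(losses, r, eta):
    """Exp3-style sketch with probabilistic side observations.

    losses: (T, N) adversarial loss matrix with entries in [0, 1].
    r: probability that each non-chosen arm reveals its loss.
    eta: learning rate (a tuning choice, not the paper's setting).
    """
    T, N = losses.shape
    L_hat = np.zeros(N)            # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        # exponential weights; subtract the min for numerical stability
        w = np.exp(-eta * (L_hat - L_hat.min()))
        p = w / w.sum()            # sampling distribution over arms
        arm = rng.choice(N, p=p)
        total_loss += losses[t, arm]
        # each non-chosen arm reveals its loss independently w.p. r
        observed = rng.random(N) < r
        observed[arm] = True
        # arm i is observed w.p. p[i] + (1 - p[i]) * r,
        # so dividing by that probability gives an unbiased loss estimate
        obs_prob = p + (1.0 - p) * r
        L_hat += observed * losses[t] / obs_prob
    return total_loss
```

With larger r more losses are revealed per round, the estimates have lower variance, and the regret shrinks, matching the O(sqrt((T/r) log N)) dependence on r in the bounds above.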

Why it matters

This work advances online learning by handling bandit problems where side information is only probabilistically revealed. The proposed methods are within logarithmic factors of the best achievable performance of any algorithm, even one that knows the observation probability r, which makes them practical when r must be learned from data.

Original Abstract

We consider adversarial multi-armed bandit problems where the learner is allowed to observe losses of a number of arms beside the arm that it actually chose. We study the case where all non-chosen arms reveal their loss with a fixed but unknown probability $r$, independently of each other and the action of the learner. We propose two algorithms that work for different ranges of $r$. We show that after $T$ rounds in a bandit problem with $N$ arms, the expected regret of our first algorithm is $O(\sqrt{(T /r) \log N })$ whenever $r\ge(\log T)/(2N)$, while our second algorithm achieves a regret of $O(\sqrt{(T/r) \log (N+T)})$ for smaller values of $r$. We also give a quick estimation procedure that decides the range of~$r$. All our bounds are within logarithmic factors of the best achievable performance of any algorithm that is even allowed to know~$r$.
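The "quick estimation procedure" mentioned in the abstract can be sketched as follows (an illustrative assumption, not the paper's exact procedure): over a few initial rounds, each of the N − 1 non-chosen arms reveals its loss independently with probability r, so the empirical fraction of revealed entries is an unbiased estimate of r.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_r(N, r_true, num_rounds):
    """Estimate the side-observation probability from simulated rounds.

    Each round, the N - 1 non-chosen arms reveal their losses
    independently with probability r_true; the mean reveal rate
    is an unbiased estimator of r_true.
    """
    revealed = rng.random((num_rounds, N - 1)) < r_true
    return revealed.mean()
```

The estimate can then be compared against the threshold (log T)/(2N) from the abstract to decide which of the two algorithms to run.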

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.