Online learning with noisy side observations
Tomáš Kocák, Gergely Neu, Michal Valko
TLDR
This paper introduces a new online learning model with noisy side observations and an efficient, parameter-free algorithm achieving $\widetilde{O}(\sqrt{\alpha^* T})$ regret.
Key contributions
- Introduces a new partial-observability model for online learning with noisy side observations.
- Models problem structure with a weighted directed graph, where edge weights reflect feedback quality.
- Presents an efficient, parameter-free algorithm achieving $\widetilde{O}(\sqrt{\alpha^* T})$ regret.
- Defines a novel graph property, the 'effective independence number' ($\alpha^*$), crucial for regret bounds.
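The feedback structure described above can be illustrated with a tiny simulation. Note the mixing form below (`s * loss` plus noise scaled by `1 - s`) is an illustrative assumption about how edge weight governs feedback quality, not necessarily the paper's exact observation model; the function name and weight matrix are hypothetical.

```python
import random

def noisy_feedback(played, losses, weights, noise_std=1.0):
    """Simulate one round of weighted-graph side observations.

    The played action's loss is observed exactly; for every other action j,
    a noisy signal is returned whose quality grows with the edge weight
    weights[played][j] in [0, 1].  Actions with zero weight yield nothing.
    (Illustrative noise model, not the paper's exact definition.)
    """
    obs = {}
    for j, loss in enumerate(losses):
        # The learner's own action is always observed noiselessly (s = 1).
        s = 1.0 if j == played else weights[played][j]
        if s > 0.0:
            # Higher edge weight -> signal closer to the true loss.
            obs[j] = s * loss + (1.0 - s) * random.gauss(0.0, noise_std)
    return obs
```

With binary edge weights, each observation is either exact (`s = 1`) or absent (`s = 0`), which matches the abstract's remark that the setting then reduces to the graph-feedback models of Mannor and Shamir (2011) and Alon et al. (2013).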
Why it matters
This work addresses online learning in realistic scenarios where side observations are noisy rather than exact. Its algorithm is fully parameter-free, requiring no knowledge or even estimation of $\alpha^*$, and it generalizes the prior partial-observability models of Mannor and Shamir (2011) and Alon et al. (2013), recovering their near-optimal regret bounds in the special case of binary edge weights.
Original Abstract
We propose a new partial-observability model for online learning problems where the learner, besides its own loss, also observes some noisy feedback about the other actions, depending on the underlying structure of the problem. We represent this structure by a weighted directed graph, where the edge weights are related to the quality of the feedback shared by the connected nodes. Our main contribution is an efficient algorithm that guarantees a regret of $\widetilde{O}(\sqrt{\alpha^* T})$ after $T$ rounds, where $\alpha^*$ is a novel graph property that we call the effective independence number. Our algorithm is completely parameter-free and does not require knowledge (or even estimation) of $\alpha^*$. For the special case of binary edge weights, our setting reduces to the partial-observability models of Mannor and Shamir (2011) and Alon et al. (2013) and our algorithm recovers the near-optimal regret bounds.