ArXiv TLDR

Vanishing L2 regularization for the softmax Multi Armed Bandit

arXiv: 2605.03752

Stefana-Lucia Anita, Gabriel Turinici

cs.LG · math.ST · stat.ML

TLDR

The paper proves convergence of the L2 regularized softmax policy gradient for Multi Armed Bandits in the regime where the regularization parameter vanishes, and confirms that this regime yields numerical advantages on standard benchmarks.

Key contributions

  • Analyzes the L2 regularized softmax policy gradient for Multi Armed Bandits (MABs), in which a quadratic penalty is subtracted from the mean reward (a runnable sketch follows this list).
  • Tackles the regime where the L2 regularization parameter vanishes, which previous convexity-based analyses could not handle.
  • Proves theoretical convergence results for this vanishing-regularization regime.
  • Empirically confirms the numerical advantages of L2 regularization on standard benchmarks.
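
To make the setup concrete, here is a minimal sketch (not the authors' code) of an L2 regularized softmax policy gradient on a K-armed bandit. The objective form J_lam(theta) = <pi_theta, r> - (lam/2)||theta||^2 and the vanishing schedule lam_t = 1/sqrt(t) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: softmax policy gradient on a K-armed bandit with a
# vanishing L2 penalty. Objective form and lam schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = 5                                # number of arms
true_means = rng.uniform(0, 1, K)    # unknown mean rewards

theta = np.zeros(K)                  # softmax logits (policy parameters)
lr = 0.1                             # gradient step size

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

for t in range(1, 20001):
    lam = 1.0 / np.sqrt(t)           # vanishing L2 weight (illustrative)
    pi = softmax(theta)
    a = rng.choice(K, p=pi)          # sample an arm from the current policy
    r = true_means[a] + 0.1 * rng.standard_normal()  # noisy reward

    # REINFORCE-style estimate of grad <pi, r>: r * (e_a - pi), minus the
    # gradient lam * theta of the quadratic penalty (lam/2) * ||theta||^2.
    grad = r * (np.eye(K)[a] - pi) - lam * theta
    theta += lr * grad

print("best arm:", true_means.argmax(), "| policy argmax:", theta.argmax())
```

The only change relative to vanilla softmax policy gradient is the extra `- lam * theta` term; letting `lam` decay to zero over the iterations is the vanishing-regularization regime the paper analyzes.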

Why it matters

Previous convexity-based analyses lacked a suitable framework for the L2 regularized softmax MAB when the regularization parameter vanishes. By proving convergence in exactly this regime, the paper makes L2 regularization a theoretically grounded and numerically advantageous option for MAB algorithms.

Original Abstract

Multi Armed Bandit (MAB) algorithms are a cornerstone of reinforcement learning and have been studied both theoretically and numerically. One of the most commonly used implementations uses a softmax mapping to prescribe the optimal policy and has served as the foundation for downstream algorithms, including REINFORCE. Distinct from vanilla approaches, we consider here the L2 regularized softmax policy gradient, where a quadratic term is subtracted from the mean reward. Previous studies exploiting convexity failed to identify a suitable theoretical framework to analyze its convergence when the regularization parameter vanishes. We prove here theoretical convergence results and confirm empirically that this regime makes the L2 regularization numerically advantageous on standard benchmarks.
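
For concreteness, a hedged reconstruction of the setup the abstract describes; the notation (theta, r, lambda) and the decreasing schedule are ours, and the paper's exact formulation may differ.

```latex
% Hedged reconstruction of the abstract's setup; notation is illustrative.
% Softmax policy over K arms, and the mean reward with a quadratic penalty:
\[
  \pi_\theta(a) = \frac{e^{\theta_a}}{\sum_{b=1}^{K} e^{\theta_b}},
  \qquad
  J_\lambda(\theta) = \sum_{a=1}^{K} \pi_\theta(a)\, r_a
                      \;-\; \frac{\lambda}{2}\,\lVert\theta\rVert_2^2 .
\]
% Gradient ascent with a regularization weight that vanishes over time,
% the regime for which the paper proves convergence:
\[
  \theta_{t+1} = \theta_t + \eta\,\nabla_\theta J_{\lambda_t}(\theta_t),
  \qquad \lambda_t \searrow 0 .
\]
```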

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.