ArXiv TLDR

The Harder Path: Last Iterate Convergence for Uncoupled Learning in Zero-Sum Games with Bandit Feedback

arXiv:2604.16087

Côme Fiegel, Pierre Ménard, Tadashi Kozuno, Michal Valko, Vianney Perchet

cs.LG · stat.ML

TLDR

New algorithms match a proven Ω(T^-1/4) lower bound on last-iterate convergence for uncoupled learning in zero-sum games with bandit feedback, showing this rate is optimal for the setting.

Key contributions

  • Investigates uncoupled last-iterate convergence to Nash equilibria in zero-sum matrix games with bandit feedback.
  • Proves that last-iterate convergence for uncoupled algorithms is inherently slower: the exploitability gap cannot shrink faster than Ω(T^-1/4), versus Ω(T^-1/2) for average-iterate convergence.
  • Proposes two algorithms whose last iterates achieve this optimal T^-1/4 rate up to constant and logarithmic factors.
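The quantity being bounded above is the exploitability gap: how much either player could gain by best-responding to the other's current policy, which is zero exactly at a Nash equilibrium. A minimal sketch of that measure (the helper name and example matrix are illustrative, not from the paper):

```python
import numpy as np

def exploitability(A, x, y):
    """Exploitability gap of the policy profile (x, y) in the zero-sum
    matrix game where the row player maximizes x^T A y.
    The gap is nonnegative, and zero iff (x, y) is a Nash equilibrium."""
    best_row = np.max(A @ y)   # row player's best-response value against y
    best_col = np.min(x @ A)   # column player's best-response value against x
    return best_row - best_col

# Matching pennies: the unique equilibrium is uniform play for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(exploitability(A, uniform, uniform))  # → 0.0
```

Last-iterate convergence asks that this gap, evaluated at the current policies (not their time averages), goes to zero as play proceeds.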

Why it matters

Learning in zero-sum games is hard when players are uncoupled (no communication between them) and receive only bandit feedback, and harder still when the last iterate, rather than the average of past play, must converge. This paper proves an inherent price for that stronger guarantee: the best attainable rate drops from T^-1/2 to T^-1/4. The two proposed algorithms attain this optimal rate, giving concrete methods for last-iterate learning in multi-agent settings with bandit feedback.

Original Abstract

We study the problem of learning in zero-sum matrix games with repeated play and bandit feedback. Specifically, we focus on developing uncoupled algorithms that guarantee, without communication between players, the convergence of the last-iterate to a Nash equilibrium. Although the non-bandit case has been studied extensively, this setting has only been explored recently, with a bound of $\mathcal{O}(T^{-1/8})$ on the exploitability gap. We show that, for uncoupled algorithms, guaranteeing convergence of the policy profiles to a Nash equilibrium is detrimental to the performance, with the best attainable rate being $\Omega(T^{-1/4})$ in contrast to the usual $\Omega(T^{-1/2})$ rate for convergence of the average iterates. We then propose two algorithms that achieve this optimal rate up to constant and logarithmic factors. The first algorithm leverages a straightforward trade-off between exploration and exploitation, while the second employs a regularization technique based on a two-step mirror descent approach.
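The paper's two algorithms are not reproduced here, but the uncoupled bandit setting itself can be illustrated with a generic EXP3-style baseline: each player runs entropic mirror descent (exponential weights) on importance-weighted payoff estimates built from its own realized payoff only, with forced uniform exploration. All parameter values below are illustrative, and this baseline carries no claim to the paper's T^-1/4 last-iterate guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix(w, gamma):
    """Normalize weights into a policy and mix in uniform exploration."""
    p = w / w.sum()
    return (1.0 - gamma) * p + gamma / len(w)

def play(A, T=3000, lr=0.05, gamma=0.05):
    """Two uncoupled learners on payoff matrix A (row maximizes x^T A y).
    Each player updates from its own realized payoff only (bandit
    feedback, no communication) via exponential weights over
    importance-weighted payoff estimates.
    Returns the last-iterate policy profile (x, y)."""
    n, m = A.shape
    w_row, w_col = np.ones(n), np.ones(m)
    x, y = mix(w_row, gamma), mix(w_col, gamma)
    for _ in range(T):
        x, y = mix(w_row, gamma), mix(w_col, gamma)
        i = rng.choice(n, p=x)   # row player's sampled action
        j = rng.choice(m, p=y)   # column player's sampled action
        g = A[i, j]              # realized payoff: row's gain, column's loss
        # importance-weighted update of the played action only
        w_row[i] *= np.exp(lr * g / x[i])
        w_col[j] *= np.exp(-lr * g / y[j])
        w_row /= w_row.sum()     # renormalize for numerical stability
        w_col /= w_col.sum()
    return x, y

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies
x, y = play(A)
```

Plain exponential weights of this kind is known to make the average iterates converge while the last iterates may cycle; the paper's contribution is precisely to modify such schemes (via an exploration-exploitation trade-off, or a two-step mirror descent regularization) so that the last iterate itself converges at the optimal rate.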

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.