ArXiv TLDR

An Optimal Sauer Lemma Over $k$-ary Alphabets

2604.12952

Steve Hanneke, Qinglin Meng, Shay Moran, Amirreza Shaeiri

cs.LG, math.CO, stat.ML

TLDR

This paper establishes a sharp Sauer inequality for multiclass and list prediction using the DS dimension, providing optimal bounds for k-ary alphabets.

Key contributions

  • Establishes a sharp Sauer inequality for multiclass and list prediction.
  • Uses Daniely-Shalev-Shwartz (DS) dimension, not Natarajan, for optimal bounds.
  • Bound is tight for all alphabet sizes, list sizes, and dimension values.
  • Improves sample complexity for list PAC learning and uniform convergence.

Why it matters

This work improves Sauer-type bounds in learning theory for multiclass and list prediction. By working with the DS dimension rather than the Natarajan dimension, it achieves optimal dependence on the alphabet size and list size, which in turn yields sharper sample complexity bounds for list PAC learning and uniform convergence. These parameters characterize multiclass and list PAC learnability, so tight bounds here feed directly into the foundations of learning theory.

Original Abstract

The Sauer-Shelah-Perles Lemma is a cornerstone of combinatorics and learning theory, bounding the size of a binary hypothesis class in terms of its Vapnik-Chervonenkis (VC) dimension. For classes of functions over a $k$-ary alphabet, namely the multiclass setting, the Natarajan dimension has long served as an analogue of VC dimension, yet the corresponding Sauer-type bounds are suboptimal for alphabet sizes $k>2$. In this work, we establish a sharp Sauer inequality for multiclass and list prediction. Our bound is expressed in terms of the Daniely--Shalev-Shwartz (DS) dimension, and more generally with its extension, the list-DS dimension -- the combinatorial parameters that characterize multiclass and list PAC learnability. Our bound is tight for every alphabet size $k$, list size $\ell$, and dimension value, replacing the exponential dependence on $\ell$ in the Natarajan-based bound by the optimal polynomial dependence, and improving the dependence on $k$ as well. Our proof uses the polynomial method. In contrast to the classical VC case, where several direct combinatorial proofs are known, we are not aware of any purely combinatorial proof in the DS setting. This motivates several directions for future research, which are discussed in the paper. As consequences, we obtain improved sample complexity upper bounds for list PAC learning and for uniform convergence of list predictors, sharpening the recent results of Charikar et al.~(STOC~2023), Hanneke et al.~(COLT~2024), and Brukhim et al.~(NeurIPS~2024).
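For context, the classical (binary) Sauer-Shelah-Perles Lemma referenced in the abstract states that a hypothesis class $H \subseteq \{0,1\}^n$ with VC dimension $d$ satisfies $|H| \le \sum_{i=0}^{d} \binom{n}{i}$. A minimal sketch computing this bound (the function name `sauer_bound` is our own, not from the paper):

```python
from math import comb

def sauer_bound(n: int, d: int) -> int:
    """Classical Sauer-Shelah-Perles bound: the maximum size of a
    binary hypothesis class on n points with VC dimension d is
    sum_{i=0}^{d} C(n, i)."""
    return sum(comb(n, i) for i in range(d + 1))

# For n = 10 points and VC dimension d = 2:
# 1 + 10 + 45 = 56, polynomial in n for fixed d,
# versus 2^10 = 1024 functions with no dimension constraint.
print(sauer_bound(10, 2))  # → 56
```

The paper's contribution is the analogue of this inequality for $k$-ary alphabets and list prediction, stated in terms of the (list-)DS dimension rather than the Natarajan dimension, with tight dependence on $k$ and $\ell$.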
