Separating Geometry from Probability in the Analysis of Generalization
Maxim Raginsky, Benjamin Recht
TLDR
This paper proposes a deterministic framework for analyzing generalization in machine learning, separating the geometric relationship between in-sample and out-of-sample data from probabilistic assumptions about how the data were generated.
Key contributions
- Introduces a deterministic framework for generalization using sensitivity analysis of optimization problems.
- Obtains generalization bounds by purely deterministic means, in the form of variational principles that relate in-sample and out-of-sample performance.
- Quantifies the generalization error by how geometrically close the out-of-sample data are to the in-sample data (a hedged sketch of one bound of this shape follows this list).
- Enables ex post statistical characterization of when the deterministic error term is small, either on average or with high probability.
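To make the shape of such a bound concrete, here is an illustrative sketch under an assumed Lipschitz condition; it is not a theorem from the paper. If the loss $\ell(w, \cdot)$ is $L$-Lipschitz in the data point, $S = \{z_1, \dots, z_n\}$, and $S' = \{z'_1, \dots, z'_n\}$, then for any permutation $\pi$ of $\{1, \dots, n\}$:

```latex
% Illustrative only: one possible shape of a deterministic generalization
% bound; the paper's actual variational principles may differ.
\[
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell(w, z'_i)}_{\text{out-of-sample evaluation}}
  \;\le\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell(w, z_{\pi(i)})}_{\text{in-sample evaluation}}
  \;+\;
  \underbrace{\frac{L}{n}\sum_{i=1}^{n} \bigl\| z'_i - z_{\pi(i)} \bigr\|}_{\text{geometric error term}}
\]
```

Because $\pi$ is a permutation, the middle term is exactly the in-sample evaluation, and minimizing over $\pi$ turns the error term into an optimal-transport-style distance between $S'$ and $S$; statistical assumptions would only enter afterwards, to argue when that distance is typically small.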
Why it matters
Traditional generalization analysis rests on i.i.d. assumptions that cannot be verified even in principle. By separating the geometric part of the argument from the probabilistic part, this framework yields purely deterministic bounds whose statistical interpretation can be supplied ex post, which may lead to more verifiable and practical guarantees.
Original Abstract
The goal of machine learning is to find models that minimize prediction error on data that has not yet been seen. Its operational paradigm assumes access to a dataset $S$ and articulates a scheme for evaluating how well a given model performs on an arbitrary sample. The sample can be $S$ (in which case we speak of ``in-sample'' performance) or some entirely new $S'$ (in which case we speak of ``out-of-sample'' performance). Traditional analysis of generalization assumes that both in- and out-of-sample data are i.i.d.\ draws from an infinite population. However, these probabilistic assumptions cannot be verified even in principle. This paper presents an alternative view of generalization through the lens of sensitivity analysis of solutions of optimization problems to perturbations in the problem data. Under this framework, generalization bounds are obtained by purely deterministic means and take the form of variational principles that relate in-sample and out-of-sample evaluations through an error term that quantifies how close out-of-sample data are to in-sample data. Statistical assumptions can then be used \textit{ex post} to characterize the situations when this error term is small (either on average or with high probability).
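As a numeric sanity check of a bound of this shape (again a hedged sketch under an assumed Lipschitz loss and equal-size datasets, not the paper's construction; the toy data and all variable names below are hypothetical):

```python
# Hedged illustrative sketch: a deterministic "in-sample + geometric error"
# bound for a fixed linear model under an absolute-error loss, which is
# Lipschitz in the joint data point z = (x, y). Not the paper's construction.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy in-sample set S and out-of-sample set S' of equal size (assumption).
n, d = 50, 3
X, Xp = rng.normal(size=(n, d)), rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
yp = Xp @ w_true + 0.1 * rng.normal(size=n)

# Fit a model on S only (ordinary least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def loss(Xa, ya):
    # Absolute-error loss per point for the fixed model w.
    return np.abs(Xa @ w - ya)

# z = (x, y) -> |w.x - y| is Lipschitz with constant sqrt(1 + ||w||^2)
# with respect to the Euclidean norm on the joint point (x, y).
L = np.sqrt(1.0 + np.dot(w, w))

# Geometric error term: mean distance under the best one-to-one matching
# between S' and S (an optimal-transport-style quantity).
Z = np.hstack([X, y[:, None]])      # points of S as (x, y)
Zp = np.hstack([Xp, yp[:, None]])   # points of S'
dists = np.linalg.norm(Zp[:, None, :] - Z[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(dists)
match_cost = dists[rows, cols].mean()

in_sample = loss(X, y).mean()
out_sample = loss(Xp, yp).mean()
bound = in_sample + L * match_cost  # holds deterministically by Lipschitzness

print(f"out-of-sample {out_sample:.3f} <= in-sample {in_sample:.3f} "
      f"+ L*dist {L * match_cost:.3f} = {bound:.3f}")
```

The printed inequality holds for any pair of equal-size datasets with no distributional assumption; probabilistic reasoning would enter only ex post, to ask when the matching cost is small for the data one actually encounters.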