Estimating the expected output of wide random MLPs more efficiently than sampling
Wilson Wu, Victor Lecomte, Michael Winer, George Robinson, Jacob Hilton, et al.
TLDR
This paper introduces a novel method to estimate the expected output of wide random MLPs without sampling, using cumulants and Hermite expansions.
Key contributions
- Estimates expected output of wide random MLPs over Gaussian inputs without network sampling.
- Approximates activation distributions using cumulants and Hermite expansions at each layer.
- Achieves target mean squared error with substantially fewer FLOPs than Monte Carlo sampling.
- Performs well at estimating probabilities of rare events and can be used for model training.
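To make the comparison concrete, here is a minimal sketch of the Monte Carlo baseline the paper measures itself against: drawing Gaussian inputs, running them through a randomly initialized MLP, and averaging. The network widths, weight scaling, and sample count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mlp(widths, rng):
    """Sample MLP weights at initialization (1/sqrt(n_in) scaling)."""
    return [rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
            for n_in, n_out in zip(widths[:-1], widths[1:])]

def forward(x, weights):
    """ReLU hidden layers, linear output layer."""
    for W in weights[:-1]:
        x = np.maximum(x @ W, 0.0)
    return x @ weights[-1]

# Monte Carlo estimate of E_x[f(x)] over x ~ N(0, I): the sampling
# baseline whose FLOP cost the paper's estimator undercuts.
widths = [64, 512, 512, 1]
weights = random_mlp(widths, rng)
xs = rng.normal(size=(10_000, widths[0]))
mc_estimate = forward(xs, weights).mean(axis=0)
```

Each sample costs a full forward pass, and the estimator's error shrinks only as 1/sqrt(n_samples); the paper's contribution is hitting a target mean squared error with far fewer FLOPs by sidestepping this loop entirely.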
Why it matters
Estimating an expected loss by drawing samples is the default in machine learning, but it is not necessarily efficient. For wide MLPs, this work offers a significantly faster, sampling-free alternative that is especially effective at estimating rare-event probabilities, suggesting a path toward models with reduced catastrophic tail risks.
Original Abstract
By far the most common way to estimate an expected loss in machine learning is to draw samples, compute the loss on each one, and take the empirical average. However, sampling is not necessarily optimal. Given an MLP at initialization, we show how to estimate its expected output over Gaussian inputs without running samples through the network at all. Instead, we produce approximate representations of the distributions of activations at each layer, leveraging tools such as cumulants and Hermite expansions. We show both theoretically and empirically that for sufficiently wide networks, our estimator achieves a target mean squared error using substantially fewer FLOPs than Monte Carlo sampling. We find moreover that our methods perform particularly well at estimating the probabilities of rare events, and additionally demonstrate how they can be used for model training. Together, these findings suggest a path to producing models with a greatly reduced probability of catastrophic tail risks.
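To give a flavor of the "approximate representations of the distributions of activations at each layer" idea, here is a much-simplified moment-propagation sketch under a Gaussian approximation: for zero-mean Gaussian pre-activations, the moments of a ReLU output have closed forms, so the activation variance can be pushed through the layers analytically, with no samples. This is an illustrative toy, not the paper's full cumulant/Hermite estimator; the 1/n_in weight scaling and mean-field variance recursion are standard assumptions, not details taken from the paper.

```python
import numpy as np

def relu_gaussian_moments(var):
    """First two moments of relu(h) for h ~ N(0, var), in closed form."""
    sigma = np.sqrt(var)
    mean = sigma / np.sqrt(2.0 * np.pi)  # E[relu(h)]
    second = var / 2.0                   # E[relu(h)^2]
    return mean, second

def propagate_variance(input_var, n_layers):
    """Propagate pre-activation variance through ReLU layers with
    W_ij ~ N(0, 1/n_in): in the wide (mean-field) limit, the next
    layer's pre-activation variance equals the previous layer's
    second activation moment."""
    var = input_var
    for _ in range(n_layers):
        _, second = relu_gaussian_moments(var)
        var = second  # E[h_{l+1}^2] = E[relu(h_l)^2]
    return var

# Example: unit input variance is halved by each ReLU layer.
print(propagate_variance(1.0, 3))  # -> 0.125
```

The paper goes well beyond this second-moment picture, tracking higher cumulants and Hermite coefficients so that non-Gaussian corrections (which matter for rare-event tails) are captured, but the layer-by-layer analytic propagation is the same basic shape.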