Relaxation-Informed Training of Neural Network Surrogate Models
TLDR
This paper introduces training regularizers for ReLU neural networks that significantly improve the tractability of the mixed-integer linear programs (MILPs) in which they are embedded.
Key contributions
- Proposes bound-based regularizers to penalize big-M constants and unstable neurons in MILP formulations.
- Introduces an LP relaxation gap regularizer that explicitly penalizes the continuous relaxation gap during training.
- Derives the gradient for the LP gap regularizer using LP dual variables, avoiding custom auto-diff.
- Shows combined regularizers reduce MILP solve times by up to four orders of magnitude.
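The bound-based regularizers rest on a standard observation: propagating box bounds through each affine layer yields pre-activation bounds that serve as the big-M constants of the MILP formulation, and neurons whose bounds straddle zero ("unstable" neurons) each require a binary variable. As a generic illustration of this idea (not the paper's actual implementation; the function names and the exact penalty form are assumptions), a minimal interval-bound sketch in NumPy:

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate box bounds [lo, hi] through an affine layer z = W x + b.
    The resulting pre-activation bounds act as big-M constants in the MILP."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    z_lo = W_pos @ lo + W_neg @ hi + b
    z_hi = W_pos @ hi + W_neg @ lo + b
    return z_lo, z_hi

def bound_regularizer(z_lo, z_hi):
    """Penalize big-M magnitudes and count unstable neurons
    (those with z_lo < 0 < z_hi, which need a binary variable)."""
    big_m_penalty = np.sum(np.maximum(z_hi, 0.0) + np.maximum(-z_lo, 0.0))
    unstable = int(np.sum((z_lo < 0) & (z_hi > 0)))
    return big_m_penalty, unstable

# Example: a 2-neuron layer over the input box [-1, 1]^2
W = np.array([[1.0, -1.0], [2.0, 0.0]])
b = np.array([0.0, -1.0])
z_lo, z_hi = interval_bounds(W, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
big_m, unstable = bound_regularizer(z_lo, z_hi)
```

In a training loop, a differentiable surrogate of `big_m_penalty` (e.g. a softplus count of unstable neurons) would be added to the prediction loss with a weighting coefficient.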
Why it matters
This work addresses a critical bottleneck in using neural networks for global optimization: the computational cost of solving the resulting MILPs. By directly optimizing for MILP tractability during training, it enables much faster and more scalable global optimization, broadening the applicability of neural network surrogates in complex decision-making problems.
Original Abstract
ReLU neural networks trained as surrogate models can be embedded exactly in mixed-integer linear programs (MILPs), enabling global optimization over the learned function. The tractability of the resulting MILP depends on structural properties of the network, i.e., the number of binary variables in associated formulations and the tightness of the continuous LP relaxation. These properties are determined during training, yet standard training objectives (prediction loss with classical weight regularization) offer no mechanism to directly control them. This work studies training regularizers that directly target downstream MILP tractability. Specifically, we propose simple bound-based regularizers that penalize the big-M constants of MILP formulations and/or the number of unstable neurons. Moreover, we introduce an LP relaxation gap regularizer that explicitly penalizes the per-sample gap of the continuous relaxation at training points. We derive its associated gradient and provide an implementation from LP dual variables without custom automatic differentiation tools. We show that combining the above regularizers can approximate the full total derivative of the LP gap with respect to the network parameters, capturing both direct and indirect sensitivities. Experiments on non-convex benchmark functions and a two-stage stochastic programming problem with quantile neural network surrogates demonstrate that the proposed regularizers can reduce MILP solve times by up to four orders of magnitude relative to an unregularized baseline, while maintaining competitive surrogate model accuracy.
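The abstract's claim that the LP-gap gradient can be read off from LP dual variables follows from LP sensitivity analysis: at a non-degenerate optimum, the dual variables are the partial derivatives of the LP's optimal value with respect to the constraint right-hand sides, so no custom automatic differentiation through the solver is needed. A minimal SciPy illustration of this general principle (a toy LP, not the paper's formulation), verified against a finite-difference check:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: min c^T x  s.t.  A_ub x <= b_ub, x >= 0
c = np.array([-1.0, -1.0])
A_ub = np.array([[1.0, 0.0], [0.0, 1.0]])
b_ub = np.array([1.0, 2.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
# HiGHS reports constraint marginals: d(optimal value) / d(b_ub)
duals = res.ineqlin.marginals

# Finite-difference check on the first right-hand side
eps = 1e-6
b_pert = b_ub.copy()
b_pert[0] += eps
res_pert = linprog(c, A_ub=A_ub, b_ub=b_pert, bounds=[(0, None)] * 2, method="highs")
fd = (res_pert.fun - res.fun) / eps
```

Since the big-M constants enter the ReLU MILP's relaxation as right-hand-side data that depend on the network weights, chaining these marginals through that dependence is what makes a dual-based gradient for a relaxation-gap penalty possible.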