ParamBoost: Gradient Boosted Piecewise Cubic Polynomials
TLDR
ParamBoost is a novel GAM whose shape functions are learnt by gradient boosting piecewise cubic polynomials; expert constraints (e.g. continuity, monotonicity, convexity) make the model both interpretable and high-performing.
Key contributions
- Introduces ParamBoost, a novel GAM using gradient-boosted piecewise cubic polynomials.
- Integrates expert knowledge through constraints: continuity of the shape functions and their derivatives (up to C2), monotonicity, convexity, feature interactions, and model specification.
- Achieves state-of-the-art performance: the unconstrained model outperforms existing GAMs across several real-world datasets.
- Enables selective constraint application, tailoring interpretability with modest performance trade-offs.
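As an example of how such a constraint can be verified, a cubic shape-function segment p(x) = ax³ + bx² + cx + d is non-decreasing on an interval exactly when its derivative q(x) = 3ax² + 2bx + c is non-negative there; since q is quadratic, it suffices to check the interval endpoints and q's stationary point. The sketch below is illustrative only (the function name and tolerance are our own, not from the paper):

```python
def cubic_is_nondecreasing(a, b, c, d, lo, hi):
    """Check that p(x) = a x^3 + b x^2 + c x + d is non-decreasing
    on [lo, hi], i.e. its derivative q(x) = 3a x^2 + 2b x + c >= 0.

    A quadratic attains its extrema on a closed interval at the
    endpoints or at its stationary point, so checking those three
    candidate points suffices. (Illustrative helper, not the paper's.)
    """
    q = lambda x: 3 * a * x**2 + 2 * b * x + c
    candidates = [lo, hi]
    if a != 0:
        vertex = -b / (3 * a)  # q'(x) = 6a x + 2b = 0
        if lo < vertex < hi:
            candidates.append(vertex)
    # small tolerance guards against floating-point noise
    return all(q(x) >= -1e-12 for x in candidates)
```

A boosting step could reject (or project) any candidate cubic segment failing this check, which is one simple way a monotonicity constraint can be enforced during fitting.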
Why it matters
ParamBoost addresses a key limitation of traditional GAMs by allowing the incorporation of expert knowledge through various constraints. This leads to more robust, interpretable models that can be tailored to specific application needs. In the unconstrained setting it also outperforms state-of-the-art GAMs on several real-world datasets.
Original Abstract
Generalized Additive Models (GAMs) can be used to create non-linear glass-box (i.e. explicitly interpretable) models, where the predictive function is fully observable over the complete input space. However, glass-box interpretability itself does not allow for the incorporation of expert knowledge from the modeller. In this paper, we present ParamBoost, a novel GAM whose shape functions (i.e. mappings from individual input features to the output) are learnt using a Gradient Boosting algorithm that fits cubic polynomial functions at leaf nodes. ParamBoost incorporates several constraints commonly used in parametric analysis to ensure well-refined shape functions. These constraints include: (i) continuity of the shape functions and their derivatives (up to C2); (ii) monotonicity; (iii) convexity; (iv) feature interaction constraints; and (v) model specification constraints. Empirical results show that the unconstrained ParamBoost model consistently outperforms state-of-the-art GAMs across several real-world datasets. We further demonstrate that modellers can selectively impose required constraints at a modest trade-off in predictive performance, allowing the model to be fully tailored to application-specific interpretability and parametric-analysis requirements.
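To make the abstract's core idea concrete, here is a minimal toy sketch of gradient boosting with cubic-polynomial base learners under squared-error loss. Note the simplification: the paper fits cubic polynomials at the leaf nodes of boosted trees, whereas for brevity this sketch fits one global cubic per feature per round. All function and variable names are hypothetical, not the paper's implementation.

```python
import numpy as np

def fit_cubic_gam(X, y, n_rounds=30, lr=0.1):
    """Toy gradient boosting with cubic-polynomial base learners.

    Under squared-error loss the negative gradient is the residual,
    so each round greedily picks the single feature whose cubic fit
    to the residuals reduces the sum of squared errors most.
    """
    n, d = X.shape
    base = y.mean()
    pred = np.full(n, base)
    shape_terms = [[] for _ in range(d)]  # per-feature polynomial pieces
    for _ in range(n_rounds):
        resid = y - pred  # negative gradient of squared error
        best = None
        for j in range(d):
            coeffs = np.polyfit(X[:, j], resid, deg=3)   # cubic fit
            fit = np.polyval(coeffs, X[:, j])
            sse = np.sum((resid - fit) ** 2)
            if best is None or sse < best[0]:
                best = (sse, j, coeffs, fit)
        _, j, coeffs, fit = best
        shape_terms[j].append(coeffs)  # extend feature j's shape function
        pred += lr * fit
    return base, shape_terms, lr

def predict_cubic_gam(base, shape_terms, lr, X):
    """Sum the learnt per-feature shape functions (a glass-box GAM)."""
    out = np.full(X.shape[0], base)
    for j, terms in enumerate(shape_terms):
        for coeffs in terms:
            out += lr * np.polyval(coeffs, X[:, j])
    return out
```

Because the prediction is a sum of per-feature polynomial shape functions, each feature's contribution can be plotted in isolation over its full range, which is what makes the model a glass box.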