ArXiv TLDR

Meta Additive Model: Interpretable Sparse Learning With Auto Weighting

arXiv: 2604.20111

Xuelin Zhang, Xinyue Liu, Lingjuan Wu, Hong Chen

cs.LG · cs.AI · stat.ML

TLDR

MAM is a meta additive model that automatically learns data-driven loss weights via bilevel optimization, making sparse learning robust to complex noise while preserving interpretability.

Key contributions

  • Introduces Meta Additive Model (MAM) for robust and interpretable sparse learning.
  • Uses bilevel optimization to learn data-driven loss weighting via an MLP trained on meta data (a code sketch follows this list).
  • Handles non-Gaussian noise, outliers, noisy labels, and imbalanced classes, outperforming prior additive models.
  • Provides theoretical guarantees for convergence, generalization, and variable selection consistency.
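
To make the bilevel weighting idea concrete, here is a minimal PyTorch sketch in the spirit of meta reweighting methods such as Meta-Weight-Net (Shu et al., 2019). It illustrates the general technique, not the paper's implementation: `WeightMLP`, `meta_step`, the MSE loss, the single-step virtual update, and all hyperparameters are our own illustrative assumptions.

```python
# Minimal sketch of bilevel loss reweighting (not the paper's code).
# Assumes model(x) and y both have shape (batch, 1).
import torch
import torch.nn as nn
from torch.func import functional_call

class WeightMLP(nn.Module):
    """Maps each per-sample loss to a weight in (0, 1)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, losses):  # losses: (batch, 1)
        return self.net(losses)

def meta_step(model, wnet, opt_model, opt_wnet,
              x, y, x_meta, y_meta, inner_lr=1e-2):
    """One outer iteration: update the weighting net on clean meta data,
    then update the model with the refreshed sample weights."""
    mse = nn.MSELoss(reduction="none")
    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]

    # Inner step: weighted training loss and a *virtual* SGD update,
    # kept differentiable w.r.t. the weighting net's parameters.
    losses = mse(model(x), y)           # per-sample losses, (batch, 1)
    w = wnet(losses.detach())           # detach: gradients reach wnet only
                                        # through its own parameters
    inner_loss = (w * losses).mean()
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    virtual = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Outer step: unweighted loss of the virtual model on the meta set;
    # its gradient flows through the virtual update into the weighting net.
    meta_loss = mse(functional_call(model, virtual, (x_meta,)), y_meta).mean()
    opt_wnet.zero_grad()
    meta_loss.backward()
    opt_wnet.step()

    # Final step: real model update using the refreshed weights.
    losses = mse(model(x), y)
    with torch.no_grad():
        w = wnet(losses)
    train_loss = (w * losses).mean()
    opt_model.zero_grad()
    train_loss.backward()
    opt_model.step()
    return train_loss.item(), meta_loss.item()

# Hypothetical usage with a tiny regression model:
# model = nn.Linear(20, 1); wnet = WeightMLP()
# opt_model = torch.optim.SGD(model.parameters(), lr=1e-2)
# opt_wnet = torch.optim.Adam(wnet.parameters(), lr=1e-3)
# meta_step(model, wnet, opt_model, opt_wnet, x, y, x_meta, y_meta)
```

A sparsity penalty on the additive components, which the paper needs for variable selection, would be added to the inner objective; it is omitted here to keep the loop readable.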

Why it matters

Existing sparse additive models degrade under complex noise, and standard sample-reweighting remedies require prespecified weighting functions and extra hand-tuned hyperparameters. MAM learns data-driven loss weights automatically, boosting robustness in high-dimensional data analysis while retaining the interpretability of additive models.

Original Abstract

Sparse additive models have attracted much attention in high-dimensional data analysis due to their flexible representation and strong interpretability. However, most existing models are limited to single-level learning under the mean-squared error criterion, whose empirical performance can degrade significantly in the presence of complex noise, such as non-Gaussian perturbations, outliers, noisy labels, and imbalanced categories. The sample reweighting strategy is widely used to reduce the model's sensitivity to atypical data; however, it typically requires prespecifying the weighting functions and manually selecting additional hyperparameters. To address this issue, we propose a new meta additive model (MAM) based on the bilevel optimization framework, which learns data-driven weighting of individual losses by parameterizing the weighting function via an MLP trained on meta data. MAM is capable of a variety of learning tasks, including variable selection, robust regression estimation, and imbalanced classification. Theoretically, MAM provides guarantees on convergence in computation, algorithmic generalization, and variable selection consistency under mild conditions. Empirically, MAM outperforms several state-of-the-art additive models on both synthetic and real-world data under various data corruptions.
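
To make the structure concrete, a standard way to write this kind of bilevel reweighting objective is shown below. The notation is ours and the paper's exact formulation may differ (e.g., in the penalty term): θ parameterizes the weighting MLP V, ℓᵢ(w) is the i-th training loss, and Ω is a sparsity penalty supporting variable selection.

```latex
\min_{\theta}\;
  \frac{1}{M} \sum_{j=1}^{M}
    \ell\bigl( f_{w^{*}(\theta)}(x_j^{\mathrm{meta}}),\, y_j^{\mathrm{meta}} \bigr)
\quad \text{s.t.} \quad
w^{*}(\theta) \in \arg\min_{w}\;
  \frac{1}{N} \sum_{i=1}^{N}
    V\bigl( \ell_i(w);\, \theta \bigr)\, \ell_i(w)
  \;+\; \lambda\, \Omega(w)
```

The outer loss on the small clean meta set is unweighted; gradients flow to θ through the inner solution w*(θ), which in practice is approximated by one or a few SGD steps, as in the sketch above.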
