Theta-regularized Kriging: Modelling and Algorithms
TLDR
Theta-regularized Kriging improves prediction accuracy and stability by penalizing the Gaussian-process hyperparameter theta; the penalized optimization problem is derived from a maximum-likelihood perspective.
Key contributions
- Proposed Theta-regularized Kriging model to penalize the Gaussian process hyperparameter theta.
- Derived the optimization problem for Theta-regularized Kriging from a maximum likelihood perspective.
- Implemented the model with an iterative regularized optimization algorithm and a geometric-search cross-validation tuning algorithm, covering Lasso, Ridge, and Elastic-net penalties.
- Showed superior accuracy and stability compared to other penalized Kriging models on nine numerical functions and two practical engineering examples.
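The core idea — adding a penalty on theta to the Kriging maximum-likelihood objective — can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the Gaussian correlation kernel, the concentrated likelihood form, the Elastic-net penalty parameterization (`lam`, `alpha`), and the simple grid search are all assumptions chosen for clarity.

```python
# Hedged sketch of Theta-regularized Kriging: an ordinary-Kriging negative
# log-likelihood with an Elastic-net penalty on the correlation
# hyperparameter theta. Illustrative only; the paper's iterative
# regularized optimization algorithm is not reproduced here.
import numpy as np

def corr_matrix(X, theta):
    """Gaussian correlation R_ij = exp(-sum_k theta_k * (x_ik - x_jk)^2)."""
    d2 = (X[:, None, :] - X[None, :, :]) ** 2      # pairwise squared diffs
    return np.exp(-np.tensordot(d2, theta, axes=([2], [0])))

def penalized_nll(theta, X, y, lam=0.1, alpha=0.5, nugget=1e-8):
    """Concentrated negative log-likelihood plus an Elastic-net penalty.

    alpha=1 gives Lasso, alpha=0 gives Ridge, 0<alpha<1 gives Elastic-net.
    """
    n = len(y)
    R = corr_matrix(X, theta) + nugget * np.eye(n)   # nugget for stability
    L = np.linalg.cholesky(R)
    ones = np.ones(n)
    # Generalized least-squares estimates of constant trend and variance
    beta = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))
    r = y - beta
    sigma2 = (r @ np.linalg.solve(R, r)) / n
    logdet = 2.0 * np.log(np.diag(L)).sum()
    nll = 0.5 * (n * np.log(sigma2) + logdet)
    penalty = lam * (alpha * np.abs(theta).sum()
                     + (1 - alpha) * (theta ** 2).sum())
    return nll + penalty

# Toy 1-D demo: pick theta on a log grid by penalized likelihood.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(6.0 * X[:, 0]) + 0.05 * rng.standard_normal(20)
grid = np.logspace(-2, 2, 50)
best_theta = min(grid, key=lambda t: penalized_nll(np.array([t]), X, y))
```

The penalty shrinks theta toward zero, which smooths the fitted surrogate; the Lasso term can drive individual theta components exactly to zero, effectively screening out inactive input dimensions.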
Why it matters
By penalizing the hyperparameter theta, Theta-regularized Kriging yields more accurate and stable surrogate models than other penalized Kriging variants. This robustness matters for engineering and scientific applications that rely on Kriging surrogates for design and optimization.
Original Abstract
To obtain more accurate model parameters and improve prediction accuracy, we proposed a regularized Kriging model that penalizes the hyperparameter theta in the Gaussian stochastic process, termed the Theta-regularized Kriging. We derived the optimization problem for this model from a maximum likelihood perspective. Additionally, we presented specific implementation details for the iterative process, including the regularized optimization algorithm and the geometric search cross-validation tuning algorithm. Three distinct penalty methods, Lasso, Ridge, and Elastic-net regularization, were meticulously considered. Meanwhile, the proposed Theta-regularized Kriging models were tested on nine common numerical functions and two practical engineering examples. The results demonstrate that, compared with other penalized Kriging models, the proposed model performs better in terms of accuracy and stability.
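The abstract mentions a geometric-search cross-validation tuning algorithm. One plausible reading, sketched below under stated assumptions, is that candidate penalty weights form a geometric sequence and the weight with the lowest K-fold cross-validation error is kept; the grid bounds, ratio, fold count, and the `fit_predict` callback interface are all hypothetical, and a simple ridge-regression stand-in replaces the Kriging fit to keep the demo short.

```python
# Hedged sketch of geometric-search cross-validation for the penalty
# weight lambda. Purely illustrative; the paper's tuning algorithm
# may differ in its grid, refinement, and error criterion.
import numpy as np

def geometric_grid(lo=1e-3, hi=1e1, ratio=10.0):
    """Candidates lo, lo*ratio, lo*ratio^2, ... up to hi (inclusive)."""
    vals, lam = [], lo
    while lam <= hi * (1 + 1e-12):   # small tolerance for float rounding
        vals.append(lam)
        lam *= ratio
    return np.array(vals)

def kfold_cv_score(lam, X, y, fit_predict, k=5, seed=0):
    """Mean squared K-fold CV error for penalty weight lam.

    fit_predict(X_train, y_train, X_test, lam) -> predictions (assumed API).
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = fit_predict(X[train], y[train], X[test], lam)
        errs.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errs))

# Demo with a ridge-regression stand-in for the Kriging fit.
def ridge_fit_predict(Xtr, ytr, Xte, lam):
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    return Xte @ np.linalg.solve(A, Xtr.T @ ytr)

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(60)
grid = geometric_grid()
best_lam = min(grid, key=lambda l: kfold_cv_score(l, X, y, ridge_fit_predict))
```

A geometric grid covers several orders of magnitude with few candidates, which suits penalty weights whose useful range is typically logarithmic rather than linear.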