Meta-Learned Basis Adaptation for Parametric Linear PDEs
Vikas Dwivedi, Monica Sigovan, Bruno Sixou
TLDR
KAPI is a hybrid meta-learned framework that uses an adaptive Gaussian basis and a least-squares corrector to efficiently solve parametric linear PDEs.
Key contributions
- Proposes KAPI, a hybrid meta-learned framework for parametric linear PDEs with an adaptive basis and corrector.
- KAPI's meta-network learns to adaptively generate Gaussian basis geometry (centers, widths) based on PDE parameters.
- A second-stage corrector augments the predictor's basis and uses a one-shot PIELM-style solve for high accuracy.
- Outperforms parametric PINNs, physics-informed DeepONet, and uniform-grid PIELM correctors, often improving accuracy by one or more orders of magnitude.
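The meta-network idea in the contributions above can be sketched as a small map from PDE parameters to basis geometry. This is a schematic, not the paper's architecture: the layer sizes, tanh activation, and softplus width parameterization are all assumptions, and the weights are random (untrained) just to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, d_param, d_hidden = 16, 2, 32  # illustrative sizes, not from the paper

# random (untrained) weights of a tiny two-layer MLP
W1 = rng.normal(scale=0.5, size=(d_param, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.5, size=(d_hidden, 2 * n_basis))
b2 = np.zeros(2 * n_basis)

def meta_basis(theta):
    """Map a PDE parameter vector theta -> (centers, widths) of n_basis Gaussians."""
    h = np.tanh(theta @ W1 + b1)
    out = h @ W2 + b2
    centers = out[:n_basis]                   # unconstrained basis centers
    widths = np.log1p(np.exp(out[n_basis:]))  # softplus keeps widths strictly positive
    return centers, widths

theta = np.array([1.0, 0.1])  # e.g. (advection speed, diffusivity) for one task
centers, widths = meta_basis(theta)
```

In KAPI this map is trained jointly with the predictor, so the basis geometry adapts across the parametric family rather than being fixed.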
Why it matters
This paper introduces an interpretable and efficient method for solving families of parametric linear PDEs. By adapting the approximation space through a meta-learned basis, it captures complex physics and significantly improves solution accuracy. This approach offers a valuable alternative to existing neural network-based PDE solvers.
Original Abstract
We propose a hybrid physics-informed framework for solving families of parametric linear partial differential equations (PDEs) by combining a meta-learned predictor with a least-squares corrector. The predictor, termed **KAPI** (Kernel-Adaptive Physics-Informed meta-learner), is a shallow task-conditioned model that maps query coordinates and PDE parameters to solution values while internally generating an interpretable, task-adaptive Gaussian basis geometry. A lightweight meta-network maps PDE parameters to basis centers, widths, and activity patterns, thereby learning how the approximation space should adapt across the parametric family. This predictor-generated geometry is transferred to a second-stage corrector, which augments it with a background basis and computes the final solution through a one-shot physics-informed Extreme Learning Machine (PIELM)-style least-squares solve. We evaluate the method on four linear PDE families spanning diffusion, transport, mixed advection–diffusion, and variable-speed transport. Across these cases, the predictor captures meaningful physics through localized and transport-aligned basis placement, while the corrector further improves accuracy, often by one or more orders of magnitude. Comparisons with parametric PINNs, physics-informed DeepONet, and uniform-grid PIELM correctors highlight the value of predictor-guided basis adaptation as an interpretable and efficient strategy for parametric PDE solving.