ArXiv TLDR

Di-BiLPS: Denoising induced Bidirectional Latent-PDE-Solver under Sparse Observations

arXiv: 2605.13790

Zhonghao Li, Chaoyu Liu, Qian Zhang

cs.LG, cs.AI

TLDR

Di-BiLPS is a neural framework that solves PDEs efficiently under extremely sparse data, outperforming state-of-the-art methods while enabling zero-shot super-resolution.

Key contributions

  • Unified neural framework (Di-BiLPS) for forward and inverse PDE problems with extremely sparse observations.
  • Combines VAE, latent diffusion, and contrastive learning for efficient inference in a compact latent space.
  • Introduces a PDE-informed denoising algorithm, improving inference efficiency and accuracy significantly.
  • Achieves SOTA performance on multiple PDE benchmarks with as little as 3% of the data, enabling zero-shot super-resolution.

Why it matters

This paper addresses a critical limitation in PDE solving: handling extremely sparse data, which is common in real-world scenarios. Di-BiLPS offers a robust and efficient solution, significantly advancing the applicability of neural PDE solvers. Its ability to perform zero-shot super-resolution further expands its utility.

Original Abstract

Partial differential equations (PDEs) are fundamental for modeling complex natural and physical phenomena. In many real-world applications, however, observational data are extremely sparse, which severely limits the applicability of both classical numerical solvers and existing neural approaches. While neural methods have shown promising results under moderately sparse observations, their inference efficiency at high resolutions is limited, and their accuracy degrades substantially in the extremely sparse regime. In this work, we propose Di-BiLPS, a unified neural framework that effectively handles both forward and inverse PDE problems under extremely sparse observations. Di-BiLPS combines a variational autoencoder to compress high-dimensional inputs into a compact latent space, a latent diffusion module to model uncertainty, and contrastive learning to align representations. Operating entirely in this latent space, the framework achieves efficient inference while retaining flexible input-output mapping. In addition, we introduce a PDE-informed denoising algorithm based on a variance-preserving diffusion process, which further improves inference efficiency. Extensive experiments on multiple PDE benchmarks demonstrate that Di-BiLPS consistently achieves SOTA performance under extremely sparse inputs (as low as 3%), while substantially reducing computational cost. Moreover, Di-BiLPS enables zero-shot super-resolution, as it allows predictions over continuous spatial-temporal domains.
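The abstract mentions "a PDE-informed denoising algorithm based on a variance-preserving diffusion process" only at a high level. Below is a minimal sketch of what that general idea usually looks like: a VP forward noising step, plus a denoising estimate nudged down the gradient of a PDE residual. The linear beta schedule, the guidance weight `w`, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vp_alpha_bar(betas, t):
    """Cumulative signal retention: alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    return np.prod(1.0 - betas[: t + 1])

def vp_noise(x0, betas, t, rng):
    """Forward VP process: x_t = sqrt(ab) * x_0 + sqrt(1 - ab) * eps."""
    ab = vp_alpha_bar(betas, t)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps, eps

def pde_guided_x0(xt, eps_hat, residual_grad, betas, t, w=0.1):
    """Estimate x_0 from x_t and predicted noise eps_hat, then nudge the
    estimate along the negative gradient of a PDE residual
    (the 'PDE-informed' part; w is an illustrative guidance weight)."""
    ab = vp_alpha_bar(betas, t)
    x0_hat = (xt - np.sqrt(1.0 - ab) * eps_hat) / np.sqrt(ab)
    return x0_hat - w * residual_grad(x0_hat)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 2e-2, 1000)  # linear schedule (an assumption)
x0 = np.ones((8, 8))                   # toy latent "field"
xt, eps = vp_noise(x0, betas, 500, rng)
```

As a sanity check on the algebra: feeding `pde_guided_x0` the true noise and a zero residual gradient recovers `x0` exactly, since the correction term vanishes and the VP reconstruction inverts the forward step.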

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.