ArXiv TLDR

Stochastic-Dimension Frozen Sampled Neural Network for High-Dimensional Gross-Pitaevskii Equations on Unbounded Domains

2604.09361

Zhangyong Liang

cs.LG

TLDR

SD-FSNN solves high-dimensional Gross-Pitaevskii equations on unbounded domains with dimension-independent cost and superior accuracy.

Key contributions

  • Achieves dimension-independent computational cost, avoiding exponential growth for high-dimensional GPEs.
  • Randomly samples and freezes the hidden weights/biases, enabling faster training and higher accuracy than gradient-based optimization.
  • Preserves GPE structure through a Gaussian ansatz, mass normalization, and energy conservation constraints.
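To make the contributions above concrete, here is a minimal sketch of the frozen-sampled idea: hidden weights and biases are drawn randomly once and never trained, a Gaussian factor enforces decay at infinity, and a projection step normalizes the discrete mass. All names, the `tanh` activation, and the least-squares target are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 4, 200, 500           # dimension, hidden features, sample points
W = rng.normal(size=(m, d))     # hidden weights: sampled once, then frozen
b = rng.uniform(-1, 1, size=m)  # hidden biases: sampled once, then frozen

def features(x):
    """Frozen random features with a Gaussian-weighted ansatz.

    The factor exp(-|x|^2 / 2) enforces exponential decay at infinity,
    mirroring the structure of solutions on unbounded domains.
    """
    gauss = np.exp(-0.5 * np.sum(x**2, axis=-1, keepdims=True))
    return gauss * np.tanh(x @ W.T + b)

def normalize(c, x, weights):
    """Projection layer: rescale the outer coefficients so the discrete
    mass integral of |psi|^2 (quadrature with the given weights) is 1."""
    psi = features(x) @ c
    mass = np.sum(weights * psi**2)
    return c / np.sqrt(mass)

# Only the outer coefficients c are fitted (here: a plain least-squares
# fit to a hypothetical target, standing in for the PDE residual solve).
x = rng.normal(size=(n, d))
target = np.exp(-np.sum(x**2, axis=1))
c, *_ = np.linalg.lstsq(features(x), target, rcond=None)
c = normalize(c, x, np.full(n, 1.0 / n))  # Monte Carlo quadrature weights
```

Because only the linear outer layer is solved for, training reduces to a single least-squares problem instead of an iterative gradient loop.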

Why it matters

This paper introduces SD-FSNN, a novel neural network approach that efficiently solves complex high-dimensional Gross-Pitaevskii equations. Its dimension-independent cost and superior accuracy overcome limitations of previous methods. This advancement is crucial for simulating quantum systems and other high-dimensional physics problems.

Original Abstract

In this paper, we propose a stochastic-dimension frozen sampled neural network (SD-FSNN) for solving a class of high-dimensional Gross-Pitaevskii equations (GPEs) on unbounded domains. SD-FSNN is unbiased across all dimensions, and its computational cost is independent of the dimension, avoiding the exponential growth in computational and memory costs associated with Hermite-basis discretizations. Additionally, we randomly sample the hidden weights and biases of the neural network, significantly outperforming iterative, gradient-based optimization methods in terms of training time and accuracy. Furthermore, we employ a space-time separation strategy, using adaptive ordinary differential equation (ODE) solvers to update the evolution coefficients and incorporate temporal causality. To preserve the structure of the GPEs, we integrate a Gaussian-weighted ansatz into the neural network to enforce exponential decay at infinity, embed a normalization projection layer for mass normalization, and add an energy conservation constraint to mitigate long-time numerical dissipation. Comparative experiments with existing methods demonstrate the superior performance of SD-FSNN across a range of spatial dimensions and interaction parameters. Compared to existing random-feature methods, SD-FSNN reduces the complexity from linear to dimension-independent. Additionally, SD-FSNN achieves better accuracy and faster training compared to general high-dimensional solvers, while focusing specifically on high-dimensional GPEs on unbounded domains.
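The space-time separation strategy in the abstract can be sketched as follows: spatial features stay frozen, and only the time-dependent coefficients c(t) are advanced with an adaptive ODE solver. The skew-symmetric generator `A` below is a hypothetical stand-in for the projected GPE operator; skew-symmetry is what makes the discrete mass |c(t)| conserved, illustrating the role of the mass and energy constraints.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
m = 6                          # number of frozen spatial features

# Hypothetical skew-symmetric generator for the coefficient dynamics;
# d/dt |c|^2 = 2 c.(A c) = 0, so the discrete mass is conserved.
M = rng.normal(size=(m, m))
A = M - M.T                    # skew-symmetric: A^T = -A

def rhs(t, c):
    """Evolution of the coefficients under the frozen spatial basis."""
    return A @ c

c0 = rng.normal(size=m)
c0 /= np.linalg.norm(c0)       # mass-normalized initial coefficients

# Adaptive time stepping (RK45) updates c(t), as in the paper's use of
# adaptive ODE solvers for the evolution coefficients.
sol = solve_ivp(rhs, (0.0, 5.0), c0, method="RK45",
                rtol=1e-9, atol=1e-12)
```

With tight tolerances the coefficient norm stays at 1 over the whole interval, which is the discrete analogue of mass conservation mitigating long-time numerical dissipation.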
