ArXiv TLDR

Learning to Emulate Chaos: Adversarial Optimal Transport Regularization

arXiv:2604.21097

Gabriel Melo, Leonardo Santiago, Peter Y. Lu

stat.ML cs.LG

TLDR

This paper introduces adversarial optimal transport regularization to train emulators for chaotic systems, significantly improving long-term statistical fidelity.

Key contributions

  • Proposes adversarial optimal transport objectives for training chaotic system emulators.
  • Jointly learns high-quality summary statistics and physically consistent emulators.
  • Analyzes Sinkhorn divergence (2-Wasserstein) and WGAN-style dual (1-Wasserstein) formulations.
  • Achieves significantly improved long-term statistical fidelity across diverse chaotic systems.
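To make the Sinkhorn divergence (2-Wasserstein) objective concrete, here is a minimal NumPy sketch of a debiased Sinkhorn divergence between two empirical point clouds, such as summary statistics computed from emulator rollouts versus reference trajectories. This is an illustrative implementation under simple assumptions (uniform sample weights, squared Euclidean cost, fixed iteration count), not the paper's code; the function name and the `eps` value are ours.

```python
import numpy as np

def sinkhorn_divergence(x, y, eps=0.1, n_iters=100):
    """Debiased Sinkhorn divergence between empirical samples
    x (n, d) and y (m, d):  S(x, y) = OT_eps(x, y)
    - 0.5 * OT_eps(x, x) - 0.5 * OT_eps(y, y)."""
    def ot_eps(a, b):
        # Squared-Euclidean cost matrix between the two point clouds.
        C = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        K = np.exp(-C / eps)                    # Gibbs kernel
        u = np.full(len(a), 1.0 / len(a))
        v = np.full(len(b), 1.0 / len(b))
        # Sinkhorn fixed-point iterations for uniform marginals.
        for _ in range(n_iters):
            u = (1.0 / len(a)) / (K @ v)
            v = (1.0 / len(b)) / (K.T @ u)
        P = u[:, None] * K * v[None, :]         # transport plan
        return (P * C).sum()                    # regularized OT cost
    return ot_eps(x, y) - 0.5 * ot_eps(x, x) - 0.5 * ot_eps(y, y)
```

In training, a differentiable version of this quantity (e.g. via an autodiff framework) would serve as the statistical regularizer added to the emulator's loss; the debiasing terms ensure the divergence vanishes when the two sample sets coincide.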

Why it matters

Accurately modeling chaotic systems is crucial for applications such as weather forecasting and power grid management. This work provides a robust method for training data-driven emulators, overcoming the limitations of traditional squared-error losses and improving long-term statistical predictions.

Original Abstract

Chaos arises in many complex dynamical systems, from weather to power grids, but is difficult to accurately model using data-driven emulators, including neural operator architectures. For chaotic systems, the inherent sensitivity to initial conditions makes exact long-term forecasts theoretically infeasible, meaning that traditional squared-error losses often fail when trained on noisy data. Recent work has focused on training emulators to match the statistical properties of chaotic attractors by introducing regularization based on handcrafted local features and summary statistics, as well as learned statistics extracted from a diverse dataset of trajectories. In this work, we propose a family of adversarial optimal transport objectives that jointly learn high-quality summary statistics and a physically consistent emulator. We theoretically analyze and experimentally validate a Sinkhorn divergence formulation (2-Wasserstein) and a WGAN-style dual formulation (1-Wasserstein). Our experiments across a variety of chaotic systems, including systems with high-dimensional chaotic attractors, show that emulators trained with our approach exhibit significantly improved long-term statistical fidelity.
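The WGAN-style dual formulation mentioned in the abstract estimates the 1-Wasserstein distance with a learned critic (an adversarially trained network constrained to be 1-Lipschitz). In one dimension with equal-sized samples, that distance has a closed form, which this small sketch uses to illustrate the quantity the critic approximates; the setup and function name are ours, not from the paper.

```python
import numpy as np

def wasserstein1_1d(x, y):
    """1-Wasserstein distance between two equal-sized 1-D
    empirical samples: the mean absolute difference of the
    sorted samples (the optimal 1-D transport pairs order
    statistics)."""
    assert len(x) == len(y)
    return np.mean(np.abs(np.sort(x) - np.sort(y)))
```

In higher dimensions no such closed form exists, which is why the paper's approach trains a critic network to maximize the Kantorovich-Rubinstein dual objective, E[f(real)] - E[f(emulated)], jointly with the emulator.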
