ArXiv TLDR

Stability Enhanced Gaussian Process Variational Autoencoders

arXiv:2604.09331

Carl R. Richardson, Jichen Zhang, Ethan King, Ján Drgoňa

cs.LG · eess.SY

TLDR

SEGP-VAE is introduced to train stable low-dimensional LTI systems from high-dimensional video data, using a parametrization that prevents numerical instability during training.

Key contributions

  • Introduces SEGP-VAE for training low-dimensional LTI systems from high-dimensional video data.
  • SEGP prior integrates LTI system definition for a combined probabilistic and physical model.
  • Restricts LTI parameters to semi-contracting systems for enhanced stability.
  • Enables unconstrained optimization and prevents numerical issues from unstable state matrices.
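The summary does not spell out the paper's exact parametrisation, but a standard way to obtain an unconstrained, complete parametrisation of semi-contracting state matrices is to build A = J − R from two arbitrary matrices, with J skew-symmetric and R positive semi-definite; any such A has eigenvalues with non-positive real parts. A minimal sketch of this idea (function name and construction are illustrative, not taken from the paper):

```python
import numpy as np

def semi_contracting_state_matrix(X, Y):
    """Map two unconstrained n x n matrices to a state matrix
    A = J - R, where J = X - X^T is skew-symmetric and
    R = Y Y^T is positive semi-definite. Since the symmetric
    part of A is -R <= 0, every eigenvalue of A has
    non-positive real part, i.e. the LTI system is
    semi-contracting for any choice of X and Y."""
    J = X - X.T   # skew-symmetric component
    R = Y @ Y.T   # positive semi-definite component
    return J - R

rng = np.random.default_rng(0)
n = 4
A = semi_contracting_state_matrix(rng.standard_normal((n, n)),
                                  rng.standard_normal((n, n)))
# The spectral abscissa is non-positive up to round-off,
# so a gradient-based optimizer can search over (X, Y) freely.
print(np.max(np.linalg.eigvals(A).real))
```

Because X and Y range over all of R^{n×n}, standard unconstrained optimizers can be applied directly while every iterate stays a stable (non-Hurwitz-free) state matrix.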

Why it matters

This paper addresses the challenge of training stable linear time-invariant systems from complex video data. By ensuring system stability through a novel parametrization, it enables robust and interpretable modeling of dynamic processes, crucial for real-world applications.

Original Abstract

A novel stability-enhanced Gaussian process variational autoencoder (SEGP-VAE) is proposed for indirectly training a low-dimensional linear time invariant (LTI) system, using high-dimensional video data. The mean and covariance function of the novel SEGP prior are derived from the definition of an LTI system, enabling the SEGP to capture the indirectly observed latent process using a combined probabilistic and interpretable physical model. The search space of LTI parameters is restricted to the set of semi-contracting systems via a complete and unconstrained parametrisation. As a result, the SEGP-VAE can be trained using unconstrained optimisation algorithms. Furthermore, this parametrisation prevents numerical issues caused by the presence of a non-Hurwitz state matrix. A case study applies SEGP-VAE to a dataset containing videos of spiralling particles. This highlights the benefits of the approach and the application-specific design choices that enabled accurate latent state predictions.
