Learning Multimodal Energy-Based Model with Multimodal Variational Auto-Encoder via MCMC Revision
Jiali Cui, Zhiqiang Lao, Heather Yu
TLDR
This paper introduces a framework for learning multimodal Energy-Based Models by combining VAEs with MCMC revisions for improved sampling and coherence.
Key contributions
- Tackles two obstacles at once: poorly mixing MCMC sampling when learning multimodal EBMs, and the restrictive unimodal Gaussian (or Laplace) approximations used by multimodal VAEs.
- Introduces a framework combining MLE updates with MCMC revisions in data and latent spaces.
- Generator provides strong initial states for EBM sampling; inference model aids latent posterior sampling.
- Demonstrates superior multimodal synthesis quality and coherence over existing baselines.
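The data-space half of this recipe can be illustrated with a toy sketch. Everything below is hypothetical and 1-D (the constants `MU`, the quadratic energy, and the `generator_init` stand-in are illustrative assumptions, not the paper's networks): a "generator" proposes an initial sample, and short-run Langevin dynamics then revises it under the EBM's energy.

```python
import math
import random

MU = 2.0  # assumed mode of the toy energy landscape (illustrative only)

def energy_grad(x):
    """Gradient of a toy quadratic energy E(x) = (x - MU)^2 / 2."""
    return x - MU

def generator_init():
    """Stand-in for the learned generator: a rough proposal near the mode,
    playing the role of the 'strong initial state' for EBM sampling."""
    return MU + random.gauss(0.0, 1.0)

def langevin_revise(x, steps=50, step_size=0.1):
    """Revise an initial state with Langevin dynamics:
    x <- x - (eta/2) * grad E(x) + sqrt(eta) * noise."""
    for _ in range(steps):
        noise = random.gauss(0.0, 1.0)
        x = x - 0.5 * step_size * energy_grad(x) + math.sqrt(step_size) * noise
    return x

random.seed(0)
x0 = generator_init()            # initial state from the generator
x_revised = langevin_revise(x0)  # EBM-side MCMC revision
```

The point of the sketch: because the chain starts near a mode instead of from noise, a short chain already samples from the energy's stationary distribution, which is the intuition behind interleaving generator proposals with MCMC refinement.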
Why it matters
This paper significantly advances multimodal generative modeling by overcoming the limitations of traditional MCMC sampling in EBMs and the restrictive assumptions of VAEs. It enables the creation of more realistic and coherent multimodal data, which is crucial for complex AI applications.
Original Abstract
Energy-based models (EBMs) are a flexible class of deep generative models and are well-suited to capture complex dependencies in multimodal data. However, learning multimodal EBM by maximum likelihood requires Markov Chain Monte Carlo (MCMC) sampling in the joint data space, where noise-initialized Langevin dynamics often mixes poorly and fails to discover coherent inter-modal relationships. Multimodal VAEs have made progress in capturing such inter-modal dependencies by introducing a shared latent generator and a joint inference model. However, both the shared latent generator and joint inference model are parameterized as unimodal Gaussian (or Laplace), which severely limits their ability to approximate the complex structure induced by multimodal data. In this work, we study the learning problem of the multimodal EBM, shared latent generator, and joint inference model. We present a learning framework that effectively interweaves their MLE updates with corresponding MCMC refinements in both the data and latent spaces. Specifically, the generator is learned to produce coherent multimodal samples that serve as strong initial states for EBM sampling, while the inference model is learned to provide informative latent initializations for generator posterior sampling. Together, these two models serve as complementary models that enable effective EBM sampling and learning, yielding realistic and coherent multimodal EBM samples. Extensive experiments demonstrate superior performance for multimodal synthesis quality and coherence compared to various baselines. We conduct various analyses and ablation studies to validate the effectiveness and scalability of the proposed multimodal framework.
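The latent-space half described in the abstract (the inference model initializing generator posterior sampling) admits a similarly toy sketch. Again, this is a hypothetical 1-D linear-Gaussian stand-in, not the paper's architecture: the "inference model" supplies an initial latent `z`, and Langevin dynamics revises it toward the generator's true posterior over `z` given an observation `x`.

```python
import math
import random

A = 1.0       # assumed generator weight: x = A*z + noise (illustrative)
SIGMA2 = 1.0  # assumed observation noise variance (illustrative)

def posterior_grad(z, x):
    """Gradient of log p(x|z) + log p(z) for the toy model,
    with prior z ~ N(0, 1) and likelihood x | z ~ N(A*z, SIGMA2)."""
    return A * (x - A * z) / SIGMA2 - z

def inference_init(x):
    """Stand-in for the learned inference model: a crude Gaussian guess
    at the posterior, serving as the latent initialization."""
    return random.gauss(0.5 * x, 1.0)

def langevin_posterior(z, x, steps=50, step_size=0.1):
    """Ascend the log-posterior with injected noise (Langevin dynamics)."""
    for _ in range(steps):
        noise = random.gauss(0.0, 1.0)
        z = z + 0.5 * step_size * posterior_grad(z, x) + math.sqrt(step_size) * noise
    return z

random.seed(0)
x_obs = 2.0                       # a toy observation
z0 = inference_init(x_obs)        # latent initialization from inference model
z_revised = langevin_posterior(z0, x_obs)  # posterior-side MCMC revision
```

For this linear-Gaussian toy the exact posterior is N(x/2, 1/2), so the revised chains concentrate there regardless of how crude the inference model's guess was, which mirrors the paper's claim that the two models are complementary: the initializer only needs to be informative, not exact.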