How to Train Your Latent Diffusion Language Model Jointly With the Latent Space
Viacheslav Meshchaninov, Alexander Shabalin, Egor Chimbulatov, Nikita Gushchin, Ilya Koziev + 2 more
TLDR
LDLM jointly trains the latent space and the diffusion model, enabling faster, higher-quality non-autoregressive text generation.
Key contributions
- Introduces LDLM, which jointly trains the latent encoder, diffusion model, and decoder.
- Proposes a training recipe with an MSE decoder loss, diffusion-to-encoder warmup, adaptive timestep sampling, and decoder-input noise (sketched after this list).
- Achieves better generation quality and a 2-13× speedup over prior diffusion models.
- Validates the approach on OpenWebText and LM1B benchmarks with strong ablation studies.
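To make the recipe concrete, below is a minimal PyTorch-style sketch of one joint training step. Everything here is an illustrative assumption rather than the paper's exact formulation: the module interfaces (`encoder`, `diffusion`, `decoder`), the toy linear noise schedule, uniform timestep sampling standing in for LDLM's adaptive sampling, and the reading of the "MSE decoder loss" as reconstructing the pre-trained LM's representations.

```python
import torch
import torch.nn.functional as F

def joint_training_step(encoder, diffusion, decoder, lm_states, step,
                        warmup_steps=10_000, decoder_noise_std=0.1):
    """One joint update over encoder, diffusion model, and decoder.

    `lm_states` stands for the pre-trained LM representations of a token
    sequence; all module interfaces here are hypothetical.
    """
    # Trainable encoder reshapes the pre-trained LM representations into
    # the latent space the diffusion model operates on.
    z0 = encoder(lm_states)                         # (B, L, D) clean latents

    # Diffusion-to-encoder warmup: gradually let gradients from the
    # diffusion loss flow back into the encoder.
    w = min(step / warmup_steps, 1.0)
    z0_diff = w * z0 + (1.0 - w) * z0.detach()

    # Sample timesteps; LDLM uses adaptive timestep sampling, simplified
    # here to uniform sampling.
    t = torch.rand(z0.size(0), device=z0.device)
    a = (1.0 - t).view(-1, 1, 1)                    # toy linear schedule

    # Corrupt the latents and train the diffusion model to recover them.
    noise = torch.randn_like(z0_diff)
    zt = a * z0_diff + (1.0 - a) * noise
    diff_loss = F.mse_loss(diffusion(zt, t), z0_diff)

    # Decoder-input noise: perturb the decoder's input so it stays robust
    # to imperfectly denoised latents at sampling time. The MSE decoder
    # loss is interpreted here as reconstructing the LM states under MSE.
    z_dec = z0 + decoder_noise_std * torch.randn_like(z0)
    dec_loss = F.mse_loss(decoder(z_dec), lm_states)

    return diff_loss + dec_loss
```

The warmup blend `w * z0 + (1 - w) * z0.detach()` is one simple way to ramp diffusion-loss gradients into the encoder early in training without altering the diffusion model's own updates.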
Why it matters
This paper shows that jointly learning the latent space and the diffusion model improves both the speed and quality of text generation. It advances non-autoregressive modeling, making latent diffusion more practical and competitive.
Original Abstract
Latent diffusion models offer an attractive alternative to discrete diffusion for non-autoregressive text generation by operating on continuous text representations and denoising entire sequences in parallel. The major challenge in latent diffusion modeling is constructing a suitable latent space. In this work, we present the Latent Diffusion Language Model (LDLM), in which the latent encoder, diffusion model, and decoder are trained jointly. LDLM builds its latent space by reshaping the representations of a pre-trained language model with a trainable encoder, yielding latents that are easy to both denoise and decode into tokens. We show that naive joint training produces a low-quality diffusion model, and propose a simple training recipe consisting of an MSE decoder loss, diffusion-to-encoder warmup, adaptive timestep sampling, and decoder-input noise. Ablations show that each component substantially impacts generation performance. On OpenWebText and LM1B, LDLM achieves better generation performance than existing discrete and continuous diffusion language models while being 2-13× faster, indicating that jointly learning the latent space is a key step toward making latent diffusion competitive for text generation.
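The reported speedup comes from the non-autoregressive setup: every position is denoised in parallel, so generation cost scales with the number of denoising steps rather than the sequence length. Below is a hedged sketch of what sampling could look like, reusing the toy linear schedule from the training sketch and a deterministic DDIM-style update; the paper's actual sampler and step count may differ.

```python
import torch

@torch.no_grad()
def sample(diffusion, decoder, seq_len, dim, steps=32, device="cpu"):
    # Start from pure Gaussian noise over the whole sequence.
    z = torch.randn(1, seq_len, dim, device=device)
    ts = torch.linspace(1.0, 0.0, steps + 1, device=device)
    for i in range(steps):
        a, a_next = 1.0 - ts[i], 1.0 - ts[i + 1]
        # Predict clean latents for every position at once.
        z0_pred = diffusion(z, ts[i].expand(1))
        # DDIM-style deterministic jump toward the next timestep.
        eps_pred = (z - a * z0_pred) / (1.0 - a)
        z = a_next * z0_pred + (1.0 - a_next) * eps_pred
    # A single parallel decoding pass maps latents back to tokens.
    return decoder(z).argmax(dim=-1)
```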