ArXiv TLDR

Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

arXiv:2604.26951

Gongbo Zhang, Wen Wang, Ye Tian, Li Yuan

cs.CL · cs.AI · cs.LG

TLDR

TIDE is the first framework for cross-architecture distillation of dLLMs, letting small student models learn from large teachers that differ in architecture, attention mechanism, and tokenizer.

Key contributions

  • TIDAL: Jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability (first sketch below).
  • CompDemo: Enriches the teacher's context via complementary mask splitting, improving its predictions under heavy masking (second sketch below).
  • Reverse CALM: A cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering (third sketch below).
  • Outperforms the baseline by an average of 1.53 points across eight benchmarks, with notable gains in code generation (HumanEval 48.78 vs. 32.3 for the AR baseline).
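
A minimal sketch of the TIDAL-style weighting, assuming a simple exponential form for both factors; the function name `tidal_weight`, the exponential shapes, and the `alpha`/`beta` hyperparameters are illustrative assumptions, not taken from the paper:

```python
import math

def tidal_weight(timestep: float, train_progress: float,
                 alpha: float = 4.0, beta: float = 2.0) -> float:
    """Illustrative TIDAL-style weight for the distillation loss.

    timestep:       mask/noise ratio in [0, 1]; near 1.0 the sequence is
                    heavily masked and the teacher is least reliable.
    train_progress: fraction of training completed, in [0, 1].
    alpha, beta:    assumed sharpness hyperparameters (not from the paper).
    """
    # Trust the teacher less at heavily masked (high-noise) timesteps.
    reliability = math.exp(-alpha * timestep)
    # One plausible schedule: rely on the teacher early, then shift
    # weight toward the ground-truth denoising loss as training proceeds.
    schedule = math.exp(-beta * train_progress)
    return reliability * schedule

# Blending with the student's own denoising objective might look like:
#   w = tidal_weight(t, step / total_steps)
#   loss = w * distill_loss + (1.0 - w) * denoising_loss
```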
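
CompDemo's mask splitting can be pictured as below: the masked positions are divided into two complementary halves, and the teacher predicts each half with the other half's tokens revealed as extra context, so it never faces the full mask ratio. The function name and the random 50/50 split are assumptions for illustration.

```python
import torch

def complementary_mask_split(mask: torch.Tensor, generator=None):
    """Split a Boolean mask (True = masked position) into two complementary
    halves; an illustrative reading of CompDemo's mask splitting."""
    coin = torch.rand(mask.shape, generator=generator) < 0.5
    half_a = mask & coin    # predicted while half_b's tokens are revealed
    half_b = mask & ~coin   # predicted while half_a's tokens are revealed
    return half_a, half_b

# Usage sketch: run the teacher twice, each time un-masking one half,
# then merge its predictions over half_a and half_b. Each pass sees
# roughly half the original mask ratio, easing the heavy-masking
# regime the paper targets.
```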
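
For Reverse CALM, a hedged sketch of the two ingredients the abstract names: token log-probs are pooled into chunk-level log-likelihoods over text spans both tokenizers share (so teacher and student become comparable despite different tokenizations), and the matching direction is inverted into a reverse-KL-style objective. The chunking scheme, the softmax normalization over chunks, and both function names are assumptions; the paper's dual-end noise filtering is not reproduced here.

```python
import torch
import torch.nn.functional as F

def chunk_loglik(token_logprobs: torch.Tensor, chunk_ids: torch.Tensor,
                 num_chunks: int) -> torch.Tensor:
    """Pool per-token log-probs into per-chunk log-likelihoods.

    token_logprobs: (T,) log-prob of each realized token under one model.
    chunk_ids:      (T,) long tensor mapping each token to a tokenizer-
                    agnostic text chunk (e.g. a word) shared by both models.
    """
    out = torch.zeros(num_chunks, dtype=token_logprobs.dtype)
    return out.index_add(0, chunk_ids, token_logprobs)

def reverse_calm_loss(student_ll: torch.Tensor,
                      teacher_ll: torch.Tensor) -> torch.Tensor:
    """Reverse-direction matching over chunk-level likelihoods: the KL
    expectation is taken under the *student's* distribution, so each
    term is weighted by the student's own chunk probability and chunks
    the student assigns negligible mass contribute negligible gradient."""
    log_p_s = F.log_softmax(student_ll, dim=0)
    log_p_t = F.log_softmax(teacher_ll.detach(), dim=0)
    return (log_p_s.exp() * (log_p_s - log_p_t)).sum()
```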

Why it matters

State-of-the-art dLLMs require billions of parameters, but existing distillation methods work only within a single architecture. TIDE enables cross-architecture knowledge transfer, producing much smaller, efficient dLLMs that retain strong performance, which makes advanced dLLMs more accessible and easier to deploy.

Original Abstract

Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines outperforms the baseline by an average of 1.53 points across eight benchmarks, yielding notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.
