Accelerating Speculative Decoding with Block Diffusion Draft Trees
TLDR
DDTree accelerates speculative decoding by constructing a draft tree from a block diffusion drafter's per-position distributions, improving acceptance length and decoding efficiency over single-trajectory verification.
Key contributions
- Introduces DDTree, building a draft tree directly from block diffusion drafter distributions.
- Employs a best-first heap algorithm to select continuations most likely to match the target model.
- Verifies the entire draft tree efficiently in one target model pass using an ancestor-only attention mask.
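The best-first selection under a fixed node budget can be sketched with a max-heap keyed on cumulative draft log-probability, which serves as the surrogate for how likely a continuation is to match the target model. This is an illustrative sketch, not the paper's implementation; the function name, data layout, and scoring details are assumptions.

```python
import heapq
import math

def build_draft_tree(topk_probs, node_budget):
    """Best-first draft-tree construction under a node budget (sketch).

    topk_probs: list over block positions; each entry is a list of
        (token_id, prob) pairs from the block diffusion drafter.
    node_budget: maximum number of tree nodes to expand.
    Returns a list of nodes as (parent_index, depth, token_id, cum_logprob),
    where parent_index == -1 means the node attaches to the verified prefix.
    """
    nodes = []
    heap = []      # max-heap via negated cumulative log-probability
    counter = 0    # tie-breaker so heapq never compares past it

    def push_children(parent, depth, cum_logp):
        nonlocal counter
        if depth >= len(topk_probs):
            return
        for token, p in topk_probs[depth]:
            heapq.heappush(heap, (-(cum_logp + math.log(p)), counter, parent, depth, token))
            counter += 1

    push_children(parent=-1, depth=0, cum_logp=0.0)
    while heap and len(nodes) < node_budget:
        neg_logp, _, parent, depth, token = heapq.heappop(heap)
        nodes.append((parent, depth, token, -neg_logp))
        # The new node's children compete in the same global heap, so the
        # budget is spent on the globally most promising continuations.
        push_children(parent=len(nodes) - 1, depth=depth + 1, cum_logp=-neg_logp)
    return nodes
```

Because every frontier candidate sits in one global heap, the budget naturally shifts between going deeper along a high-confidence path and branching where the drafter is uncertain.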
Why it matters
Vanilla DFlash verifies only a single drafted trajectory per round, which limits its acceptance length. DDTree addresses this by building a draft tree from the drafter's per-position distributions and verifying the whole tree in one target-model pass, significantly improving speculative decoding efficiency. This makes inference with large language models faster and more practical.
Original Abstract
Speculative decoding accelerates autoregressive language models by using a lightweight drafter to propose multiple future tokens, which the target model then verifies in parallel. DFlash shows that a block diffusion drafter can generate an entire draft block in a single forward pass and achieve state-of-the-art speculative decoding performance, outperforming strong autoregressive drafters such as EAGLE-3. Vanilla DFlash, however, still verifies only a single drafted trajectory per round, potentially limiting its acceptance length. We introduce DDTree (Diffusion Draft Tree), a method that constructs a draft tree directly from the per-position distributions of a block diffusion drafter. Under a fixed node budget, DDTree uses a simple best-first heap algorithm to select the continuations that are most likely to match the target model according to a surrogate defined by the draft model's output. The resulting tree is verified efficiently in a single target model forward pass using an ancestor-only attention mask. Because DDTree builds on DFlash, a state-of-the-art draft model, its gains in acceptance length place it among the leading approaches to speculative decoding.
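The ancestor-only attention mask that enables verifying the whole tree in one target-model forward pass can be sketched as follows: each drafted node may attend only to itself and its ancestors in the tree (plus the verified prefix, handled separately), so sibling branches never see each other. The function name and representation are illustrative assumptions, not the paper's code.

```python
def ancestor_mask(parents):
    """Build an ancestor-only attention mask for draft-tree verification (sketch).

    parents: list where parents[i] is the index of node i's parent in the
        draft tree, or -1 for a node attached directly to the verified prefix.
    Returns mask where mask[i][j] is True iff node i may attend to node j,
    i.e. j == i or j is an ancestor of i.
    """
    n = len(parents)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        j = i
        while j != -1:          # walk up the tree to the root
            mask[i][j] = True
            j = parents[j]
    return mask
```

With this mask, every root-to-leaf path in the tree is verified as if it were its own sequence, yet the target model runs only once over the flattened tree.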