ArXiv TLDR

DMGD: Train-Free Dataset Distillation with Semantic-Distribution Matching in Diffusion Models

arXiv:2605.03877

Qichao Wang, Yunhong Lu, Hengyuan Cao, Junyi Zhang, Min Zhang

cs.CV · cs.AI

TLDR

DMGD (Dual Matching Guided Diffusion) is a training-free, diffusion-based dataset distillation framework that combines semantic matching with optimal-transport distribution matching, outperforming SOTA methods that require fine-tuning.

Key contributions

  • Introduces DMGD, a novel training-free diffusion framework for efficient dataset distillation.
  • Establishes Semantic Matching via conditional likelihood optimization, eliminating the need for auxiliary classifiers (see the sketch after this list).
  • Employs Optimal Transport-based Distribution Matching to align synthetic data with the target distribution.
  • Outperforms SOTA methods that require fine-tuning by up to 5.4% average accuracy on ImageNet benchmarks.
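
The summary does not give the guidance formula, but "semantic matching via conditional likelihood, without an auxiliary classifier" is conventionally realized through the score identity ∇_x log p(y | x_t) ∝ ε_θ(x_t, t, y) − ε_θ(x_t, t). Below is a minimal PyTorch sketch of that classifier-free-style guidance; the function name and signature are illustrative assumptions, not the authors' API.

```python
import torch

def semantic_guided_eps(eps_model, x_t, t, y, w=3.0):
    """Classifier-free-style semantic guidance (illustrative sketch).

    grad_x log p(y | x_t) is proportional to the difference between the
    conditional and unconditional noise predictions, so the class signal
    comes from the diffusion model itself, with no extra classifier.
    """
    eps_cond = eps_model(x_t, t, y)       # eps_theta(x_t, t, y)
    eps_uncond = eps_model(x_t, t, None)  # eps_theta(x_t, t), label dropped
    # Steer the denoising direction toward class y; w scales the guidance.
    return eps_uncond + w * (eps_cond - eps_uncond)
```

The dynamic guidance mechanism the abstract describes would presumably vary the guidance weight w over timesteps to trade semantic alignment against sample diversity.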

Why it matters

This paper makes diffusion-based dataset distillation more efficient by removing the fine-tuning stage that prior methods require. Its dual matching guidance improves both the quality and the diversity of the synthetic data, achieving SOTA results without extra training and making large-scale dataset distillation more practical and accessible.

Original Abstract

Dataset distillation enables efficient training by distilling the information of large-scale datasets into significantly smaller synthetic datasets. Diffusion-based paradigms have emerged in recent years, offering novel perspectives for dataset distillation. However, they typically necessitate additional fine-tuning stages, and effective guidance mechanisms remain underexplored. To address these limitations, we rethink diffusion-based dataset distillation and propose a Dual Matching Guided Diffusion (DMGD) framework, centered on efficient training-free guidance. We first establish Semantic Matching via conditional likelihood optimization, eliminating the need for auxiliary classifiers. Furthermore, we propose a dynamic guidance mechanism that enhances the diversity of synthetic data while maintaining semantic alignment. Simultaneously, we introduce an optimal transport (OT)-based Distribution Matching approach to further align with the target distribution structure. To ensure efficiency, we develop two enhanced strategies for the diffusion-based framework: Distribution Approximate Matching and Greedy Progressive Matching. These strategies enable effective distribution matching guidance with minimal computational overhead. Experimental results on ImageNet-Woof, ImageNet-Nette, and ImageNet-1K demonstrate that our training-free approach achieves significant improvements, outperforming state-of-the-art (SOTA) methods requiring additional fine-tuning by average accuracy gains of 2.1%, 5.4%, and 2.4%, respectively.
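
The paper's exact OT objective is not given here, but a standard entropic-regularized (Sinkhorn) formulation between synthetic and real feature batches illustrates what Distribution Matching could look like. Everything below, including the squared-Euclidean cost, uniform marginals, and function name, is an assumption for illustration, not the authors' implementation.

```python
import torch

def sinkhorn_ot_cost(feat_syn, feat_real, eps=0.05, n_iters=50):
    """Entropic-regularized OT (Sinkhorn) cost between two feature batches.

    Illustrative stand-in for an OT-based distribution matching objective:
    aligning synthetic features with the real-data feature distribution.
    Log-domain stabilization is omitted for brevity.
    """
    n, m = feat_syn.size(0), feat_real.size(0)
    cost = torch.cdist(feat_syn, feat_real) ** 2       # pairwise cost matrix
    K = torch.exp(-cost / eps)                         # Gibbs kernel
    a = torch.full((n,), 1.0 / n, device=cost.device)  # uniform marginals
    b = torch.full((m,), 1.0 / m, device=cost.device)
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):                           # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = torch.diag(u) @ K @ torch.diag(v)           # transport plan
    return (plan * cost).sum()                         # OT cost <plan, cost>
```

In a guidance loop, the gradient of this cost with respect to the synthetic features (and hence the latents) could steer sampling toward the target distribution; the Distribution Approximate Matching and Greedy Progressive Matching strategies the abstract mentions presumably serve to keep this step computationally cheap.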
