ArXiv TLDR

Generative Refinement Networks for Visual Synthesis

arXiv: 2604.13030

Jian Han, Jinlai Liu, Jiahuan Wang, Bingyue Peng, Zehuan Yuan

cs.CV

TLDR

Generative Refinement Networks (GRN) address the limitations of both diffusion and autoregressive models, pairing near-lossless quantization with a global refinement mechanism for efficient, high-quality visual synthesis.

Key contributions

  • Introduces Generative Refinement Networks (GRN) for efficient, high-quality visual synthesis.
  • Develops Hierarchical Binary Quantization (HBQ) for near-lossless discrete tokenization.
  • Implements a global refinement mechanism that progressively perfects and corrects the generated image, like a human artist.
  • Achieves state-of-the-art results on ImageNet, text-to-image, and text-to-video generation.
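The summary names Hierarchical Binary Quantization (HBQ) but not its mechanics. As a rough intuition for how a hierarchical binary code can approach lossless reconstruction, here is a minimal sketch of residual binarization with geometrically shrinking step sizes; the function name, step schedule, and all details are assumptions for illustration, not the paper's actual HBQ formulation.

```python
def hierarchical_binary_quantize(z, num_levels=8):
    # Hypothetical sketch (not from the paper): binarize the residual at
    # each level, halving the step size, so the reconstruction error
    # shrinks roughly geometrically with the number of levels.
    residual = list(z)
    recon = [0.0] * len(z)
    codes = []
    scale = sum(abs(v) for v in z) / len(z)  # initial step size (assumption)
    for _ in range(num_levels):
        bits = [1.0 if r >= 0 else -1.0 for r in residual]  # binary code
        for i, b in enumerate(bits):
            recon[i] += scale * b
            residual[i] -= scale * b
        codes.append(bits)
        scale /= 2.0
    return codes, recon

z = [0.9, -1.3, 0.2, 2.1]
codes, z_hat = hierarchical_binary_quantize(z, num_levels=12)
err = max(abs(a - b) for a, b in zip(z, z_hat))
```

With 12 levels the worst-case error is bounded by the final step size, which is why deep binary hierarchies can be "near-lossless" while each level stores only one bit per dimension.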

Why it matters

GRN offers a new paradigm for visual synthesis, combining the complexity-aware efficiency of AR models with the reconstruction quality of continuous representations. Its quantization and refinement mechanisms set new ImageNet records (0.56 rFID for reconstruction, 1.81 gFID for generation), paving the way for more adaptive, high-fidelity generative AI.

Original Abstract

While diffusion models dominate the field of visual generation, they are computationally inefficient, applying a uniform computational effort regardless of different complexity. In contrast, autoregressive (AR) models are inherently complexity-aware, as evidenced by their variable likelihoods, but are often hindered by lossy discrete tokenization and error accumulation. In this work, we introduce Generative Refinement Networks (GRN), a next-generation visual synthesis paradigm to address these issues. At its core, GRN addresses the discrete tokenization bottleneck through a theoretically near-lossless Hierarchical Binary Quantization (HBQ), achieving a reconstruction quality comparable to continuous counterparts. Built upon HBQ's latent space, GRN fundamentally upgrades AR generation with a global refinement mechanism that progressively perfects and corrects artworks -- like a human artist painting. Besides, GRN integrates an entropy-guided sampling strategy, enabling complexity-aware, adaptive-step generation without compromising visual quality. On the ImageNet benchmark, GRN establishes new records in image reconstruction (0.56 rFID) and class-conditional image generation (1.81 gFID). We also scale GRN to more challenging text-to-image and text-to-video generation, delivering superior performance on an equivalent scale. We release all models and code to foster further research on GRN.
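The abstract describes entropy-guided, adaptive-step generation: spending fewer refinement steps on simple content. One plausible reading is early stopping based on the model's predictive uncertainty, sketched below; the stopping criterion, threshold, and data shapes are all assumptions for illustration, not the paper's actual strategy.

```python
import math

def token_entropy(probs):
    # Shannon entropy (bits) of one token's predictive distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def adaptive_refinement_steps(prob_history, threshold=0.1, max_steps=10):
    # Hypothetical sketch: keep refining while the model is uncertain
    # (high mean token entropy); stop once mean entropy falls below a
    # threshold, so simple inputs take fewer steps than complex ones.
    for step, token_probs in enumerate(prob_history, start=1):
        mean_h = sum(token_entropy(p) for p in token_probs) / len(token_probs)
        if mean_h < threshold or step >= max_steps:
            return step
    return len(prob_history)

# Simulated per-step token distributions that sharpen as refinement proceeds.
history = [
    [[0.5, 0.5], [0.6, 0.4]],        # uncertain: ~1 bit of entropy
    [[0.9, 0.1], [0.85, 0.15]],      # sharper
    [[0.99, 0.01], [0.995, 0.005]],  # nearly deterministic
]
steps = adaptive_refinement_steps(history, threshold=0.1)  # stops at step 3
```

Under this reading, "complexity-aware" falls out naturally: distributions over easy regions sharpen quickly and trigger the threshold early, while hard regions keep refining up to the step budget.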
