ArXiv TLDR

Mutual Forcing: Dual-Mode Self-Evolution for Fast Autoregressive Audio-Video Character Generation

arXiv:2604.25819

Yupeng Zhou, Lianghua Huang, Zhifan Wu, Jiabao Wang, Yupeng Shi + 5 more

cs.CV · cs.SD

TLDR

Mutual Forcing achieves fast, high-quality autoregressive audio-video character generation by coupling few-step and multi-step sampling modes in a single weight-shared model, enabling teacher-free self-distillation.

Key contributions

  • Introduces Mutual Forcing for fast, synchronized autoregressive audio-video character generation.
  • Employs a dual-mode (few-step & multi-step) self-evolution within a single weight-shared model.
  • Enables self-distillation and improves training-inference consistency without a teacher model.
  • Matches or surpasses baselines that require around 50 sampling steps while using only 4-8 steps.

Why it matters

This paper offers a novel approach to fast, high-quality audio-video generation, directly addressing limitations of existing distillation pipelines. By removing the need for a separate teacher model and enabling self-improvement, it significantly reduces training complexity and inference steps. This advancement makes real-time, synchronized character generation more efficient and accessible.

Original Abstract

In this work, we propose Mutual Forcing, a framework for fast autoregressive audio-video generation with long-horizon audio-video synchronization. Our approach addresses two key challenges: joint audio-video modeling and fast autoregressive generation. To ease joint audio-video optimization, we adopt a two-stage training strategy: we first train uni-modal generators and then couple them into a unified audio-video model for joint training on paired data. For streaming generation, we ask whether a native fast causal audio-video model can be trained directly, instead of following existing streaming distillation pipelines that typically train a bidirectional model first and then convert it into a causal generator through multiple distillation stages. Our answer is Mutual Forcing, which builds directly on a native autoregressive model and integrates few-step and multi-step generation within a single weight-shared model, enabling self-distillation and improved training-inference consistency. The multi-step mode improves the few-step mode via self-distillation, while the few-step mode generates historical context during training to improve training-inference consistency; because the two modes share parameters, these two effects reinforce each other within a single model. Compared with prior approaches such as Self-Forcing, Mutual Forcing removes the need for an additional bidirectional teacher model, supports more flexible training sequence lengths, reduces training overhead, and allows the model to improve directly from real paired data rather than a fixed teacher. Experiments show that Mutual Forcing matches or surpasses strong baselines that require around 50 sampling steps while using only 4 to 8 steps, demonstrating substantial advantages in both efficiency and quality. The project page is available at https://mutualforcing.github.io.
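The dual-mode mechanism described in the abstract can be sketched with a deliberately tiny toy loop. This is a minimal illustration, not the paper's implementation: the names (`denoise`, `mutual_forcing_step`), the update rule, and the scalar "model" are all assumptions made for clarity. It shows the two structural ideas: a multi-step rollout of the *same* weights acts as the distillation target for the few-step rollout (no separate teacher model), and the few-step rollout, which is what inference actually produces, is reused as the historical context for the next chunk (training-inference consistency).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared weights used by BOTH sampling modes -- a stand-in for the
# single weight-shared audio-video model in the paper.
theta = rng.normal(size=4)

def denoise(x, steps, w):
    """Iteratively refine x toward the model's clean estimate.
    More steps -> a more accurate (but slower) rollout."""
    target = np.tanh(w)  # stand-in for the model's prediction
    for _ in range(steps):
        x = x + 0.5 * (target - x)
    return x

def mutual_forcing_step(w, context, few=4, many=50):
    """One toy training step: the multi-step rollout of the SAME weights
    serves as the self-distillation target for the few-step rollout, and
    the few-step output (what inference actually sees) is returned to be
    used as the historical context for the next chunk."""
    noise = rng.normal(size=w.shape)
    fast = denoise(noise + context, few, w)   # few-step "student" mode
    slow = denoise(noise + context, many, w)  # multi-step "teacher" mode
    distill_loss = float(np.mean((fast - slow) ** 2))
    return distill_loss, fast                 # fast output -> next context

context = np.zeros_like(theta)
losses = []
for _ in range(5):
    loss, context = mutual_forcing_step(theta, context)
    losses.append(loss)
```

Because both rollouts read the same `theta`, any improvement to the shared weights tightens both modes at once, which is the mutual-reinforcement point the abstract makes; a real implementation would of course backpropagate `distill_loss` through the few-step path rather than merely measure it.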

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.