Sparkle: Realizing Lively Instruction-Guided Video Background Replacement via Decoupled Guidance
Ziyun Zeng, Yiqi Lin, Guoqiang Liang, Mike Zheng Shou
TLDR
Sparkle introduces a new dataset and benchmark for instruction-guided video background replacement, substantially improving model performance on this underexplored editing task.
Key contributions
- Developed a scalable pipeline for decoupled foreground/background guidance generation.
- Introduced Sparkle, a ~140K video dataset for instruction-guided background replacement.
- Created Sparkle-Bench, the largest evaluation benchmark for video background replacement.
- Achieved substantial performance gains over baselines with a model trained on Sparkle.
Why it matters
Current video background replacement models produce static, unnatural results due to data scarcity. Sparkle introduces a high-quality dataset and benchmark, enabling significantly more realistic instruction-guided video editing for creative applications.
Original Abstract
In recent years, open-source efforts like Senorita-2M have propelled video editing toward natural language instruction. However, current publicly available datasets predominantly focus on local editing or style transfer, which largely preserve the original scene structure and are easier to scale. In contrast, Background Replacement, a task central to creative applications such as film production and advertising, requires synthesizing entirely new, temporally consistent scenes while maintaining accurate foreground-background interactions, making large-scale data generation significantly more challenging. Consequently, this complex task remains largely underexplored due to a scarcity of high-quality training data. This gap is evident in poorly performing state-of-the-art models, e.g., Kiwi-Edit, because the primary open-source dataset that contains this task, i.e., OpenVE-3M, frequently produces static, unnatural backgrounds. In this paper, we trace this quality degradation to a lack of precise background guidance during data synthesis. Accordingly, we design a scalable pipeline that generates foreground and background guidance in a decoupled manner with strict quality filtering. Building on this pipeline, we introduce Sparkle, a dataset of ~140K video pairs spanning five common background-change themes, alongside Sparkle-Bench, the largest evaluation benchmark tailored for background replacement to date. Experiments demonstrate that our dataset and the model trained on it achieve substantially better performance than all existing baselines on both OpenVE-Bench and Sparkle-Bench. Our proposed dataset, benchmark, and model are fully open-sourced at https://showlab.github.io/Sparkle/.
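The abstract's core idea, generating foreground and background guidance in decoupled stages and keeping only pairs that pass strict quality filtering, can be sketched roughly as below. All function names, score metrics, and thresholds here are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    video_id: str
    fg_score: float  # foreground-consistency score (hypothetical metric)
    bg_score: float  # background-quality score (hypothetical metric)

def generate_fg_guidance(video_id: str) -> float:
    """Stub: would segment/matte the subject and score its temporal consistency."""
    return 0.9

def generate_bg_guidance(video_id: str, instruction: str) -> float:
    """Stub: would synthesize a new background from the instruction and score it."""
    return 0.8

def build_dataset(video_ids, instruction, fg_thresh=0.85, bg_thresh=0.75):
    """Decoupled pipeline sketch: score each stage separately, then filter."""
    kept = []
    for vid in video_ids:
        fg = generate_fg_guidance(vid)               # stage 1: foreground guidance
        bg = generate_bg_guidance(vid, instruction)  # stage 2: background guidance
        if fg >= fg_thresh and bg >= bg_thresh:      # strict quality filtering
            kept.append(Sample(vid, fg, bg))
    return kept
```

Because each stage is scored independently, a low-quality synthesized background can be rejected without discarding an otherwise good foreground extraction, which is the presumed benefit of decoupling over jointly generating the edited video in one pass.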