ReCap: Lightweight Referential Grounding for Coherent Story Visualization
Aditya Arora, Akshita Gupta, Pau Rodriguez, Marcus Rohrbach
TLDR
ReCap is a lightweight framework that improves character consistency in story visualization by selectively conditioning on previous frames, outperforming the prior state of the art.
Key contributions
- Introduces ReCap, a lightweight framework for consistent story visualization without modifying the diffusion backbone.
- CORE module selectively conditions on previous frames for pronoun-referred characters, adding only 149K parameters.
- SemDrift corrects semantic identity drift during training by aligning with DINOv3 embeddings, with zero inference cost.
- Achieves new state-of-the-art character consistency on FlintstonesSV and PororoSV benchmarks.
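The selective gating behind CORE can be illustrated with a minimal sketch: conditioning on the preceding frame is activated only when a caption refers to a character anaphorically (by pronoun). The pronoun list and function names below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of CORE-style gating: activate previous-frame conditioning
# only for captions that refer to a character by pronoun.
# The pronoun set and planning logic here are assumptions for illustration.

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them",
            "his", "hers", "its", "their", "theirs"}

def needs_frame_reference(caption: str) -> bool:
    """Return True if the caption contains an anaphoric (pronoun) reference."""
    tokens = caption.lower().replace(",", " ").replace(".", " ").split()
    return any(tok in PRONOUNS for tok in tokens)

def conditioning_plan(captions):
    """For each frame, decide whether to condition on the preceding frame.

    Returns a list of (frame_index, reference_index or None); the first
    frame never has a reference, and frames with explicit character names
    rely on text alone.
    """
    plan = []
    for i, cap in enumerate(captions):
        if i > 0 and needs_frame_reference(cap):
            plan.append((i, i - 1))   # propagate identity from previous frame
        else:
            plan.append((i, None))    # caption alone anchors identity
    return plan
```

For a two-frame story ["Fred walks into the kitchen.", "He opens the fridge."], this plan conditions only the second frame on the first, which is the selectivity that keeps the added parameter count small.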
Why it matters
This paper tackles the critical challenge of maintaining visual consistency in story visualization without substantial computational overhead. ReCap's lightweight approach significantly improves character identity and visual fidelity, making high-quality, coherent narrative generation more efficient and practical for broader applications.
Original Abstract
Story Visualization aims to generate a sequence of images that faithfully depicts a textual narrative while preserving character identity, spatial configuration, and stylistic coherence as the narrative unfolds. Maintaining such cross-frame consistency has traditionally relied on explicit memory banks, architectural expansion, or auxiliary language models, resulting in substantial parameter growth and inference overhead. We introduce ReCap, a lightweight consistency framework that improves character stability and visual fidelity without modifying the base diffusion backbone. ReCap's CORE (COnditional frame REferencing) module treats anaphors, in our case pronouns, as visual anchors, activating only when characters are referred to by a pronoun and conditioning on the preceding frame to propagate visual identity. This selective design avoids unconditional cross-frame conditioning and introduces only 149K additional parameters, a fraction of the cost of memory-bank and LLM-augmented approaches. To further stabilize identity, we incorporate SemDrift (Guided Semantic Drift Correction), applied only during training. When text is vague or referential, the denoiser lacks a visual anchor for identity-defining attributes, causing character appearance to drift across frames. SemDrift corrects this by aligning denoiser representations with pretrained DINOv3 visual embeddings, enforcing semantic identity stability at zero inference cost. ReCap outperforms the previous state of the art, StoryGPT-V, on the two main benchmarks for story visualization by 2.63% Character-Accuracy on FlintstonesSV and by 5.65% on PororoSV, establishing new state-of-the-art character consistency on both benchmarks. Furthermore, we extend story visualization to human-centric narratives derived from real films, demonstrating the capability of ReCap beyond stylized cartoon domains.
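The SemDrift idea of aligning denoiser representations with frozen DINOv3 embeddings can be sketched as a simple cosine-alignment loss. The projection to a shared dimension and the exact loss form are assumptions; the paper's formulation may differ.

```python
import numpy as np

# Hedged sketch of a SemDrift-style training objective: penalize the gap
# between per-frame denoiser features and frozen DINOv3 visual embeddings.
# Assumes both feature sets are already projected to the same dimension
# (the learned projection is not shown); the loss form is illustrative.

def semdrift_loss(denoiser_feats: np.ndarray, dino_feats: np.ndarray) -> float:
    """1 - mean cosine similarity over frames.

    denoiser_feats, dino_feats: arrays of shape (num_frames, dim).
    A value of 0 means the denoiser's semantics match the visual
    embeddings exactly; larger values indicate identity drift.
    """
    a = denoiser_feats / np.linalg.norm(denoiser_feats, axis=1, keepdims=True)
    b = dino_feats / np.linalg.norm(dino_feats, axis=1, keepdims=True)
    cos = (a * b).sum(axis=1)
    return float(1.0 - cos.mean())
```

Because the DINOv3 encoder is frozen and this term is used only during training, dropping it at inference time leaves the sampling path unchanged, which is how the method achieves zero inference cost.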