VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization
Andrei Atanov, Jesse Allardice, Roman Bachmann, Oğuzhan Fatih Kar, R Devon Hjelm, et al.
TLDR
VideoFlexTok introduces a flexible-length, coarse-to-fine video tokenizer that enables efficient long video generation and lets smaller models match larger ones on generative tasks.
Key contributions
- Introduces VideoFlexTok, a coarse-to-fine video tokenizer that produces variable-length token sequences.
- Emergently captures abstract information (semantics, motion) first, then adds fine-grained details.
- Enables more efficient training, achieving comparable generation quality with a 5x smaller model (1.1B vs 5.2B parameters).
- Facilitates long video generation using 8x fewer tokens than 3D grid methods, reducing computational cost.
Why it matters
Standard 3D-grid video tokenization forces downstream generative models to predict every low-level detail regardless of a video's complexity, which is inefficient. VideoFlexTok's coarse-to-fine, variable-length representation significantly reduces computational demands and model size, making long video generation more feasible and improving accessibility and scalability for video AI research.
Original Abstract
Visual tokenizers map high-dimensional raw pixels into a compressed representation for downstream modeling. Beyond compression, tokenizers dictate what information is preserved and how it is organized. A de facto standard approach to video tokenization is to represent a video as a spatiotemporal 3D grid of tokens, each capturing the corresponding local information in the original signal. This requires the downstream model that consumes the tokens, e.g., a text-to-video model, to learn to predict all low-level details "pixel-by-pixel" irrespective of the video's inherent complexity, leading to high learning complexity. We present VideoFlexTok, which represents videos with a variable-length sequence of tokens structured in a coarse-to-fine manner -- where the first tokens (emergently) capture abstract information, such as semantics and motion, and later tokens add fine-grained details. The generative flow decoder enables realistic video reconstructions from any token count. This representation structure allows adapting the token count according to downstream needs and encoding videos longer than the baselines with the same budget. We evaluate VideoFlexTok on class- and text-to-video generative tasks and show that it leads to more efficient training compared to 3D grid tokens, e.g., achieving comparable generation quality (gFVD and ViCLIP Score) with a 5x smaller model (1.1B vs 5.2B). Finally, we demonstrate how VideoFlexTok can enable long video generation without prohibitive computational cost by training a text-to-video model on 10-second 81-frame videos with only 672 tokens, 8x fewer than a comparable 3D grid tokenizer.
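The core idea in the abstract, a token sequence whose prefixes form progressively coarser but still valid representations, can be illustrated with a minimal sketch. This is not the authors' code: the toy `encode` function, the nested (prefix) dropout used to induce the coarse-to-fine ordering, and the token dimensionality are all assumptions for illustration; only the 672-token / 81-frame budget comes from the paper.

```python
# Hedged sketch of flexible-length, coarse-to-fine tokenization.
# A real system would use a learned transformer encoder and a generative
# flow decoder; here we only demonstrate the prefix structure.
import numpy as np

rng = np.random.default_rng(0)

MAX_TOKENS = 672  # full budget for a 10 s, 81-frame video (from the paper)
TOKEN_DIM = 16    # hypothetical per-token dimensionality

def encode(video: np.ndarray) -> np.ndarray:
    """Toy stand-in encoder: maps a video to MAX_TOKENS ordered tokens."""
    flat = np.resize(video.reshape(-1), MAX_TOKENS * TOKEN_DIM)
    return flat.reshape(MAX_TOKENS, TOKEN_DIM)

def nested_dropout_prefix(tokens: np.ndarray, rng_) -> np.ndarray:
    """Training-time trick (assumed): reconstructing from a random prefix
    forces early tokens to carry the most abstract information."""
    k = int(rng_.integers(1, tokens.shape[0] + 1))
    return tokens[:k]

def tokens_for_budget(tokens: np.ndarray, budget: int) -> np.ndarray:
    """Inference-time: adapt the token count to downstream needs."""
    return tokens[:budget]

video = rng.standard_normal((81, 8, 8, 3))    # tiny 81-frame dummy video
tokens = encode(video)
coarse = tokens_for_budget(tokens, 64)        # abstract content only
full = tokens_for_budget(tokens, MAX_TOKENS)  # full detail
```

Because every shorter sequence is a strict prefix of every longer one, a single tokenizer serves many compute budgets: a generator can be trained on 64 tokens for semantics-level modeling or 672 for full detail, without re-encoding the video.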