ArXiv TLDR

Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation

arXiv: 2604.24763

Zhiheng Liu, Weiming Ren, Xiaoke Huang, Shoufa Chen, Tianhong Li + 10 more

cs.CV

TLDR

Tuna-2 is a unified multimodal model that handles both understanding and generation directly from pixel embeddings, simplifying the architecture by dropping pretrained vision encoders while matching or outperforming encoder-based designs on multimodal benchmarks.

Key contributions

  • Introduces Tuna-2, a unified multimodal model based on native pixel embeddings for understanding and generation.
  • Simplifies the architecture by discarding modular vision encoders (e.g., VAEs and representation encoders) in favor of simple patch embedding layers (see the sketch after this list).
  • Achieves state-of-the-art results on multimodal benchmarks, showing that unified pixel-space modelling can match latent-space approaches for high-quality image generation.
  • Encoder-free design offers stronger multimodal understanding at scale, particularly for fine-grained perception.
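The patch-embedding contribution is concrete enough to sketch. Below is a minimal, illustrative example of what an encoder-free front-end looks like in PyTorch: raw pixels are split into non-overlapping patches and linearly projected into the transformer's hidden size, with no VAE or pretrained vision encoder in between. The class name, patch size, and hidden dimension are placeholders chosen for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Illustrative encoder-free patch embedding: raw pixels -> token sequence.

    Splits an image into non-overlapping patches and projects each patch to the
    model's hidden size. Patch size and hidden dim are placeholders, not values
    from the Tuna-2 paper.
    """

    def __init__(self, patch_size: int = 16, in_channels: int = 3, hidden_dim: int = 1024):
        super().__init__()
        # A strided convolution is equivalent to a per-patch linear projection.
        self.proj = nn.Conv2d(in_channels, hidden_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (B, 3, H, W) raw image tensor
        x = self.proj(pixels)                # (B, hidden_dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)     # (B, num_patches, hidden_dim)
        return x                             # visual tokens fed to the unified LM


if __name__ == "__main__":
    embed = PatchEmbed()
    tokens = embed(torch.randn(1, 3, 256, 256))
    print(tokens.shape)  # torch.Size([1, 256, 1024])
```

The strided convolution is just a convenient way to apply the same linear projection to every patch; the point is that the visual tokens are learned end-to-end with the language model rather than produced by a frozen or separately pretrained encoder.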

Why it matters

This paper demonstrates that pretrained vision encoders are not essential for multimodal models. End-to-end pixel-space learning offers a scalable and simpler path to superior visual representations for both generation and perception tasks. This could lead to more efficient and powerful multimodal AI.

Original Abstract

Unified multimodal models typically rely on pretrained vision encoders and use separate visual representations for understanding and generation, creating misalignment between the two tasks and preventing fully end-to-end optimization from raw pixels. We introduce Tuna-2, a native unified multimodal model that performs visual understanding and generation directly based on pixel embeddings. Tuna-2 drastically simplifies the model architecture by employing simple patch embedding layers to encode visual input, completely discarding the modular vision encoder designs such as the VAE or the representation encoder. Experiments show that Tuna-2 achieves state-of-the-art performance in multimodal benchmarks, demonstrating that unified pixel-space modelling can fully compete with latent-space approaches for high-quality image generation. Moreover, while the encoder-based variant converges faster in early pretraining, Tuna-2's encoder-free design achieves stronger multimodal understanding at scale, particularly on tasks requiring fine-grained visual perception. These results show that pretrained vision encoders are not necessary for multimodal modelling, and end-to-end pixel-space learning offers a scalable path toward stronger visual representations for both generation and perception.
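On the generation side, the abstract stresses that images are produced directly in pixel space rather than decoded from a VAE latent. A rough, hypothetical sketch of the symmetric output step is an "unpatchify" head that maps each predicted patch token back to raw pixel values with a single linear projection; the class name and dimensions below are assumptions for illustration, not the paper's actual output head.

```python
import torch
import torch.nn as nn

class PixelUnpatchify(nn.Module):
    """Illustrative decoder-free output head: patch tokens -> raw pixels.

    Sketches how a pixel-space generator could map transformer outputs back to
    an image with one linear projection instead of a VAE decoder. Patch size
    and hidden dim are placeholders, not values from the Tuna-2 paper.
    """

    def __init__(self, patch_size: int = 16, out_channels: int = 3, hidden_dim: int = 1024):
        super().__init__()
        self.patch_size = patch_size
        self.out_channels = out_channels
        # Predict every pixel value of a patch from its token.
        self.proj = nn.Linear(hidden_dim, patch_size * patch_size * out_channels)

    def forward(self, tokens: torch.Tensor, height: int, width: int) -> torch.Tensor:
        # tokens: (B, num_patches, hidden_dim), num_patches == (H/ps) * (W/ps)
        B = tokens.shape[0]
        ps, c = self.patch_size, self.out_channels
        x = self.proj(tokens)                                # (B, N, ps*ps*c)
        x = x.view(B, height // ps, width // ps, ps, ps, c)  # grid of patches
        x = x.permute(0, 5, 1, 3, 2, 4)                      # (B, c, H/ps, ps, W/ps, ps)
        return x.reshape(B, c, height, width)                # (B, c, H, W) image
```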

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.