ArXiv TLDR

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

arXiv: 2010.11929

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, et al.

cs.CV · cs.AI · cs.LG

TLDR

This paper demonstrates that a pure Transformer applied directly to sequences of image patches can match or exceed state-of-the-art convolutional networks on image classification, provided it is first pre-trained on large amounts of data.

Key contributions

  • Introduces Vision Transformer (ViT), which treats an image as a sequence of fixed-size patches that a standard Transformer can consume directly (a minimal sketch follows this list).
  • Shows ViT achieves excellent results on multiple image recognition benchmarks after large-scale pre-training.
  • Demonstrates that ViT requires substantially fewer computational resources to pre-train than state-of-the-art CNNs.
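
To make the patch-sequence idea concrete, here is a minimal PyTorch sketch of the patch-embedding step. It is an illustration under stated assumptions (224×224 input, 16×16 patches, 768-dimensional embeddings, as in ViT-Base), not the authors' code; the class and variable names are hypothetical.

```python
# Hypothetical sketch of ViT-style patch embedding, not the paper's code.
# An image is split into non-overlapping 16x16 patches; each patch is
# flattened and linearly projected, yielding a token sequence a standard
# Transformer encoder can process.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A stride-p convolution with a p x p kernel is equivalent to
        # flattening each p x p patch and applying a shared linear projection.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, 768): 196 patch tokens

embed = PatchEmbed()
tokens = embed(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 768])
```

Each 224×224 image thus becomes a sequence of 196 patch tokens; in the paper, ViT then prepends a learnable classification token and adds position embeddings before feeding the sequence to the Transformer encoder.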

Why it matters

This work challenges the conventional reliance on convolutional neural networks for vision tasks by showing that Transformers alone, when scaled up and pre-trained on sufficient data, can match or surpass CNN performance. It opens new avenues for carrying over advances in NLP architectures and large-scale pre-training to computer vision, potentially simplifying model design and improving training efficiency.

Original Abstract

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
