Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin
TLDR
The paper introduces the Transformer, a novel neural network architecture based solely on attention mechanisms that outperforms traditional recurrent and convolutional models in sequence transduction tasks like machine translation.
Key contributions
- Proposes the Transformer architecture, which eliminates recurrence and convolutions and relies entirely on attention mechanisms (a minimal sketch of the core attention operation follows this list).
- Demonstrates superior translation quality and faster training times compared to state-of-the-art models on WMT 2014 English-to-German and English-to-French tasks.
- Shows the Transformer’s versatility by successfully applying it to English constituency parsing with both large and limited datasets.
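The operation these contributions build on is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. The NumPy sketch below illustrates just that formula; the function name, shapes, and toy data are illustrative and not from the paper, and the full model adds multi-head projections, masking, and layer stacking on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V — the core
    operation the Transformer stacks in place of recurrence."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d_k)
    # to keep softmax gradients from vanishing at large dimensions.
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    # Softmax over the key axis turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy example (hypothetical sizes): 4 query positions attending over
# 6 key/value positions, with d_k = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```

Because every position attends to every other position in a single matrix product, the whole sequence can be processed in parallel, which is what removes the sequential bottleneck of recurrent models.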
Why it matters
The paper fundamentally changes how sequence modeling is approached: by removing the need for recurrent or convolutional layers, it enables far more parallelizable and efficient training. The Transformer's strong performance and generalizability have since driven major advances in natural language processing and beyond, shaping a wide range of applications and research directions.
Original Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.