BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
TLDR
BERT is a deep bidirectional transformer model pre-trained on unlabeled text that achieves state-of-the-art results across various natural language processing tasks with minimal fine-tuning.
Key contributions
- Introduces a novel bidirectional pre-training approach, the "masked language model" objective, that lets the model condition on both left and right context simultaneously in all layers.
- Enables fine-tuning with just one additional output layer, simplifying adaptation to diverse NLP tasks (sketched in code after this list).
- Achieves significant performance improvements on benchmarks like GLUE, MultiNLI, and SQuAD.
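The "one additional output layer" recipe is easiest to see in code. Below is a minimal sketch, not from the paper itself, of fine-tuning BERT for a sentence-classification task: a pre-trained encoder plus a single linear classifier over the [CLS] token. It assumes the Hugging Face `transformers` library and PyTorch; the model name and the binary-label task are illustrative choices, not the paper's setup.

```python
# Sketch: BERT fine-tuning = pre-trained encoder + one linear output layer.
# Assumes `pip install torch transformers`; model/task choices are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

# The single task-specific layer: a linear classifier on the [CLS] vector.
num_labels = 2  # e.g., binary sentiment (hypothetical task)
classifier = nn.Linear(encoder.config.hidden_size, num_labels)

inputs = tokenizer("BERT is conceptually simple.", return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state  # (batch, seq_len, hidden)
cls_vector = hidden[:, 0, :]                  # representation of [CLS]
logits = classifier(cls_vector)               # task-specific scores

# During fine-tuning, the encoder and the new layer are trained jointly.
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))
loss.backward()
```

Because only the classifier is new, the same pre-trained weights adapt to question answering, inference, or tagging by swapping this one output layer.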
Why it matters
This paper matters because it presents a fundamentally new way to pre-train language models that captures richer contextual information, leading to substantial improvements across many NLP tasks. By simplifying the fine-tuning process and setting new performance standards, BERT has become a foundational model that accelerates research and applications in natural language understanding.
Original Abstract
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
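The bidirectional conditioning the abstract describes comes from masked language modeling: hide a token and predict it from context on both sides. The snippet below is a hedged illustration of that objective at inference time, again using the Hugging Face `transformers` library rather than the paper's original TensorFlow release; the example sentence is made up.

```python
# Sketch of the masked-LM objective behind BERT's bidirectionality:
# predict a hidden token from both its left and right context.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab)

# Locate the masked position and take the highest-scoring token.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
predicted_id = logits[0, mask_index].argmax().item()
print(tokenizer.decode([predicted_id]))  # words on both sides inform this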