wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
TLDR
wav2vec 2.0 introduces a self-supervised learning framework for speech that achieves state-of-the-art recognition performance with significantly less labeled data by leveraging contrastive learning on masked latent speech representations.
Key contributions
- Masks the speech input in latent space and solves a contrastive task over jointly learned quantized latent representations (see the sketch after this list).
- Outperforms the best previous semi-supervised methods using only self-supervised pre-training followed by fine-tuning on limited labeled data.
- Reaches 4.8/8.2 WER on Librispeech test-clean/other with just ten minutes of labeled data after pre-training on 53k hours of unlabeled speech.
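The core objective pairs each masked time step's transformer output with its quantized latent and contrasts it against distractors sampled from other masked steps; the paper uses cosine similarity, a temperature of κ = 0.1, and 100 distractors per step. Below is a minimal PyTorch sketch of that InfoNCE-style loss; the function name, tensor shapes, and the way negatives are passed in are illustrative, not the fairseq implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, distractors, temperature=0.1):
    """Contrastive objective over masked time steps (simplified sketch).

    context:     (T, D) transformer outputs at the masked positions
    targets:     (T, D) quantized latents for the same positions
    distractors: (T, K, D) negatives drawn from other masked positions
    """
    # Candidate set per time step: the true quantized latent plus K negatives.
    candidates = torch.cat([targets.unsqueeze(1), distractors], dim=1)  # (T, K+1, D)
    # Cosine similarity between each context vector and its candidates.
    sims = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1)  # (T, K+1)
    # The true target sits at index 0, so the loss reduces to
    # cross-entropy against label 0 over K+1 candidates.
    labels = torch.zeros(context.size(0), dtype=torch.long)
    return F.cross_entropy(sims / temperature, labels)
```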
Why it matters
The paper demonstrates that high-quality speech recognition systems can be built with drastically reduced reliance on expensive labeled data by leveraging large amounts of unlabeled audio through self-supervised learning. This makes speech technology more accessible and scalable, especially for low-resource languages and domains.
Original Abstract
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
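The "quantization of the latent representations which are jointly learned" refers to the paper's product quantization: each latent is split into groups, and one codebook entry per group is selected differentiably with a straight-through Gumbel softmax (the paper uses G = 2 groups of V = 320 entries and anneals the Gumbel temperature from 2 down to 0.5). A minimal sketch follows, with the class name and dimensions as placeholders and the paper's final linear projection omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelQuantizer(nn.Module):
    """Simplified product quantizer in the spirit of wav2vec 2.0."""

    def __init__(self, dim=512, groups=2, entries=320):
        super().__init__()
        self.groups = groups
        self.proj = nn.Linear(dim, groups * entries)  # logits over codebook entries
        self.codebook = nn.Parameter(torch.randn(groups, entries, dim // groups))

    def forward(self, z, tau=2.0):
        # z: (T, dim) latents from the convolutional feature encoder.
        logits = self.proj(z).view(z.size(0), self.groups, -1)  # (T, G, V)
        # Hard one-hot selection in the forward pass, soft gradients backward.
        onehot = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
        # Pick one entry per group and concatenate the group vectors.
        q = torch.einsum("tgv,gvd->tgd", onehot, self.codebook)
        return q.reshape(z.size(0), -1)  # (T, dim)
```

Because selection is hard in the forward pass but soft in the backward pass, the codebooks receive gradients from the contrastive loss and are learned jointly with the rest of the model.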