ArXiv TLDR

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness

arXiv: 2205.14135

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré

cs.LG

TLDR

FlashAttention is an IO-aware exact attention algorithm that significantly speeds up Transformer training and enables longer context lengths by optimizing GPU memory access patterns.

Key contributions

  • Introduces FlashAttention, an IO-aware exact attention method that uses tiling to cut reads/writes between GPU high-bandwidth memory (HBM) and on-chip SRAM (see the sketch after this list).
  • Achieves substantial training speedups over strong baselines: 15% end-to-end on BERT-large (seq. 512, vs. the MLPerf 1.1 record), 3× on GPT-2 (seq. 1K), and 2.4× on the Long-Range Arena benchmark (seq. 1K–4K).
  • Extends to block-sparse attention, yielding an approximate attention algorithm faster than existing approximate methods and enabling much longer sequences, which improves model quality and unlocks new capabilities such as better-than-chance accuracy on Path-X.
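
The core idea behind the tiling is an online (streaming) softmax: attention is computed block by block, so at any moment only small tiles of K and V need to live in fast on-chip memory, while running max/sum statistics keep the result exact. The NumPy sketch below illustrates that recurrence; the function name, block sizes, and the absence of masking, dropout, and a fused backward pass are simplifications of ours, not the authors' fused CUDA kernel.

```python
# Illustrative sketch of FlashAttention-style tiled attention with an online
# (streaming) softmax, written in NumPy for clarity.
import numpy as np

def tiled_attention(Q, K, V, block_q=64, block_k=64):
    """Exact softmax attention computed block by block, so only small tiles
    of K and V are needed at once (the role that on-chip SRAM plays on GPU)."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)

    for qs in range(0, N, block_q):
        q = Q[qs:qs + block_q] * scale          # (Bq, d) query tile
        # Running statistics for the online softmax over this query tile.
        m = np.full(q.shape[0], -np.inf)        # running row max
        l = np.zeros(q.shape[0])                # running softmax denominator
        acc = np.zeros((q.shape[0], d))         # unnormalized output

        for ks in range(0, N, block_k):
            k = K[ks:ks + block_k]              # (Bk, d) key tile
            v = V[ks:ks + block_k]              # (Bk, d) value tile
            s = q @ k.T                         # (Bq, Bk) score tile

            m_new = np.maximum(m, s.max(axis=1))   # updated row max
            p = np.exp(s - m_new[:, None])         # probabilities vs. new max
            correction = np.exp(m - m_new)         # rescale old statistics
            l = l * correction + p.sum(axis=1)
            acc = acc * correction[:, None] + p @ v
            m = m_new

        O[qs:qs + block_q] = acc / l[:, None]
    return O

# Quick check against naive attention on random data.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 64)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(64)
ref = np.exp(S - S.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref, atol=1e-6)
```

Because the rescaling by exp(m − m_new) keeps the partial sums consistent across key blocks, the result matches naive attention exactly, which is what distinguishes FlashAttention from approximate-attention methods.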

Why it matters

This paper addresses the critical bottleneck of quadratic time and memory complexity in Transformer self-attention by focusing on IO efficiency between levels of the GPU memory hierarchy. By optimizing memory access patterns rather than approximating attention, FlashAttention delivers exact results with significant speed and memory improvements. This enables training larger, higher-quality models with longer context windows, unlocking long-sequence understanding tasks that were previously infeasible under hardware constraints.

Original Abstract

Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware -- accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3× speedup on GPT-2 (seq. length 1K), and 2.4× speedup on long-range arena (seq. length 1K-4K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
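
For readers who want the bounds behind "fewer HBM accesses": with N the sequence length, d the head dimension, and M the on-chip SRAM size, the paper's IO-complexity analysis compares, roughly,

$$
\text{standard attention: } \Theta(Nd + N^2) \text{ HBM accesses}
\qquad\text{vs.}\qquad
\text{FlashAttention: } \Theta\!\left(\frac{N^2 d^2}{M}\right) \text{ HBM accesses,}
$$

which is markedly smaller whenever d² is much smaller than M, as it is for typical head dimensions (64–128) on modern GPUs.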
