ArXiv TLDR

Characterizing the Expressivity of Local Attention in Transformers

arXiv: 2605.00768

Jiaoda Li, Ryan Cotterell

cs.CL

TLDR

This paper formally explains why local attention improves Transformer quality: adding it strictly enlarges the class of languages the model can recognize, making hybrid global-local models more expressive than global-only ones.

Key contributions

  • Local attention introduces a second temporal operator, strictly enlarging the class of regular languages a transformer can recognize (see the illustrative operator semantics after this list).
  • Global and local attention are expressively complementary; neither subsumes the other.
  • Combining global and local attention yields the richest of these expressive fragments.
  • Experiments on formal language recognition and natural language modeling validate that hybrid models outperform global-only counterparts.
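
For intuition about what a "second temporal operator" means, below are the standard semantics of two past-time temporal operators over a finite string w = w_1 ... w_n. Which operators the paper actually adopts is not specified in this summary; the pairing sketched here (global attention with an unbounded-past operator, local attention with a bounded-past one) is an illustrative assumption, not the paper's definition.

    % Standard past-time LTL semantics over a finite word w = w_1 ... w_n.
    % "Once" looks arbitrarily far into the past (unbounded, loosely like global attention);
    % "Yesterday" looks exactly one step back (a bounded window, loosely like local attention).
    % The pairing with attention types is an illustrative assumption.
    \begin{align*}
      w, i &\models \mathsf{Once}\,\varphi
        &&\iff \exists j \le i:\ w, j \models \varphi \\
      w, i &\models \mathsf{Yesterday}\,\varphi
        &&\iff i > 1 \ \text{and}\ w, i-1 \models \varphi
    \end{align*}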

Why it matters

Local attention often improves Transformer performance, but the reason has been unclear. This paper provides a formal, theoretical explanation for this phenomenon: combining local and global attention unlocks greater expressive power, which translates into better results on formal language recognition and natural language modeling.

Original Abstract

The transformer is the most popular neural architecture for language modeling. The cornerstone of the transformer is its global attention mechanism, which lets the model aggregate information from all preceding tokens before generating the next token. One common variant of attention is called local attention, which restricts each token to aggregating information from a bounded window of predecessors, reducing the quadratic cost of global attention to linear. Although this restriction is usually motivated by efficiency, it has also been found to improve model quality, a phenomenon that has so far lacked a satisfactory explanation. We provide a formal account of this phenomenon in terms of recognizer expressivity. It has been shown that fixed-precision transformers with global attention correspond to a fragment of linear temporal logic containing a single past operator. We additionally prove that adding local attention introduces a second temporal operator, strictly enlarging the class of recognizable regular languages. Moreover, global and local attention are expressively complementary: neither subsumes the other, and combining them yields the richest fragment. Experiments on formal language recognition and natural language modeling corroborate the theory, showing that hybrid global-local transformers outperform their global-only counterparts.
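
To make the mechanism described in the abstract concrete, here is a minimal NumPy sketch (not from the paper) contrasting a global causal mask with a local sliding-window mask. The window size w, the function names, and the toy dimensions are illustrative assumptions; a hybrid model would simply use the global mask in some layers and the local mask in others.

    import numpy as np

    def global_causal_mask(n):
        # Each position may attend to itself and all predecessors:
        # O(n^2) allowed pairs for a length-n sequence.
        return np.tril(np.ones((n, n), dtype=bool))

    def local_causal_mask(n, w):
        # Each position may attend only to itself and the previous w - 1
        # positions: O(n * w) allowed pairs, i.e. linear in n for fixed w.
        i = np.arange(n)[:, None]  # query positions
        j = np.arange(n)[None, :]  # key positions
        return (j <= i) & (j > i - w)

    def masked_attention(q, k, v, mask):
        # Standard softmax attention; disallowed positions get -inf scores.
        scores = (q @ k.T) / np.sqrt(q.shape[-1])
        scores = np.where(mask, scores, -np.inf)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    # Toy usage: same inputs, different masks.
    n, d, w = 8, 16, 3
    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
    out_global = masked_attention(q, k, v, global_causal_mask(n))
    out_local = masked_attention(q, k, v, local_causal_mask(n, w))

Because each row of the local mask allows at most w positions, the work per token is bounded by the window size rather than by the sequence length, which is the linear-versus-quadratic cost difference mentioned in the abstract.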
