Elastic Attention Cores for Scalable Vision Transformers
Alan Z. Song, Yinjie Chen, Mu Nan, Rui Zhang, Jiahang Cao + 6 more
TLDR
VECA introduces elastic core-periphery attention for Vision Transformers, using a small set of learned core tokens to achieve linear-time complexity with performance competitive with recent vision foundation models.
Key contributions
- Proposes VECA, a Vision Transformer with efficient linear-time core-periphery attention.
- Uses a small set of learned 'core' tokens as a communication interface for image patches.
- Achieves O(N) complexity by having patches interact only through a fixed, resolution-invariant set of cores (see the sketch after this list).
- Enables elastic trade-off between compute and accuracy during inference via nested training.
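The digest and abstract describe the mechanism but include no code, so the following is a minimal PyTorch sketch of how one such core-periphery layer could work; the class name `CorePeripheryBlock`, the residual layout, and defaults like `num_cores` are illustrative assumptions, not the authors' implementation. The key property: patches never attend to each other. Cores first read from all N patches, then each patch reads back from the C updated cores, so both steps cost O(N·C).

```python
import torch
import torch.nn as nn

class CorePeripheryBlock(nn.Module):
    """Sketch of one VECA-style layer: N patch tokens communicate only
    through C core tokens, so attention cost is O(N*C) rather than O(N^2)."""

    def __init__(self, dim, num_cores=16, num_heads=8):
        super().__init__()
        # Learned core embeddings, initialized from scratch (used at the first layer).
        self.cores = nn.Parameter(0.02 * torch.randn(1, num_cores, dim))
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patches, cores=None):
        if cores is None:  # first layer: broadcast the learned cores over the batch
            cores = self.cores.expand(patches.size(0), -1, -1)
        # Step 1: cores gather information from all N patches -- O(N*C).
        cores = cores + self.read(cores, patches, patches, need_weights=False)[0]
        # Step 2: each patch reads back from the C updated cores -- O(N*C).
        patches = patches + self.write(patches, cores, cores, need_weights=False)[0]
        # The full set of N patches is kept (no C-way bottleneck), and the
        # cores are propagated to the next layer.
        return patches, cores
```

Stacking these blocks and threading `cores` through them preserves the full patch sequence while keeping total attention cost linear in N for a fixed C.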
Why it matters
Vision Transformers struggle with high-resolution images because self-attention cost grows quadratically with the number of patches. VECA offers a scalable alternative by rethinking attention around a small, fixed set of core tokens, making ViTs practical for more demanding visual tasks such as high-resolution dense prediction.
Original Abstract
Vision Transformers (ViTs) achieve strong data-driven scaling by leveraging all-to-all self-attention. However, this flexibility incurs a computational cost that scales quadratically with image resolution, limiting ViTs in high-resolution domains. Underlying this approach is the assumption that pairwise token interactions are necessary for learning rich visual-semantic representations. In this work, we challenge this assumption, demonstrating that effective visual representations can be learned without any direct patch-to-patch interaction. We propose VECA (Visual Elastic Core Attention), a vision transformer architecture that uses efficient linear-time core-periphery structured attention enabled by a small set of learned cores. In VECA, these cores act as a communication interface: patch tokens exchange information exclusively through the core tokens, which are initialized from scratch and propagated across layers. Because the $N$ image patches only directly interact with a resolution-invariant set of $C$ learned "core" embeddings, this yields linear complexity $O(N)$ for a predetermined $C$, bypassing quadratic scaling. Compared to prior cross-attention architectures, VECA maintains and iteratively updates the full set of $N$ input tokens, avoiding a small $C$-way bottleneck. Combined with nested training along the core axis, our model can elastically trade off compute and accuracy during inference. Across classification and dense tasks, VECA achieves performance competitive with the latest vision foundation models while reducing computational cost. Our results establish elastic core-periphery attention as a scalable alternative building block for Vision Transformers.
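On the elastic axis, "nested training along the core axis" suggests that any prefix of the core tokens forms a usable sub-model. The sketch below is a hedged illustration of that idea, reusing `CorePeripheryBlock` from the previous sketch; the helpers `elastic_forward` and `training_step` and the budget values are assumptions, not the paper's recipe. Inference picks a core budget to match its compute target, while training randomizes it:

```python
import random

def elastic_forward(blocks, patches, core_budget):
    """Run stacked core-periphery blocks (e.g. a list of CorePeripheryBlock)
    using only the first `core_budget` core tokens."""
    # Start from a truncated prefix of the first block's learned cores.
    cores = blocks[0].cores[:, :core_budget].expand(patches.size(0), -1, -1)
    for block in blocks:
        patches, cores = block(patches, cores)  # cores propagate across layers
    return patches

# Hypothetical nested-training step: randomly sample the active prefix length
# so every prefix stays accurate and can be selected at inference time.
def training_step(blocks, patches, budgets=(4, 8, 16)):
    return elastic_forward(blocks, patches, random.choice(budgets))
```

Because attention cost scales with N·C, halving the core budget roughly halves attention compute at inference, with no retraining required.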