ArXiv TLDR

Better Models, Faster Training: Sigmoid Attention for single-cell Foundation Models

arXiv:2604.27124

Vijay Sadashivaiah, Georgios Dasoulas, Judith Mueller, Soumya Ghosh

cs.LG, q-bio.QM

TLDR

Sigmoid attention improves single-cell foundation models by providing better representations, faster training, and enhanced stability compared to softmax.

Key contributions

  • Sigmoid attention yields 25% higher cell-type separation and better cell-type cohesion in single-cell models; the drop-in substitution is sketched below.
  • Models with sigmoid attention train up to 10% faster and more stably, eliminating softmax's inherent sources of instability.
  • Theoretically grounded: sigmoid has globally bounded derivatives (≤ 0.25) and a diagonal Jacobian, which together help prevent gradient explosions.
  • Introduces TritonSigmoid, an efficient GPU kernel (515 TFLOPS on H100s) with native padding support that outperforms FlashAttention-2 and FlashSigmoid.
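
The substitution itself is small. The sketch below shows standard scaled dot-product attention next to a sigmoid variant in PyTorch; the scalar bias initialized to -log(sequence length) is a common convention from prior sigmoid-attention work and is an assumption here, not necessarily this paper's exact parameterization or its Triton kernel.

```python
import math
import torch

def softmax_attention(q, k, v):
    # Standard scaled dot-product attention: softmax normalizes each row of
    # scores, so every output depends on every key position (dense coupling).
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

def sigmoid_attention(q, k, v, bias=None):
    # Drop-in variant: each score is squashed independently by a sigmoid,
    # whose derivative is globally bounded by 0.25.
    # The -log(n) bias is a common initialization from prior sigmoid-attention
    # work (assumed here), keeping early outputs near softmax's scale.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if bias is None:
        bias = -math.log(k.size(-2))
    return torch.sigmoid(scores + bias) @ v

# Toy usage with shapes (batch, heads, tokens, head_dim)
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)
print(sigmoid_attention(q, k, v).shape)  # torch.Size([2, 8, 128, 64])
```

Because sigmoid scores are not normalized across positions, padded keys can simply be masked to a large negative value and contribute essentially nothing to the output, which is why native padding support matters for variable-length biological sequences.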

Why it matters

This paper establishes sigmoid attention as a drop-in replacement for softmax that substantially improves the representations, training speed, and stability of single-cell biological foundation models. Its theoretical grounding and empirical gains, together with an optimized open-source GPU kernel, make it a practical advance for large-scale biological sequence modeling.
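
The stability argument quoted in the abstract below rests on two elementary facts, restated here as a brief sketch (standard calculus, not the paper's full analysis): the elementwise sigmoid has a diagonal Jacobian with entries bounded by 1/4, whereas each softmax output depends on every score in its row.

$$
\frac{\partial\,\sigma(s_i)}{\partial s_j}
  = \sigma(s_i)\bigl(1-\sigma(s_i)\bigr)\,\delta_{ij} \;\le\; \tfrac{1}{4},
\qquad
\frac{\partial\,\mathrm{softmax}(s)_i}{\partial s_j}
  = p_i\bigl(\delta_{ij}-p_j\bigr),\quad p=\mathrm{softmax}(s).
$$

Bounded, decoupled entries cap how much any single attention score can amplify a backpropagated gradient, which is the mechanism behind the stress test in which softmax diverges while sigmoid stays stable.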

Original Abstract

Training stable biological foundation models requires rethinking attention mechanisms: we find that using sigmoid attention as a drop in replacement for softmax attention a) produces better learned representations: on six diverse single-cell datasets, sigmoid achieves 25% higher cell-type separation, better cell-type cohesion metrics, and lower validation loss, b) faster training, models with sigmoid attention train up to 10% faster than their softmax counterparts, and c) more stable training by eliminating inherent sources of instability in softmax attention. We establish that sigmoid attention has globally bounded derivatives ($\leq 0.25$) as opposed to softmax, and a diagonal Jacobian structure in contrast with softmax's dense coupling, which together help alleviate training instabilities. In stress tests on 160M-parameter bidirectional attention models trained without gradient clipping on 8K-token sequences, softmax diverges catastrophically, with gradients exploding by four orders of magnitude, while sigmoid remains stable. Finally, we implement and open-source TritonSigmoid, an efficient GPU kernel that achieves 515 TFLOPS on H100 GPUs, outperforming both FlashAttention-2 and FlashSigmoid, with native padding support, which is essential for biological sequences. Our results establish sigmoid attention as both theoretically grounded and empirically superior for biological foundation models. Code is available at https://github.com/MSDLLCpapers/triton-sigmoid

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.