Convergent Evolution: How Different Language Models Learn Similar Number Representations
Deqing Fu, Tianyi Zhou, Mikhail Belkin, Vatsal Sharan, Robin Jia
TLDR
Different language models exhibit convergent evolution, learning similar periodic number representations, though geometric separability requires specific training conditions.
Key contributions
- LMs learn periodic number features with dominant periods 2, 5, and 10, demonstrating convergent evolution (see the Fourier-analysis sketch after this list).
- A two-tiered hierarchy exists: period-T spikes in the Fourier domain are common across architectures, but geometrically separable features that support linear mod-T classification are rarer.
- Fourier domain sparsity is necessary but insufficient for geometrically separable mod-T number representations.
- Geometric separability arises either from complementary co-occurrence signals in general language data (text-number co-occurrence and cross-number interaction) or from multi-token addition problems, with data, architecture, optimizer, and tokenizer all playing key roles.
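To make the first two contributions concrete, here is a minimal sketch of the kind of Fourier analysis involved: given an (N, d) matrix of number embeddings for the integers 0 through N-1, inspect the power spectrum along the number axis for spikes at the periods the paper reports (T = 2, 5, 10). This is not the authors' code; the embedding matrix below is a synthetic stand-in with an injected period-10 component, and `dominant_periods` is a hypothetical helper name.

```python
# Sketch: detect dominant periods in number embeddings via the Fourier spectrum.
import numpy as np

def dominant_periods(number_embeddings, top_k=3):
    """Return the top_k periods with the most spectral power, averaged over dims."""
    N, d = number_embeddings.shape
    centered = number_embeddings - number_embeddings.mean(axis=0)  # drop the DC component
    spectrum = np.abs(np.fft.rfft(centered, axis=0)) ** 2          # power per (frequency, dim)
    power = spectrum.mean(axis=1)                                  # average over embedding dims
    freqs = np.fft.rfftfreq(N)                                     # cycles per integer step
    order = np.argsort(power[1:])[::-1] + 1                        # rank frequencies, skipping zero
    return [(1.0 / freqs[i], power[i]) for i in order[:top_k]]

# Synthetic stand-in for real model embeddings: random noise plus an
# injected period-10 component, just to exercise the analysis.
rng = np.random.default_rng(0)
N, d = 200, 64
numbers = np.arange(N)
emb = rng.normal(size=(N, d))
emb[:, 0] += np.cos(2 * np.pi * numbers / 10)
emb[:, 1] += np.sin(2 * np.pi * numbers / 10)

for period, power in dominant_periods(emb):
    print(f"period ~ {period:.1f}, power = {power:.3f}")
```

On real model embeddings, the paper's finding is that spikes at periods 2, 5, and 10 show up across Transformers, linear RNNs, LSTMs, and classical word embeddings alike.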
Why it matters
This paper reveals how diverse language models independently develop similar numerical understanding, highlighting a "convergent evolution" in feature learning. It explains why some models achieve more robust, geometrically separable number representations, offering insights into improving numerical reasoning.
Original Abstract
Language models trained on natural text learn to represent numbers using periodic features with dominant periods at $T=2, 5, 10$. In this paper, we identify a two-tiered hierarchy of these features: while Transformers, Linear RNNs, LSTMs, and classical word embeddings trained in different ways all learn features that have period-$T$ spikes in the Fourier domain, only some learn geometrically separable features that can be used to linearly classify a number mod-$T$. To explain this incongruity, we prove that Fourier domain sparsity is necessary but not sufficient for mod-$T$ geometric separability. Empirically, we investigate when model training yields geometrically separable features, finding that the data, architecture, optimizer, and tokenizer all play key roles. In particular, we identify two different routes through which models can acquire geometrically separable features: they can learn them from complementary co-occurrence signals in general language data, including text-number co-occurrence and cross-number interaction, or from multi-token (but not single-token) addition problems. Overall, our results highlight the phenomenon of convergent evolution in feature learning: A diverse range of models learn similar features from different training signals.
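The abstract's notion of geometric separability can be probed directly. The sketch below is an assumption-laden illustration, not the paper's exact protocol: it trains a logistic-regression probe to predict n mod T from the representation of n, and held-out accuracy well above chance (1/T) is evidence that the features are geometrically separable, whereas periodic features without such structure would leave the probe near chance.

```python
# Sketch: linear probe for mod-T geometric separability of number embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def mod_T_probe_accuracy(number_embeddings, T=10, seed=0):
    """Held-out accuracy of a linear classifier predicting n mod T from its embedding."""
    N = number_embeddings.shape[0]
    labels = np.arange(N) % T
    X_tr, X_te, y_tr, y_te = train_test_split(
        number_embeddings, labels, test_size=0.25, random_state=seed, stratify=labels
    )
    probe = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Synthetic stand-in: embeddings that encode n mod 10 via two sinusoidal dims.
rng = np.random.default_rng(0)
N, d = 500, 32
n = np.arange(N)
emb = rng.normal(scale=0.1, size=(N, d))
emb[:, 0] += np.cos(2 * np.pi * n / 10)
emb[:, 1] += np.sin(2 * np.pi * n / 10)
print(f"mod-10 probe accuracy: {mod_T_probe_accuracy(emb, T=10):.2f}")
```

On the synthetic embeddings above the probe should score well above the 0.10 chance level; on real models, the paper's two-tiered hierarchy says that some models with clear Fourier spikes nonetheless lack this linearly decodable mod-T structure.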