ArXiv TLDR

Is More Data Worth the Cost? Dataset Scaling Laws in a Tiny Attention-Only Decoder

arXiv: 2604.09389

Götz-Henrik Wiegand, Lorena Raichle, Rico Städeli, Tomas Hrycej, Bernhard Bermeitinger, et al.

cs.LG, cs.CL

TLDR

A study of a tiny attention-only decoder finds that training on roughly 30% of the data reaches about 90% of full-data validation token-level accuracy, consistent with dataset scaling laws.

Key contributions

  • Isolated dataset-size effects using a reduced attention-only decoder architecture.
  • Observed smooth performance improvements with diminishing returns as dataset size increased.
  • Found that ~30% of training data yields ~90% of full-data validation token-level accuracy.
  • Confirmed dataset scaling law behavior in a controlled, small-scale setting.
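The power-of-two subset schedule mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the smallest fraction are assumptions.

```python
# Hypothetical sketch of a power-of-two dataset-subset sweep: train on
# 1/64, 1/32, ..., 1/2 of the data, then on the full dataset.
# Names and the starting fraction are illustrative assumptions.

def power_of_two_subsets(n_total, smallest_fraction=1 / 64):
    """Return subset sizes doubling from smallest_fraction up to n_total."""
    sizes = []
    fraction = smallest_fraction
    while fraction < 1:
        sizes.append(int(n_total * fraction))
        fraction *= 2
    sizes.append(n_total)  # final run uses the full dataset
    return sizes

# Sweep training runs over progressively larger subsets.
for size in power_of_two_subsets(1_000_000):
    subset_indices = range(size)  # stand-in for dataset[:size]
    # train_and_evaluate(subset_indices)  # placeholder for the training loop
    pass
```

Each run trains the same reduced architecture on a strictly larger subset, so any performance change can be attributed to dataset size alone.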

Why it matters

This paper offers practical guidance for data usage in resource-restricted environments. Showing that roughly 30% of the training data can achieve about 90% of full-data accuracy helps small labs balance dataset size against computational cost.

Original Abstract

Training Transformer language models is expensive, as performance typically improves with increasing dataset size and computational budget. Although scaling laws describe this trend at large scale, their implications in controlled, smaller-scale settings remain less explored. In this work, we isolate dataset-size effects using a strongly reduced attention-only decoder architecture. By training on progressively larger power-of-two subsets, we observe smooth performance improvements accompanied by clear diminishing returns, consistent with scaling-law behavior. Using only about 30% of the training data is sufficient to reach approximately 90% of the full-data validation token-level accuracy. These results provide actionable insights into dataset scaling in a controlled, component-isolated setting and offer practical guidance for balancing dataset size and computational cost in compute- and data-restricted environments, such as small research labs and exploratory model development.
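Diminishing returns of the kind the abstract describes are commonly modeled with a saturating power law in dataset size. The sketch below uses made-up parameters purely for illustration; the functional form and all values are assumptions, not fitted results from the paper.

```python
# Illustrative saturating power-law model of validation accuracy vs.
# dataset size n: acc(n) = a_max - b * n**(-c).
# The parameters a_max, b, c are invented for demonstration and are
# NOT fitted to the paper's results.

def accuracy(n, a_max=0.62, b=0.9, c=0.35):
    """Predicted token-level accuracy for a dataset of n training examples."""
    return a_max - b * n ** (-c)

full = 1_000_000
# Relative accuracy when training on only 30% of the data.
frac = accuracy(0.3 * full) / accuracy(full)
```

Under such a model the accuracy gap shrinks as a power of `n`, so most of the attainable accuracy is reached well before the full dataset is used, which is the qualitative behavior the paper reports.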
