What do Language Models Learn and When? The Implicit Curriculum Hypothesis
Emmy Liu, Kaiser Sun, Millicent Li, Isabelle Lee, Lindia Tjuatja, et al.
TLDR
LLMs acquire skills during pretraining in a consistent, compositional order that is predictable across models and data mixtures, revealing a structured implicit curriculum.
Key contributions
- Introduces the Implicit Curriculum Hypothesis for LLM pretraining skill acquisition.
- Tracks skill emergence using a suite of simple, composable tasks across model families.
- Finds that the order of skill emergence is highly consistent across models (Spearman ρ = .81) and compositional (see the measurement sketch after this list).
- Demonstrates that skill emergence order is encoded in model representations, enabling prediction of held-out task trajectories.
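The ordering-consistency measurement is easy to sketch. Below is a minimal, hypothetical illustration, not the authors' code: `emergence_step` and `order_consistency` are assumed names, and the 0.5 accuracy threshold is an assumption. The paper only states that emergence is scored as the point where a model reaches a fixed accuracy threshold, and that ordering agreement is measured by Spearman ρ across all model pairs.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def emergence_step(accuracies, threshold=0.5):
    """First checkpoint index at which accuracy crosses the threshold
    (tasks that never emerge sort last)."""
    above = np.asarray(accuracies) >= threshold
    return int(np.argmax(above)) if above.any() else len(above)

def order_consistency(model_accs, threshold=0.5):
    """Mean Spearman rho of emergence orderings over all model pairs.

    model_accs: {model_name: {task_name: per-checkpoint accuracy array}}
    """
    tasks = sorted(next(iter(model_accs.values())))  # shared task suite
    steps = {m: [emergence_step(accs[t], threshold) for t in tasks]
             for m, accs in model_accs.items()}
    rhos = [spearmanr(steps[a], steps[b])[0]
            for a, b in combinations(steps, 2)]
    return float(np.mean(rhos))
```

With 10 models, this yields the 45 pairwise comparisons the abstract reports.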
Why it matters
This paper illuminates the black box of LLM pretraining, revealing a structured, predictable skill acquisition process. Understanding this "implicit curriculum" can inform more efficient training strategies and help diagnose model capabilities beyond simple loss curves.
Original Abstract
Large language models (LLMs) can perform remarkably complex tasks, yet the fine-grained details of how these capabilities emerge during pretraining remain poorly understood. Scaling laws on validation loss tell us how much a model improves with additional compute, but not what skills it acquires in which order. To remedy this, we propose the Implicit Curriculum Hypothesis: pretraining follows a compositional and predictable curriculum across models and data mixtures. We test this by designing a suite of simple, composable tasks spanning retrieval, morphological transformations, coreference, logical reasoning, and mathematics. Using these tasks, we track emergence points across four model families spanning sizes from 410M–13B parameters. We find that emergence orderings of when models reach fixed accuracy thresholds are strikingly consistent ($\rho = .81$ across 45 model pairs), and that composite tasks most often emerge after their component tasks. Furthermore, we find that this structure is encoded in model representations: tasks with similar function vector representations also tend to follow similar trajectories in training. By using the space of representations derived from our task set, we can effectively predict the training trajectories of simple held-out compositional tasks throughout the course of pretraining ($R^2 = .68$–$.84$ across models) without previously evaluating them. Together, these results suggest that pretraining is more structured than loss curves reveal: skills emerge in a compositional order that is consistent across models and readable from their internals.
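The abstract says held-out task trajectories can be predicted from the space of function-vector representations, but does not spell out the predictor. Here is a minimal sketch of one plausible reading: predict a new task's accuracy curve as a similarity-weighted average of known tasks' curves. Everything here is an assumption for illustration (`predict_trajectory`, `temperature`, the softmax weighting), not the paper's method.

```python
import numpy as np

def predict_trajectory(heldout_fv, known_fvs, known_trajs, temperature=1.0):
    """Predict a held-out task's accuracy trajectory as a softmax-weighted
    average (over cosine similarity) of known tasks' trajectories.

    heldout_fv : (d,)   function vector for the held-out task
    known_fvs  : (n, d) function vectors for the evaluated task suite
    known_trajs: (n, t) per-checkpoint accuracies for those tasks
    """
    fv = heldout_fv / np.linalg.norm(heldout_fv)
    K = known_fvs / np.linalg.norm(known_fvs, axis=1, keepdims=True)
    sims = K @ fv                    # cosine similarity to each known task
    w = np.exp(sims / temperature)
    w /= w.sum()
    return w @ known_trajs           # (t,) predicted trajectory

def r_squared(y_true, y_pred):
    """Goodness of fit between a predicted and an observed trajectory."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Scoring such predictions against observed curves is what the reported $R^2 = .68$–$.84$ would correspond to under this reading.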