ArXiv TLDR

Are Emergent Abilities of Large Language Models a Mirage?

arXiv: 2304.15004

Rylan Schaeffer, Brando Miranda, Sanmi Koyejo

cs.AI cs.LG

TLDR

This paper argues that emergent abilities in large language models arise from the choice of evaluation metrics rather than fundamental changes in model behavior as scale increases.

Key contributions

  • Proposes that nonlinear or discontinuous metrics create the illusion of emergent abilities, while linear or continuous metrics reveal smooth performance changes.
  • Validates this hypothesis through mathematical modeling and empirical tests on InstructGPT/GPT-3 and BIG-Bench tasks.
  • Demonstrates that metric choice can induce apparent emergent abilities even in vision tasks across various deep networks.
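The paper's core argument can be illustrated with a toy model: if per-token accuracy improves smoothly with scale, then a nonlinear metric like exact match on a multi-token answer (every token must be correct) still looks like a sudden jump. The sketch below is illustrative only; the scaling curve, constants, and sequence length are assumptions, not the paper's actual parameters.

```python
import math

def per_token_accuracy(n_params, c=1e9, alpha=0.5):
    """Hypothetical smooth scaling curve: accuracy rises gradually with scale."""
    return max(0.0, 1.0 - (c / n_params) ** alpha)

def exact_match(n_params, seq_len=10, **kw):
    """Nonlinear metric: all seq_len tokens must be correct simultaneously,
    so the smooth per-token curve is raised to the power seq_len."""
    return per_token_accuracy(n_params, **kw) ** seq_len

# Per-token (linear) accuracy changes smoothly across scales, while
# exact match stays near zero and then rises sharply -- an apparent
# "emergent ability" produced purely by the choice of metric.
for n in [10**9, 10**10, 10**11, 10**12]:
    print(f"{n:.0e} params: per-token={per_token_accuracy(n):.3f}, "
          f"exact-match={exact_match(n):.3f}")
```

Plotting the two columns against scale reproduces the paper's qualitative point: the same underlying model outputs look smooth under one metric and discontinuous under another.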

Why it matters

Understanding whether emergent abilities are intrinsic to model scaling or artifacts of measurement is crucial for accurately interpreting AI progress and guiding future research. This paper challenges a popular narrative by showing that emergent behaviors may be illusions caused by how performance is quantified, prompting a reevaluation of claims about sudden qualitative leaps in AI capabilities.

Original Abstract

Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous predictable changes in model performance. We present our alternative explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen seemingly emergent abilities in multiple vision tasks across diverse deep networks. Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.
