ArXiv TLDR

A Bayesian Perspective on the Role of Epistemic Uncertainty for Delayed Generalization in In-Context Learning

2604.12434

Abdessamed Qchohi, Simone Rossi

stat.ML cs.LG

TLDR

This paper shows that epistemic uncertainty collapses sharply during grokking in in-context learning, yielding a label-free diagnostic for generalization.

Key contributions

  • Takes a Bayesian perspective on grokking and delayed generalization in in-context learning.
  • Analyzes how predictive uncertainty evolves in transformers trained on modular arithmetic tasks.
  • Finds that epistemic uncertainty collapses sharply at the moment of grokking, yielding a label-free diagnostic of generalization.
  • Provides theoretical support with a simplified Bayesian linear model, linking grokking time and uncertainty dynamics through a shared spectral mechanism.
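The paper's exact posterior approximation is not specified in this summary, but the epistemic-uncertainty diagnostic in the bullets above can be sketched with a standard decomposition: given an ensemble of approximate posterior samples, total predictive entropy splits into an aleatoric part (mean per-member entropy) and an epistemic part (the mutual information between prediction and posterior). A minimal sketch, assuming a deep-ensemble-style approximation and treating the modular arithmetic task as classification (all names and values here are illustrative, not from the paper):

```python
import numpy as np

def uncertainty_decomposition(probs):
    """Decompose predictive uncertainty for an ensemble of posterior samples.

    probs: array of shape (n_members, n_classes), each row one ensemble
    member's softmax output for a single query token.
    Returns (total, aleatoric, epistemic) in nats, where
    epistemic = total - aleatoric is the mutual information between the
    prediction and the approximate posterior.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                        # posterior-averaged predictive
    total = -np.sum(mean_p * np.log(mean_p + eps))     # entropy of the mean
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # mean entropy
    return total, aleatoric, total - aleatoric

# Before grokking: members disagree -> epistemic uncertainty is high.
disagreeing = np.array([[0.9, 0.05, 0.05],
                        [0.05, 0.9, 0.05],
                        [0.05, 0.05, 0.9]])
# After grokking: members agree -> epistemic uncertainty collapses to ~0.
agreeing = np.tile([0.9, 0.05, 0.05], (3, 1))
```

Tracking the epistemic term across training checkpoints requires no test labels, which is what makes it usable as the label-free diagnostic the paper proposes.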

Why it matters

Understanding grokking and delayed generalization in transformers is crucial for reliable AI. This paper provides a novel, label-free diagnostic for generalization: the sharp collapse of epistemic uncertainty during training. It also offers theoretical insight into the spectral mechanism linking grokking to uncertainty dynamics.

Original Abstract

In-context learning enables transformers to adapt to new tasks from a few examples at inference time, while grokking highlights that this generalization can emerge abruptly only after prolonged training. We study task generalization and grokking in in-context learning using a Bayesian perspective, asking what enables the delayed transition from memorization to generalization. Concretely, we consider modular arithmetic tasks in which a transformer must infer a latent linear function solely from in-context examples and analyze how predictive uncertainty evolves during training. We combine approximate Bayesian techniques to estimate the posterior distribution and we study how uncertainty behaves across training and under changes in task diversity, context length, and context noise. We find that epistemic uncertainty collapses sharply when the model groks, making uncertainty a practical label-free diagnostic of generalization in transformers. Additionally, we provide theoretical support with a simplified Bayesian linear model, showing that asymptotically both delayed generalization and uncertainty peaks arise from the same underlying spectral mechanism, which links grokking time to uncertainty dynamics.
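The abstract's simplified Bayesian linear model makes the spectral mechanism concrete: the posterior covariance of linear regression is S = (XᵀX/σ² + I/α)⁻¹, so epistemic variance along each eigendirection is 1/(λᵢ/σ² + 1/α), and directions with small data eigenvalues λᵢ stay uncertain until enough examples resolve them. A toy illustration under assumed values of σ², α, and dimension (this is a generic Bayesian linear regression sketch, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2, alpha = 8, 0.1, 1.0   # dims, noise variance, prior variance (assumed)

def mean_epistemic_variance(X):
    """Average posterior (epistemic) variance of Bayesian linear regression.

    Posterior covariance S = (X^T X / sigma2 + I / alpha)^{-1}; its
    eigenvalues 1 / (lam_i / sigma2 + 1 / alpha) expose the spectral
    mechanism: uncertainty persists in directions the data barely covers.
    """
    S = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / alpha)
    return np.trace(S) / d   # mean variance over eigendirections

# With few examples the design matrix is rank-deficient, so several
# eigendirections keep prior-level variance; with many examples every
# data eigenvalue grows and epistemic variance collapses -- a toy
# analogue of the delayed transition to generalization.
few  = mean_epistemic_variance(rng.normal(size=(4, d)))
many = mean_epistemic_variance(rng.normal(size=(256, d)))
```

In this picture, "grokking time" corresponds to how long it takes the smallest relevant eigenvalues to be resolved, which is why uncertainty peaks and delayed generalization share one mechanism.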
