ArXiv TLDR

Investigation into In-Context Learning Capabilities of Transformers

arXiv: 2604.25858

Rushil Chandrupatla, Leo Bangayan, Sebastian Leng, Arya Mazumdar

cs.LG, cs.AI

TLDR

Empirically maps how input dimension, the number of in-context examples, and the number of pre-training tasks affect transformer in-context learning, identifying the conditions under which ICL succeeds and where benign overfitting emerges.

Key contributions

  • Systematically studies in-context learning (ICL) in transformers on Gaussian-mixture binary classification tasks (see the data-generation sketch after this list).
  • Maps how in-context test accuracy depends on the input dimension, the number of in-context examples, and the number of pre-training tasks.
  • Investigates benign overfitting, where models memorize noisy labels but generalize well on clean data.
  • Identifies the parameter regions in which benign overfitting emerges and characterizes how they depend on data geometry and training exposure.

Why it matters

This paper provides a comprehensive empirical map of the scaling behavior of in-context learning in transformers, clarifying how dimensionality, signal strength, and contextual information determine when ICL succeeds and when it fails. These findings offer insight into model robustness and generalization.

Original Abstract

Transformers have demonstrated a strong ability for in-context learning (ICL), enabling models to solve previously unseen tasks using only example input-output pairs provided at inference time. While prior theoretical work has established conditions under which transformers can perform linear classification in-context, the empirical scaling behavior governing when this mechanism succeeds remains insufficiently characterized. In this paper, we conduct a systematic empirical study of in-context learning for Gaussian-mixture binary classification tasks. Building on the theoretical framework of Frei and Vardi (2024), we analyze how in-context test accuracy depends on three fundamental factors: the input dimension, the number of in-context examples, and the number of pre-training tasks. Using a controlled synthetic setup and a linear in-context classifier formulation, we isolate the geometric conditions under which models successfully infer task structure from context alone. We additionally investigate the emergence of benign overfitting, where models memorize noisy in-context labels while still achieving strong generalization performance on clean test data. Through extensive sweeps across dimensionality, sequence length, task diversity, and signal-to-noise regimes, we identify the parameter regions in which this phenomenon arises and characterize how it depends on data geometry and training exposure. Our results provide a comprehensive empirical map of scaling behavior in in-context classification, highlighting the critical role of dimensionality, signal strength, and contextual information in determining when in-context learning succeeds and when it fails.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.