Task Vector Geometry Underlies Dual Modes of Task Inference in Transformers
Hao Yan, Haolin Yang, Yiqiao Zhong
TLDR
This paper reveals how task vector geometry in transformers enables both in-distribution task retrieval and out-of-distribution adaptation.
Key contributions
- Studies task inference in transformers using a controlled synthetic setting.
- Demonstrates that in-distribution task retrieval and out-of-distribution adaptation coexist in one model.
- In-distribution behavior is governed by Bayesian task retrieval via convex combinations of task vectors.
- Out-of-distribution behavior arises from extrapolative learning in a subspace orthogonal to task vectors.
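The geometry in the bullets above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, the task vectors, and the softmax posterior are all hypothetical stand-ins for the quantities the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d-dimensional hidden states, K learned task vectors.
d, K = 16, 4
task_vectors = rng.normal(size=(K, d))  # one direction per training task

# In-distribution: Bayesian task retrieval. A posterior over the K tasks
# (here a softmax over illustrative log-likelihoods) weights the task
# vectors, so the induced representation is a convex combination of them.
log_likelihoods = rng.normal(size=K)
posterior = np.exp(log_likelihoods - log_likelihoods.max())
posterior /= posterior.sum()
in_dist_rep = posterior @ task_vectors  # lies in the span of task vectors

# Out-of-distribution: extrapolative learning in a subspace nearly
# orthogonal to the task vectors. Project a direction off their span.
Q, _ = np.linalg.qr(task_vectors.T)  # orthonormal basis of the task span
v = rng.normal(size=d)
ood_rep = v - Q @ (Q.T @ v)          # component orthogonal to the span

# The OOD representation has ~zero overlap with every task vector.
print(np.abs(task_vectors @ ood_rep).max())
```

The convex weights make the in-distribution representation an interpolation among known tasks, while the projection step shows why an orthogonal subspace leaves room for behavior that no mixture of training tasks can produce.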
Why it matters
This paper provides a rigorous mathematical foundation for understanding how transformers infer tasks. By clarifying the role of task vector geometry, it sheds light on how models generalize to both familiar and novel situations. This work is crucial for developing more robust and interpretable AI systems.
Original Abstract
Transformers are effective at inferring the latent task from context via two inference modes: recognizing a task seen during training, and adapting to a novel one. Recent interpretability studies have identified task-specific directions, or task vectors, in middle-layer representations that steer model behavior. However, a lack of rigorous foundations hinders connecting internal representations to external model behavior: existing work fails to explain how task-vector geometry is shaped by the training distribution, and what geometry enables out-of-distribution (OOD) generalization. In this paper, we study these questions in a controlled synthetic setting by training small transformers from scratch on latent-task sequence distributions, which allows a principled mathematical characterization. We show that the two inference modes can coexist within a single model. In-distribution behavior is governed by Bayesian task retrieval, implemented internally through convex combinations of learned task vectors. OOD behavior, by contrast, arises through extrapolative task learning, whose representations occupy a subspace nearly orthogonal to the task-vector subspace. Taken together, our results suggest that task-vector geometry, training distributions, and generalization behaviors are closely related.