On the Proper Treatment of Units in Surprisal Theory
Samuel Kiegeland, Vésteinn Snæbjarnarson, Tim Vieira, Ryan Cotterell
TL;DR
This paper disentangles the definition of the unit of analysis from tokenization in surprisal theory, proposing a unified framework for reasoning about surprisal over arbitrary unit inventories.
Key contributions
- Identifies that empirical surprisal work conflates two distinct modeling choices: the definition of the linguistic unit of analysis and the regions of interest over which predictions are evaluated.
- Introduces a unified framework for computing surprisal over diverse unit inventories (see the sketch after this list).
- Argues that analyses should define their units explicitly and treat tokenization as an implementation detail rather than a scientific primitive.
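For orientation, here is a minimal sketch of the quantities involved, using the standard definition of surprisal and a detokenization map relating token strings to unit strings. The symbols u, v, and κ are illustrative notation, not necessarily the paper's own.

```latex
% Surprisal of the t-th unit u_t given the preceding units:
\[
  s(u_t) \;=\; -\log p\bigl(u_t \mid \boldsymbol{u}_{<t}\bigr)
\]
% If the model instead defines a distribution over token strings
% \boldsymbol{v}, and \kappa maps each token string to the string it
% spells out in the chosen unit inventory, then the probability of a
% unit string is in principle a marginal over tokenizations:
\[
  p(\boldsymbol{u}) \;=\; \sum_{\boldsymbol{v}\,:\,\kappa(\boldsymbol{v}) = \boldsymbol{u}} p(\boldsymbol{v})
\]
```

In practice, most pipelines approximate this marginal with the single canonical tokenization of the stimulus, which is exactly the kind of implicit choice the paper argues should be stated outright.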
Why it matters
Surprisal analyses routinely segment stimuli into words while language models assign probability to subword tokens that do not align with those words, so published predictors rest on ad hoc, often implicit, alignment procedures and can yield inconsistent results. By making the unit of analysis and the regions of interest explicit choices, the paper's framework enhances the rigor and interpretability of psycholinguistic research.
Original Abstract
Surprisal theory links human processing effort to the predictability of an upcoming linguistic unit, but empirical work often leaves the notion of a unit underspecified. In practice, experimental stimuli are segmented into linguistically motivated units (e.g., words), while pretrained language models assign probability mass to a fixed token alphabet that typically does not align with those units. As a result, surprisal-based predictors depend implicitly on ad hoc procedures that conflate two distinct modeling choices: the definition of the unit of analysis and the choice of regions of interest over which predictions are evaluated. In this paper, we disentangle these choices and give a unified framework for reasoning about surprisal over arbitrary unit inventories. We argue that surprisal-based analyses should make these choices explicit and treat tokenization as an implementation detail rather than a scientific primitive.
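To make the issue concrete, below is a minimal sketch of the common recipe this framing covers: word-level surprisal obtained by summing the surprisals of the subword tokens spanning each word. The toy model, the tokenizer, and all names (`TOY_LOGPROBS`, `token_logprob`, `word_surprisals`) are illustrative assumptions, not the paper's implementation.

```python
import math
from typing import Callable, Dict, List, Tuple

# Toy stand-in for a causal LM over a fixed token alphabet. These unigram
# log-probabilities are invented for illustration; a real analysis would
# query an actual language model for p(token | context).
TOY_LOGPROBS: Dict[str, float] = {
    "the": math.log(0.10),
    "cat": math.log(0.02),
    "me": math.log(0.03),
    "ow": math.log(0.01),
    "ed": math.log(0.05),
}


def token_logprob(context: List[str], token: str) -> float:
    """Log p(token | context) under the toy model (context ignored here)."""
    del context  # a real LM would condition on this
    return TOY_LOGPROBS[token]


def word_surprisals(
    words: List[str],
    tokenize: Callable[[str], List[str]],
) -> List[Tuple[str, float]]:
    """Surprisal (in nats) of each word, as the sum of its tokens' surprisals.

    By the chain rule, -log p(word | context) equals this sum *provided*
    the tokenizer never merges material across a word boundary -- the
    alignment assumption that usually stays implicit.
    """
    context: List[str] = []
    results: List[Tuple[str, float]] = []
    for word in words:
        surprisal = 0.0
        for tok in tokenize(word):
            surprisal -= token_logprob(context, tok)
            context.append(tok)
        results.append((word, surprisal))
    return results


if __name__ == "__main__":
    # Hypothetical tokenizer: "meowed" is split into three subword tokens.
    toy_tokenizer = {"the": ["the"], "cat": ["cat"], "meowed": ["me", "ow", "ed"]}
    for word, s in word_surprisals(["the", "cat", "meowed"], toy_tokenizer.__getitem__):
        print(f"{word:>7}: {s:.2f} nats")
```

Summing token surprisals is valid here only because the toy tokenizer respects word boundaries; when it does not, the unit-level probability requires the marginalization over tokenizations sketched earlier, which is precisely the distinction the paper asks analysts to make explicit.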