From Ground Truth to Measurement: A Statistical Framework for Human Labeling
Robert Chew, Stephanie Eckman, Christoph Kern, Frauke Kreuter
TLDR
This paper introduces a statistical framework that decomposes human labeling outcomes into interpretable sources of variation, giving a clearer view of data quality in supervised learning.
Key contributions
- Reframes human annotation as a measurement process, moving beyond treating all disagreement as noise.
- Introduces a statistical framework to decompose labeling outcomes into interpretable sources of variation.
- Identifies four key sources of variation: instance difficulty, annotator bias, situational noise, and relational alignment (see the sketch after this list).
- Provides a diagnostic for assessing which error regime, shared or individualized truth, better characterizes a given task.
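To make the decomposition concrete, the sketch below simulates labels from a simple additive crossed-effects model and recovers the variance components with a classical two-way ANOVA. This is a simplified reading for illustration, not the authors' exact specification: the model form, variable names, and parameter values are assumptions, and the sketch omits the relational-alignment component and the individualized-truth regime.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, n_annot = 300, 10

# Hypothetical additive decomposition (our simplification, not the
# paper's exact model):
#   y[i, j] = mu + item[i] + annot[j] + eps[i, j]
mu = 0.0
item = rng.normal(0, 0.8, n_items)            # item effects (difficulty shows up as item-level spread)
annot = rng.normal(0, 0.3, n_annot)           # annotator bias: systematic per-coder offsets
eps = rng.normal(0, 0.5, (n_items, n_annot))  # situational noise
y = mu + item[:, None] + annot[None, :] + eps

# Method-of-moments variance decomposition for a balanced crossed
# design without replication (two-way ANOVA expected mean squares).
grand = y.mean()
row_mean = y.mean(axis=1)   # per-item means
col_mean = y.mean(axis=0)   # per-annotator means

ms_item = n_annot * np.sum((row_mean - grand) ** 2) / (n_items - 1)
ms_annot = n_items * np.sum((col_mean - grand) ** 2) / (n_annot - 1)
resid = y - row_mean[:, None] - col_mean[None, :] + grand
ms_err = np.sum(resid ** 2) / ((n_items - 1) * (n_annot - 1))

var_noise = ms_err
var_item = max((ms_item - ms_err) / n_annot, 0.0)
var_annot = max((ms_annot - ms_err) / n_items, 0.0)

print(f"item variance      ~ {var_item:.3f}  (simulated: 0.64)")
print(f"annotator variance ~ {var_annot:.3f}  (simulated: 0.09)")
print(f"noise variance     ~ {var_noise:.3f}  (simulated: 0.25)")
```

On balanced data this decomposition recovers the simulated variances; real annotation data would additionally leave structured item-by-annotator interaction in the residual, which is where a relational-alignment component would show up.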
Why it matters
Human labeling introduces systematic variation that can obscure what ML models actually learn. By decomposing that variation into interpretable components, the framework gives a deeper understanding of data quality, supports more robust data-centric machine learning, and points toward a more systematic science of labeling.
Original Abstract
Supervised machine learning assumes that labeled data provide accurate measurements of the concepts models are meant to learn. Yet in practice, human labeling introduces systematic variation arising from ambiguous items, divergent interpretations, and simple mistakes. Machine learning research commonly treats all disagreement as noise, which obscures these distinctions and limits our understanding of what models actually learn. This paper reframes annotation as a measurement process and introduces a statistical framework for decomposing labeling outcomes into interpretable sources of variation: instance difficulty, annotator bias, situational noise, and relational alignment. The framework extends classical measurement-error models to accommodate both shared and individualized notions of truth, reflecting traditional and human label variation interpretations of error, and provides a diagnostic for assessing which regime better characterizes a given task. Applying the proposed model to a multi-annotator natural language inference dataset, we find empirical evidence for all four theorized components and demonstrate the effectiveness of our approach. We conclude with implications for data-centric machine learning and outline how this approach can guide the development of a more systematic science of labeling.
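One way to read the abstract's contrast between shared and individualized truth is the following hedged formalization; the notation and model forms are ours, assumed for illustration, not taken from the paper.

```latex
% Shared-truth regime: one latent value T_i per item; annotator j
% contributes a systematic bias b_j, and situational noise e_{ij}
% has an item-specific variance reflecting instance difficulty.
% (Notation assumed for illustration.)
y_{ij} = T_i + b_j + e_{ij}, \qquad e_{ij} \sim \mathcal{N}(0,\, \sigma_i^2)

% Individualized-truth regime: each annotator measures their own
% target T_{ij}; the item-by-annotator term a_{ij} is read as
% relational alignment (legitimate divergence), not error.
y_{ij} = T_{ij} + e_{ij}, \qquad T_{ij} = T_i + a_{ij}
```

Under this reading, a regime diagnostic can ask whether the interaction term $a_{ij}$ behaves like structured, annotator-specific signal (favoring individualized truth) or like exchangeable noise (favoring shared truth).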