ArXiv TLDR

From Model Uncertainty to Human Attention: Localization-Aware Visual Cues for Scalable Annotation Review

arXiv: 2605.12303

Moussa Kassem Sbeyti, Joshua Holstein, Philipp Spitzer, Nadja Klein, Gerhard Satzger

cs.HC · cs.CV · cs.LG

TLDR

This paper introduces visual cues for spatial uncertainty in AI-assisted annotation, improving label quality and speed by guiding human attention.

Key contributions

  • Addresses mislocalization in AI-assisted annotation where models are confident but spatially inaccurate.
  • Proposes visualizing spatial uncertainty via a purpose-built interface for annotators.
  • A controlled study with 120 participants shows that uncertainty cues lead to higher label quality and faster annotation.
  • Cues effectively redirect human effort towards high-uncertainty predictions.

Why it matters

AI-assisted annotation often fails to signal spatial errors, leading to overlooked mislocalizations. This work provides a practical method to improve data quality and efficiency by leveraging model uncertainty to guide human attention. It establishes localization uncertainty as a crucial factor for better human-in-the-loop systems.

Original Abstract

High-quality labeled data is essential for training robust machine learning models, yet obtaining annotations at scale remains expensive. AI-assisted annotation has therefore become standard in large-scale labeling workflows. However, in tasks where model predictions carry two independent components, a class label and spatial boundaries, a model may classify an object with high confidence while mislocalizing it. Existing AI-assisted workflows offer annotators no signal about where spatial errors are most likely. Without such guidance, humans may systematically underinspect subtly misplaced boxes. We address this by studying the effect of visualizing spatial uncertainty via a purpose-built interface. In a controlled study with 120 participants, those receiving uncertainty cues achieve higher label quality while being faster overall. A box-level analysis confirms that the cues redirect annotator effort toward high-uncertainty predictions and away from well-localized boxes. These findings establish localization uncertainty as a lever to improve human-in-the-loop annotation. Code is available at https://mos-ks.github.io/MUHA/.
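The core idea — flagging predictions by localization uncertainty rather than class confidence so annotators inspect the right boxes — can be sketched roughly as below. This is a hypothetical illustration, not the authors' implementation: the `Prediction` record, the `loc_unc` score (e.g., derived from an ensemble's box-coordinate variance), and the threshold are all assumed for the example.

```python
from dataclasses import dataclass

# Hypothetical prediction record: class confidence and a separate
# localization-uncertainty score are independent quantities.
@dataclass
class Prediction:
    box: tuple          # (x1, y1, x2, y2)
    label: str
    class_conf: float   # classification confidence in [0, 1]
    loc_unc: float      # spatial/localization uncertainty in [0, 1] (assumed score)

def triage_for_review(preds, unc_threshold=0.5):
    """Split predictions so annotator effort goes to spatially
    uncertain boxes first, regardless of class confidence."""
    flagged = [p for p in preds if p.loc_unc >= unc_threshold]
    accepted = [p for p in preds if p.loc_unc < unc_threshold]
    # Highest spatial uncertainty first: these would receive the visual cue.
    flagged.sort(key=lambda p: p.loc_unc, reverse=True)
    return flagged, accepted

preds = [
    Prediction((10, 10, 50, 50), "car", 0.97, 0.82),  # confident class, shaky box
    Prediction((60, 20, 90, 70), "bus", 0.91, 0.12),  # well-localized box
]
flagged, accepted = triage_for_review(preds)
```

Note how the high-confidence "car" is still flagged: class confidence alone would have hidden its spatial error, which is exactly the failure mode the paper targets.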
