ArXiv TLDR

Accurate and Efficient Statistical Testing for Word Semantic Breadth

arXiv: 2605.08048

Yo Ehara

cs.CL

TLDR

This paper introduces a Householder-aligned permutation test for accurately comparing the semantic breadth of word types, reducing Type-I error by 32.5% and achieving a 23x GPU speedup.

Key contributions

  • Identifies Type-I error inflation in naive semantic breadth comparisons due to directional differences.
  • Proposes a Householder-aligned permutation test to isolate true dispersion differences in word meaning.
  • Achieves calibrated, non-parametric p-values by aligning mean directions of word types.
  • A GPU-optimized implementation that batches permutations yields a 23x speedup over the CPU baseline, while the alignment cuts Type-I error by 32.5%.

Why it matters

Accurate measurement of word semantic breadth is crucial for thesauri and dictionary construction. This method provides a robust statistical tool to compare word meanings, ensuring genuine differences are detected without false positives. Its efficiency makes it practical for large-scale linguistic analysis.

Original Abstract

Measuring the breadth of a word's meaning, or its spread across contexts, has become feasible with contextualized token embeddings. A word type can be represented as a cloud of token vectors, with dispersion-based statistics serving as proxies for contextual diversity (Nagata and Tanaka-Ishii, ACL2025). These measurements are useful for deciding appropriate sense distinctions when constructing thesauri and domain-specific dictionaries. However, when comparing the breadth of two word types, naive hypothesis testing on dispersion can be misleading: differences in semantic direction can masquerade as dispersion differences, inflating Type-I error and yielding "statistically significant" outcomes even when there is no true breadth difference. This is problematic because significance testing should distinguish genuine effects from incidental fluctuations in small-difference regimes. We propose a Householder-aligned permutation test to isolate dispersion differences from directional differences. Our method applies a single Householder reflection to align the mean directions of the two word types and then performs a permutation test on the aligned token clouds, yielding calibrated, non-parametric p-values. For practicality, we introduce a GPU-oriented implementation that batches permutations and linear algebra operations. Empirically, our alignment reduced Type-I error by 32.5% while preserving sensitivity to genuine breadth differences, and achieved a 23x speedup over the CPU baseline.
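The abstract's procedure — reflect one token cloud so its mean direction matches the other's, then run a permutation test on a dispersion statistic — can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the dispersion proxy (mean squared distance to the centroid), the permutation count, and all function names are assumptions for the sketch.

```python
import numpy as np

def householder_align(X, Y):
    """Reflect Y's token cloud so its mean direction matches X's.

    Uses the Householder reflection H = I - 2ww^T with w proportional
    to (u - v), which maps unit vector v onto unit vector u.
    """
    u = X.mean(axis=0)
    v = Y.mean(axis=0)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    w = u - v
    norm_w = np.linalg.norm(w)
    if norm_w < 1e-12:          # mean directions already coincide
        return Y
    w = w / norm_w
    # Apply H to every row of Y: HY^T = Y - 2 (Yw) w^T
    return Y - 2.0 * (Y @ w)[:, None] * w[None, :]

def dispersion(Z):
    """Mean squared distance of tokens from their centroid (a dispersion proxy)."""
    return ((Z - Z.mean(axis=0)) ** 2).sum(axis=1).mean()

def permutation_test(X, Y, n_perm=2000, seed=0):
    """Permutation p-value for a dispersion difference after alignment."""
    rng = np.random.default_rng(seed)
    Y = householder_align(X, Y)
    observed = abs(dispersion(X) - dispersion(Y))
    pooled = np.vstack([X, Y])
    n = len(X)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        A, B = pooled[perm[:n]], pooled[perm[n:]]
        exceed += abs(dispersion(A) - dispersion(B)) >= observed
    return (exceed + 1) / (n_perm + 1)
```

After the reflection, any remaining difference in the dispersion statistic cannot be attributed to the two clouds pointing in different mean directions, which is the Type-I error source the paper targets. The paper's GPU version would batch the permutations and the linear algebra rather than loop as above.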
