ArXiv TLDR

The Surprising Universality of LLM Outputs: A Real-Time Verification Primitive

arXiv:2604.25634

Alex Bogdan, Adrian de Valois-Franklin

cs.CR, cs.CL

TLDR

LLM outputs exhibit a universal statistical pattern, enabling a CPU-only verification primitive that is 100,000x faster than current methods.

Key contributions

  • LLM token rank-frequency distributions converge to a universal two-parameter Mandelbrot distribution.
  • This enables a CPU-only scoring primitive, 100,000x faster than existing methods, for real-time verification.
  • Allows statistical model fingerprinting to verify text provenance without watermarks or internal model access.
  • Provides a model-agnostic reference for black-box output assessment, identifying lexical anomalies and unsupported entities.
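The core fit behind these contributions is the two-parameter Mandelbrot (Zipf-Mandelbrot) law, f(r) ∝ 1/(r + q)^s, applied to a text's token rank-frequency curve. The paper does not publish its estimator, so the sketch below is an illustrative assumption: a simple least-squares grid search in log-log space, with the function name, grids, and R² computation all chosen here for clarity rather than taken from the paper.

```python
import numpy as np
from collections import Counter

def mandelbrot_fit(tokens,
                   q_grid=np.linspace(0.1, 6.0, 60),
                   s_grid=np.linspace(0.5, 2.5, 41)):
    """Fit a Mandelbrot law f(r) ~ C / (r + q)**s to the token
    rank-frequency curve by least squares in log-log space.
    (Illustrative grid-search estimator, not the paper's method.)"""
    # Empirical rank-frequency curve: most frequent token gets rank 1.
    counts = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(counts) + 1)
    log_f = np.log(counts / counts.sum())

    best = (None, None, np.inf)
    for q in q_grid:
        x = np.log(ranks + q)
        for s in s_grid:
            # Under the model, log f + s*log(r+q) is a constant (log C);
            # its residual variance is the fit error for this (q, s).
            resid = log_f + s * x
            sse = ((resid - resid.mean()) ** 2).sum()
            if sse < best[2]:
                best = (q, s, sse)

    q, s, sse = best
    r2 = 1.0 - sse / ((log_f - log_f.mean()) ** 2).sum()
    return q, s, r2
```

On synthetic tokens sampled from a Mandelbrot law, this recovers the exponent and yields the high R² values the paper reports for real model outputs; setting q = 0 reduces the same fit to plain Zipf, which is the comparison the AIC result in the abstract refers to.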

Why it matters

This paper reveals a fundamental statistical property of LLM outputs and turns it into an extremely fast, resource-efficient verification primitive. It offers a novel way to fingerprint models and to triage output quality, making the first pass of LLM evaluation cheap enough to run in real time on a CPU.

Original Abstract

We report a striking statistical regularity in frontier LLM outputs that enables a CPU-only scoring primitive running at 2.6 microseconds per token, with estimated latency up to 100,000$\times$ (five orders of magnitude) below existing sampling-based detectors. Across six contemporary models from five independent vendors, two generation sizes, and five held-out domains, token rank-frequency distributions converge to the same two-parameter Mandelbrot ranking distribution, with 34 of 36 model-by-domain fits exceeding $R^{2} = 0.94$ and 35 of 36 favoring Mandelbrot over Zipf by AIC. The shared family does not collapse the models into statistical duplicates. Fitted Mandelbrot parameters remain cleanly separable between models: the cross-model spread in $q$ (1.63 to 3.69) exceeds its per-model bootstrap standard deviation (0.03 to 0.10) by more than an order of magnitude, yielding tens of standard deviations of separation per few thousand output tokens. Two capabilities follow. First, statistical model fingerprinting: text from a vendor-delivered LLM can be tested against its claimed model family without cryptographic watermarks or access to model internals, supporting provenance verification and silent-substitution audits. Second, a model-agnostic reference distribution for black-box output assessment, from which we derive a single-pass scoring primitive that composes with model log probabilities when available and degrades to a rank-only mode usable on closed APIs. Pilot results on FRANK, TruthfulQA, and HaluEval map where the primitive helps (lexical anomalies, unsupported entities) and where it structurally cannot (reasoning errors in domain-appropriate vocabulary). We position the primitive as a first-pass triage layer in compound evaluation stacks, not as a replacement for sampling-based or source-conditioned verifiers.
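The fingerprinting capability follows directly from the numbers in the abstract: the cross-model spread in q (1.63 to 3.69) is more than an order of magnitude larger than each model's bootstrap standard deviation (0.03 to 0.10). A hypothetical acceptance test built on that separation might look like the sketch below; the function name, the z-score formulation, and the threshold are all assumptions, not the paper's procedure.

```python
def fingerprint_match(q_observed: float,
                      q_claimed: float,
                      sd_claimed: float,
                      threshold: float = 5.0) -> bool:
    """Hypothetical provenance check: accept the claimed model if the
    fitted Mandelbrot q of the delivered text lies within `threshold`
    bootstrap standard deviations of the claimed model's reference q."""
    z = abs(q_observed - q_claimed) / sd_claimed
    return z <= threshold
```

Using the abstract's figures, text fitted at q = 1.66 against a claimed reference of q = 1.63 (SD 0.05) passes, while a silent substitution by a model at the other end of the range (q = 3.69) sits roughly 40 standard deviations away and is rejected.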
