ArXiv TLDR

The signal is the ceiling: Measurement limits of LLM-predicted experience ratings from open-ended survey text

arXiv:2604.19645

Andrew Hong, Jason Potteiger, Luis E. Zapata

cs.CL

TLDR

LLM performance in predicting experience ratings from survey text is limited more by the linguistic character of the input text than by prompt design or model choice.

Key contributions

  • Prompt customization improved GPT 4.1 accuracy by two percentage points (from 67% to 69%) for experience rating prediction.
  • Swapping models (GPT 5.2, GPT 4.1-mini) degraded performance relative to the optimized GPT 4.1 configuration.
  • The linguistic character of the input text affected accuracy by more than an order of magnitude more than the choice of prompt or model.
  • Identifies two performance ceilings: a model reading bias (correctable by prompt design) and information that is simply absent from the text (not correctable by any engineering).

Why it matters

This paper clarifies the specific, limited impact of prompt engineering and model selection on LLM performance for subjective text analysis. It shows that limitations inherent in the input data are the primary bottleneck, pointing future research toward the input text itself rather than further prompt or model tuning.

Original Abstract

An earlier paper (Hong, Potteiger, and Zapata 2026) established that an unoptimized GPT 4.1 prompt predicts fan-reported experience ratings within one point 67% of the time from open-ended survey text. This paper tests the relative impact of prompt design and model selection on that performance. We compared four configurations on approximately 10,000 post-game surveys from five MLB teams: the original baseline prompt and a moderately customized version, crossed with three GPT models (4.1, 4.1-mini, 5.2). Prompt customization added roughly two percentage points of within +/-1 agreement on GPT 4.1 (from 67% to 69%). Both model swaps from that best configuration degraded performance: GPT 5.2 returned to the baseline, and GPT 4.1-mini fell six percentage points below it. Both levers combined were dwarfed by the input itself: across capable configurations, accuracy varied more than an order of magnitude more by the linguistic character of the text than by the choice of prompt or model. The ceiling has two parts. One is a bias in how the model reads text, which prompt design can correct. The other is a difference between what fans write about and what they actually decide, which no engineering can close because the missing information is not in the text. Prompt customization moved the first part; model selection moved neither reliably. The result is not that "prompt engineering helps a little" but that prompt engineering helps in a specific and predictable way, on the part of the ceiling it can reach.
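The headline metric throughout the abstract is "within +/-1 agreement": the share of LLM-predicted ratings that land within one point of the fan-reported rating. A minimal sketch of how that metric could be computed (the function name and data are illustrative assumptions, not from the paper):

```python
def within_one_agreement(predicted, actual):
    """Fraction of predictions within one point of the reported rating.

    Hypothetical helper illustrating the paper's "within +/-1" metric;
    `predicted` and `actual` are equal-length sequences of numeric ratings.
    """
    pairs = list(zip(predicted, actual))
    hits = sum(1 for p, a in pairs if abs(p - a) <= 1)
    return hits / len(pairs)


# Toy example: 2 of 3 predictions fall within one point of the true rating.
score = within_one_agreement([5, 3, 1], [4, 3, 3])
```

Under this definition, the paper's reported gain from prompt customization (67% to 69%) corresponds to roughly 200 additional surveys out of 10,000 landing within one point.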
