Who Saw It Coming? Historical Experience and the 2021 Inflation Forecast Failure
TLDR
2021 US inflation forecasts failed due to sample composition, not model misspecification; historical data adjustments and experience-based priors improve accuracy.
Key contributions
- Sample composition, not model misspecification, caused 2021 US inflation forecast failure.
- Historically informed adjustments (e.g., 1970s data re-estimation) substantially close the forecast gap.
- Survey respondents over 60, whose lifetimes include the 1970s, reported higher inflation expectations, consistent with experience-based learning.
- LLMs conditioned on "experienced" personas generated better forecasts, highlighting the importance of priors over model sophistication.
Why it matters
This paper highlights the critical role of historical context and experiential priors in economic forecasting, especially during regime shifts. It suggests that relying solely on recent data can lead to significant forecast errors. Understanding these biases is crucial for improving future economic models and policy decisions.
Original Abstract
This paper studies the 2021 U.S. inflation forecasting failure. I show that the failure was primarily driven by sample composition rather than functional-form misspecification: estimation samples dominated by the Great Moderation underweight supply-shock regimes, and expectations anchored to that regime were slow to recognize the shift. Three historically informed adjustments (an intercept correction, a similarity re-estimation on 1970s data, and a kernel-weighted estimator) substantially close the forecast gap, and the gains extend to eight additional U.S. price indices. Household survey respondents over 60, whose lifetime includes the 1970s, reported higher inflation expectations from early 2021, consistent with experience-based learning; younger cohorts remained anchored to the prevailing regime. A controlled experiment with large language models conditioned on "experienced" and "young" professional personas confirms that experiential priors generate significant forecast differences under a common training leakage assumption. Across all three exercises, the source of the prior mattered more than the sophistication of the model.
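To make the "kernel-weighted estimator" idea concrete, here is a minimal sketch of kernel-weighted least squares in which each historical observation is weighted by a Gaussian kernel measuring its similarity to the forecast-time regime, so shock-era data can dominate even in a sample mostly drawn from calm periods. This is an illustration of the general technique, not the paper's actual estimator; the variable names, the toy regime variable, and all numbers below are assumptions for demonstration.

```python
import math
import random

random.seed(0)

# Toy data: a regime "state" in [0, 1] (think 0 = Great Moderation,
# 1 = supply-shock regime) and inflation that loads on that state.
# Both the data-generating process and coefficients are made up.
n = 400
state = [random.uniform(0.0, 1.0) for _ in range(n)]
infl = [2.0 + 5.0 * s + random.gauss(0.0, 0.5) for s in state]

def kernel_weighted_fit(x, y, target, bandwidth=0.25):
    """Weighted least squares with Gaussian-kernel weights around `target`.

    Observations whose regime state is close to `target` get weight near 1;
    dissimilar observations are smoothly downweighted.
    """
    w = [math.exp(-0.5 * ((xi - target) / bandwidth) ** 2) for xi in x]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    return ybar - slope * xbar, slope  # (intercept, slope)

# Re-estimate with weight concentrated on shock-like observations
# (state near 1), rather than letting the calm-regime bulk of the
# sample dominate the fit.
b0, b1 = kernel_weighted_fit(state, infl, target=1.0)
print(round(b0, 2), round(b1, 2))
```

The same weighting logic underlies similarity re-estimation more broadly: the choice of bandwidth controls how sharply the estimator privileges historically similar episodes over the full sample.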