Self-normalized tests for multistep conditional predictive ability
TLDR
This paper introduces self-normalized tests for multistep conditional predictive ability, avoiding complex covariance matrix estimation.
Key contributions
- Proposes self-normalized tests for multistep conditional predictive ability, built on functionals of the CUSUM process of the transformed loss differential.
- Avoids direct estimation of the long-run covariance matrix, simplifying the testing process.
- Eliminates the need for ad hoc bandwidth, kernel, and lag-truncation choices.
- Mitigates finite-sample size distortions common in traditional HAC methods.
Why it matters
This research offers a more robust and user-friendly method for comparing forecasts, especially in complex multistep scenarios. By removing the need for arbitrary parameter choices, it improves test accuracy and reliability, making it easier for practitioners to evaluate predictive models.
Original Abstract
This paper proposes self-normalized tests for multistep conditional predictive ability in forecast comparison. By normalizing the sample mean of the transformed loss differential using functionals of its cumulative sum (CUSUM) process, specifically an adjusted-range normalizer for scalars and a matrix normalizer for vectors, our approach avoids direct estimation of the long-run covariance matrix. Consequently, it eliminates the need for the ad hoc bandwidth, kernel, and lag-truncation choices required by traditional methods. We establish the asymptotic theory for these statistics, deriving pivotal null limiting distributions and proving test consistency. Monte Carlo simulations show that the proposed tests effectively mitigate the finite-sample size distortions associated with traditional heteroskedasticity and autocorrelation consistent (HAC) methods, while retaining strong empirical power against conditional predictability alternatives.
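To illustrate the general self-normalization idea described in the abstract, here is a minimal sketch for the scalar case: the sample mean of a loss differential is normalized by the range of its demeaned CUSUM process rather than by a HAC long-run variance estimate. This is an illustrative sketch of the generic technique, not the paper's exact statistic; the function name and the specific normalizer scaling are assumptions, and in practice critical values for such statistics come from the (non-standard) pivotal limiting distribution.

```python
import numpy as np

def self_normalized_stat(d):
    """Illustrative self-normalized statistic for H0: E[d_t] = 0.

    Sketch of the general idea only (not the paper's exact test):
    the sample mean is normalized by the adjusted range of the
    demeaned CUSUM process, so no bandwidth, kernel, or
    lag-truncation choice is needed.
    """
    d = np.asarray(d, dtype=float)
    n = d.size
    dbar = d.mean()
    # Demeaned partial-sum (CUSUM) process: S_t - (t/n) * S_n
    cusum = np.cumsum(d) - (np.arange(1, n + 1) / n) * d.sum()
    # Range-based normalizer: max minus min of the CUSUM process
    rng = cusum.max() - cusum.min()
    # t-type ratio; its null limit is pivotal but non-Gaussian,
    # so critical values must be simulated
    return np.sqrt(n) * dbar / (rng / np.sqrt(n))
```

Note that the demeaned CUSUM process is invariant to location shifts in `d`, so a nonzero mean inflates only the numerator, which is what gives such tests power against the alternative.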