ArXiv TLDR

Quantifying the Risk-Return Tradeoff in Forecasting

arXiv: 2605.09712

Philippe Goulet Coulombe

econ.EM · q-fin.PM · stat.ML

TLDR

This paper introduces a framework that quantifies forecast reliability using risk-adjusted financial performance measures, revealing that professional forecasters are hard to beat on a risk-adjusted basis.

Key contributions

  • Treats forecast loss differentials as a return series for evaluation.
  • Applies financial risk-adjusted metrics (Sharpe, Sortino, Omega) to assess forecast reliability.
  • Introduces the 'Edge Ratio' to measure uniquely informative predictions.
  • Finds professional forecasters are hard to beat on a risk-adjusted basis despite lower average accuracy.
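The core move above can be sketched in a few lines: treat the period-by-period loss differentials against a benchmark as a "return" series, then score that series with financial risk-adjusted metrics. The snippet below is a minimal illustration under assumed conventions (squared-error loss, zero threshold for the Omega ratio); it is not the paper's exact implementation, and the Edge Ratio is omitted since its formula is not given here.

```python
import numpy as np

def loss_differentials(bench_errors, model_errors):
    """Per-period loss differentials under squared-error loss.
    Positive values mean the model beat the benchmark that period."""
    return np.asarray(bench_errors) ** 2 - np.asarray(model_errors) ** 2

def sharpe(d):
    # Mean "return" over its sample standard deviation.
    return d.mean() / d.std(ddof=1)

def sortino(d):
    # Penalizes only downside volatility (periods where the model lost).
    downside = np.minimum(d, 0.0)
    return d.mean() / np.sqrt(np.mean(downside ** 2))

def omega(d, threshold=0.0):
    # Ratio of total gains above the threshold to total losses below it.
    gains = np.clip(d - threshold, 0.0, None).sum()
    losses = np.clip(threshold - d, 0.0, None).sum()
    return gains / losses
```

A model with the same average accuracy as another can still look very different here: a few catastrophic periods inflate the downside term in the Sortino ratio and the loss mass in the Omega ratio, which is exactly the kind of unreliability plain average accuracy hides.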

Why it matters

This framework offers a more robust way to evaluate forecasting models beyond simple accuracy, emphasizing reliability and risk. It provides crucial insights into the value of professional judgment and helps identify models with attractive risk profiles for critical applications.

Original Abstract

Average forecast accuracy is not the same as forecast reliability. I treat forecast loss differentials relative to a benchmark as a return series. I then evaluate these returns using risk-adjusted performance measures from finance, including the Sharpe ratio, Sortino ratio, Omega ratio, and drawdown-based metrics. I also introduce the Edge Ratio capturing a model's propensity to deliver uniquely informative predictions relative to the forecasting frontier. I apply this framework to U.S. macroeconomic forecasting, comparing econometric benchmarks, machine learning models, a foundation model (TabPFN), and the Survey of Professional Forecasters. While it is often feasible to beat professional forecasters in terms of average accuracy, it is much harder to beat them on a risk-adjusted basis. They rarely exhibit catastrophic failures and often achieve high Edge Ratios, plausibly reflecting the value of contextual judgment. Nonetheless, selected machine learning methods deliver attractive risk profiles for specific targets. The framework naturally extends to meta-analyses across targets, horizons, and samples, illustrated with a density forecast evaluation and the M4 competition.
