Abstract

We develop forecast superiority tests that are robust to the choice of loss function by following Jin, Corradi and Swanson (JCS: 2017), and relying on a mapping between generic loss forecast evaluation and stochastic dominance principles. However, unlike JCS tests, which are not uniformly valid and are correctly sized only under the least favorable case, our tests are uniformly asymptotically valid and non-conservative. To show this, we establish uniform convergence of HAC variance estimators. Monte Carlo experiments indicate good finite sample performance of our tests, and an empirical illustration suggests that prior forecast accuracy matters in the Survey of Professional Forecasters.
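As context for the role HAC variance estimation plays in forecast comparison, the sketch below shows a standard Newey-West (Bartlett-kernel) long-run variance estimate applied to a loss differential, as in a Diebold-Mariano-style comparison. This is only an illustrative sketch of the generic ingredient, not the uniformly valid testing procedure developed in the paper; the function name, the rule-of-thumb bandwidth, and the simulated data are assumptions for the example.

```python
import numpy as np

def newey_west_lrv(d, bandwidth=None):
    """Newey-West (HAC) estimate of the long-run variance of a series d.

    Illustrative sketch only; bandwidth rule is a common default, not the
    paper's choice.
    """
    d = np.asarray(d, dtype=float)
    n = len(d)
    if bandwidth is None:
        # Common rule-of-thumb bandwidth (assumption for this example)
        bandwidth = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    u = d - d.mean()
    lrv = np.dot(u, u) / n  # lag-0 autocovariance
    for lag in range(1, bandwidth + 1):
        w = 1.0 - lag / (bandwidth + 1.0)       # Bartlett kernel weight
        gamma = np.dot(u[lag:], u[:-lag]) / n   # lag autocovariance
        lrv += 2.0 * w * gamma
    return lrv

# Example: t-type statistic for a simulated loss differential d_t
rng = np.random.default_rng(0)
d = rng.normal(0.1, 1.0, size=200)  # hypothetical loss differentials
stat = np.sqrt(len(d)) * d.mean() / np.sqrt(newey_west_lrv(d))
print(stat)
```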
