To compare the accuracy of two competing forecasts, under the assumption that the loss differentials are covariance stationary, Diebold and Mariano (DM) proposed a test statistic that is asymptotically normal. This test was further studied by Giacomini and White (GW). However, under the DM-GW framework, the long-run variance estimator used in the DM test can be inaccurate in small samples, which yields size distortions; see, for example, Coroneo and Iacone (CI). To alleviate this problem, we propose a maximum-subsampling (MS) test that does not require estimating the long-run variance, which depends on the full autocovariance structure of the loss differentials. Accordingly, the MS test is applicable for comparing the predictive ability of two competing forecasts even when the loss differentials are serially correlated with arbitrary autocovariance structures. We demonstrate that, under suitable conditions, the MS test converges in distribution to the type I extreme value (Gumbel) distribution under the DM-GW null hypothesis. We also show that the MS test is consistent under a set of alternative hypotheses. We further compare the MS test with the DM test and two CI tests in five simulation settings, modified or adapted from McCracken and from Coroneo and Iacone. Simulation results show that the MS test performs satisfactorily.
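For context, the abstract refers to, but does not reproduce, the DM statistic. The sketch below shows the classic DM test with a Bartlett-kernel (Newey-West) long-run variance estimator; the function name `dm_statistic` and the T^(1/3) bandwidth rule are illustrative conventions, not taken from the paper.

```python
import numpy as np

def dm_statistic(d, bandwidth=None):
    """Classic Diebold-Mariano statistic for loss differentials
    d_t = L(e1_t) - L(e2_t) between two competing forecasts.

    Uses a Bartlett-kernel (Newey-West) estimate of the long-run
    variance; the default bandwidth floor(T**(1/3)) is a common
    convention, not one prescribed by the paper.
    """
    d = np.asarray(d, dtype=float)
    T = d.size
    if bandwidth is None:
        bandwidth = int(np.floor(T ** (1 / 3)))
    d_bar = d.mean()
    u = d - d_bar
    # gamma(0) plus Bartlett-weighted autocovariances up to the bandwidth
    lrv = u @ u / T
    for k in range(1, bandwidth + 1):
        gamma_k = u[k:] @ u[:-k] / T
        lrv += 2.0 * (1.0 - k / (bandwidth + 1)) * gamma_k
    # Under the null of equal predictive ability, asymptotically N(0, 1)
    return d_bar / np.sqrt(lrv / T)
```

It is the small-sample inaccuracy of this long-run variance estimate (`lrv` above) that the abstract cites as the source of size distortions, and that the proposed MS test is designed to avoid.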