Abstract

The main problem in volatility forecasting is that the variable of interest is unobservable, which complicates not only the construction of forecasts but also their comparison. This article challenges the common practice of using only proxy-robust loss functions, which have the attractive property of producing the same ranking of forecasts regardless of whether the unobservable true volatility or some unbiased proxy is used. It is shown that two proxy-robust loss functions need not produce similar rankings and may even produce completely contradictory ones. Two likelihood-based loss functions are proposed instead; they are not exactly proxy-robust but are still robust in the classical sense. The first is based on a t-distribution and is intended for daily data. The second is based on an F-distribution and is intended for high-frequency data. In the latter case, the squared error loss function may also be used when a logarithmic transformation is applied to the realized variances in order to achieve approximate normality. An alternative transformation is proposed that allows adaptation to the degree of non-normality. The forecasting procedures compared by the different loss functions include GARCH, HAR, HARQ, and MIDAS models as well as nonparametric techniques. Finally, the economic relevance of choosing the right forecast is illustrated with the problem of establishing the intertemporal risk–return tradeoff. All theoretical arguments are supported by empirical evidence obtained from daily data as well as from high-frequency data.
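The claim that two proxy-robust loss functions can rank the same pair of forecasts in opposite ways is easy to demonstrate numerically. The sketch below uses MSE and QLIKE, two standard members of the proxy-robust class; the variance paths and forecasts are purely hypothetical numbers chosen for illustration, not data from the article.

```python
import numpy as np

def mse(h, sigma2):
    """Mean squared error of variance forecasts h against (a proxy of) sigma^2."""
    return np.mean((sigma2 - h) ** 2)

def qlike(h, sigma2):
    """QLIKE loss: sigma^2/h - log(sigma^2/h) - 1, averaged over time."""
    r = sigma2 / h
    return np.mean(r - np.log(r) - 1)

sigma2 = np.array([1.0, 4.0])  # true (or proxied) variances, hypothetical
h_a = np.array([2.0, 4.0])     # forecast A: large relative error in calm period
h_b = np.array([1.0, 6.0])     # forecast B: large absolute error in volatile period

# The two proxy-robust losses disagree: A wins under MSE, B wins under QLIKE.
print(mse(h_a, sigma2) < mse(h_b, sigma2))      # True
print(qlike(h_b, sigma2) < qlike(h_a, sigma2))  # True
```

The disagreement arises because MSE penalizes absolute errors, which loom largest in volatile periods, while QLIKE penalizes relative errors, which loom largest in calm periods, so the two losses can reward different forecasts.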