Abstract

Forecasts are pervasive in all areas of application in business and daily life. Hence, evaluating the accuracy of a forecast is important for both the generators and the consumers of forecasts. There are two aspects to forecast evaluation: (a) measuring the accuracy of past forecasts using summary statistics, and (b) testing the optimality properties of the forecasts through diagnostic tests. On measuring the accuracy of a past forecast, this paper illustrates that the summary statistic used should match the loss function that was used to generate the forecast. If there is strong evidence that an asymmetric loss function was used to generate a forecast, then a summary statistic that corresponds to that asymmetric loss function should be used in assessing the accuracy of the forecast, instead of the popular root mean square error or mean absolute error. On testing the optimality of the forecasts, it is demonstrated how quantile regressions set in the prediction–realization framework of Mincer and Zarnowitz (in J. Mincer (Ed.), Economic Forecasts and Expectations: Analysis of Forecasting Behavior and Performance (pp. 14–20), 1969) can be used to recover the unknown parameter that controls the potentially asymmetric loss function used in generating the past forecasts. Finally, the prediction–realization framework is applied to the Federal Reserve's economic growth forecasts and to forecast sharing in a PC manufacturing supply chain. It is found that the Federal Reserve treats overprediction as approximately 1.5 times as costly as underprediction, and that the PC manufacturer weighs positive forecast errors (under-forecasts) about four times as heavily as negative forecast errors (over-forecasts).
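As a rough illustration of the idea (not code from the paper), the sketch below assumes forecasts are optimal under a lin-lin (asymmetric piecewise-linear) loss with an unknown parameter tau, so a Mincer–Zarnowitz quantile regression of realizations on forecasts should yield intercept 0 and slope 1 at quantile tau. It simulates such forecasts, scans a grid of quantiles, and picks the one whose fitted coefficients are closest to (0, 1). All names, the grid, and the simulated data are illustrative assumptions.

```python
# Minimal sketch: recovering an assumed lin-lin asymmetry parameter via
# Mincer-Zarnowitz quantile regressions (illustrative, not the paper's code).
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg
from scipy.stats import norm

rng = np.random.default_rng(0)
n, tau_true = 500, 0.6                       # hypothetical asymmetry parameter

mu = rng.normal(size=n)                      # predictable component
y = mu + rng.normal(size=n)                  # realizations
f = mu + norm.ppf(tau_true)                  # optimal forecast = tau_true-quantile of y | mu

X = sm.add_constant(f)                       # regressors: [1, forecast]
taus = np.linspace(0.05, 0.95, 19)
dev = []
for tau in taus:
    res = QuantReg(y, X).fit(q=tau)
    a, b = res.params                        # intercept, slope at quantile tau
    dev.append(abs(a) + abs(b - 1.0))        # distance from the optimality values (0, 1)

tau_hat = taus[int(np.argmin(dev))]
print(f"recovered asymmetry parameter: {tau_hat:.2f} (true {tau_true})")
```

In this stylized setup, the quantile whose regression is closest to the 45-degree line through the origin is the estimate of the loss-asymmetry parameter; a formal treatment would test the joint restriction (intercept, slope) = (0, 1) rather than minimize an ad hoc distance.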
