Abstract
This paper extends the standard approach to combining forecasts by proposing weights based on ranking models by their forecast accuracy measures. These weights address the problems, identified in this study, with Akaike weights, equal weights, and the forecast from the 'best' model selected by the minimum AICc value. According to a selection criterion, five models were fitted to simulated datasets with two sample sizes, n = 25 and n = 200. The results revealed that the mean squared forecast error (MSFE) of the combined forecast under the proposed weights (the weighted ranking procedure) outperformed all other approaches investigated in this study. Furthermore, the three combined-forecast approaches consistently outperformed the forecast from the best model selected by the minimum AICc. Thus, we recommend the weighted ranking procedure for combining models.

Tropical Agricultural Research Vol. 26 (3): 486 – 496 (2015)
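The abstract describes weights obtained by ranking competing models on a forecast accuracy measure such as the MSFE. The paper's exact weighting formula is not given here, so the following is a minimal illustrative sketch: it assumes a simple rank-to-weight rule in which, among m models, the model ranked r (rank 1 = smallest error) receives weight (m - r + 1) divided by 1 + 2 + ... + m. The `rank_weights` and `combine` functions and all numbers are hypothetical.

```python
import numpy as np

def rank_weights(errors):
    """Weights from ranking models by a forecast-accuracy measure.

    errors: per-model error values (e.g. MSFE); smaller is better.
    Illustrative rule (an assumption, not necessarily the paper's
    formula): rank r among m models gets weight (m - r + 1) / sum(1..m),
    so better-ranked models receive larger weights.
    """
    errors = np.asarray(errors, dtype=float)
    m = len(errors)
    # rank 1 = smallest error (best model)
    ranks = np.empty(m, dtype=int)
    ranks[np.argsort(errors)] = np.arange(1, m + 1)
    return (m - ranks + 1) / (m * (m + 1) / 2)

def combine(forecasts, weights):
    # combined forecast = weighted average of the models' forecasts
    return float(np.dot(weights, forecasts))

# Five fitted models' MSFEs and one-step-ahead forecasts (made-up numbers)
msfe = [2.1, 1.4, 3.0, 1.1, 2.5]
w = rank_weights(msfe)
forecast = combine([10.2, 9.8, 11.0, 10.0, 10.5], w)
print(w, forecast)  # weights sum to 1; best model weighted most
```

Unlike Akaike weights, such rank-based weights depend only on relative forecast performance, and unlike equal weights they do not assume all models forecast equally well.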
Highlights
In time series analysis, one major interest is forecasting the future values of a series from a 'best' model
This study points out the problems associated with the Akaike weights, equal weights for combining forecasts, and the forecast from a single 'best' model selected by the minimum AICc value
To overcome these problems, we propose a weight whose estimation is based neither on information criteria nor on the assumption of equal forecast performance across models
Summary
In time series analysis, one major interest is forecasting the future values of a series from a 'best' model. To overcome the problems associated with Akaike weights, equal weights, and selection of a single 'best' model, we propose a weight whose estimation is based neither on information criteria nor on the assumption of equal forecast performance. Instead, the weight is based on ranking the forecast (predictive) performance of all competing models.