Abstract

Selecting an accurate performance metric is essential for evaluating the quality of a forecasting method. This evaluation can help choose between different forecasting tools or forecasting outputs, and thereby support many decisions within a company. This paper proposes to evaluate the sensitivity and reliability of forecast performance metrics. The methodology is tested on multiple time series of different scales and demand patterns, including intermittent demand. The idea is to add to each series noise following a known distribution, so as to represent forecasting models with a known error distribution. Varying the parameters of the noise distribution allows us to evaluate how sensitive and reliable performance metrics are to changes in the bias and variance of a forecasting model's error. The experiments conclude that sRMSE is more reliable than MASE in most cases on these series. sRMSE is especially reliable for detecting changes in the variance of a model, and sPIS is the metric most sensitive to the bias of a model. sAPIS is sensitive to both variance and bias but is less reliable.
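The noise-injection idea above can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: it assumes a common definition of sRMSE (RMSE scaled by the mean of the actuals) and the standard MASE (MAE scaled by the in-sample naive-forecast MAE); the series, noise parameters, and helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def srmse(actual, forecast):
    # RMSE scaled by the mean of the actuals (one common sRMSE variant)
    return np.sqrt(np.mean((forecast - actual) ** 2)) / np.mean(actual)

def mase(actual, forecast):
    # MAE scaled by the in-sample one-step naive forecast MAE
    naive_mae = np.mean(np.abs(np.diff(actual)))
    return np.mean(np.abs(forecast - actual)) / naive_mae

# A synthetic demand series; the "forecast" is the series plus Gaussian
# noise with known bias (mu) and standard deviation (sigma), mimicking
# a forecasting model with a known error distribution.
actual = rng.poisson(lam=20, size=200).astype(float)

for mu, sigma in [(0.0, 1.0), (2.0, 1.0), (0.0, 4.0)]:
    forecast = actual + rng.normal(mu, sigma, size=actual.shape)
    print(f"bias={mu}, sd={sigma}: "
          f"sRMSE={srmse(actual, forecast):.3f}, "
          f"MASE={mase(actual, forecast):.3f}")
```

Sweeping `mu` (bias) and `sigma` (variance) over a grid and recording each metric's response is the kind of sensitivity analysis the abstract describes.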
