Abstract

In univariate time series forecasting, models are typically updated at every review period. This practice, which includes specifying the optimal form of the model and estimating its parameters, theoretically allows the models to exploit new information and to respond quickly to possible structural breaks. We argue that such updates may be irrelevant in practice, while also unnecessarily increasing computational cost and forecast instability. Using two large data sets of monthly and daily series as well as an indicative family of conventional time series models, we investigate several model updating scenarios, ranging from complete model form specification and parameter estimation at every review period to no updating at all. We find that intermediate updating scenarios, including the re-estimation of specific parameters but not necessarily the specification of the model form, can result in similar or even better accuracy with significantly lower computational cost. We also show that similar conclusions hold true for popular machine learning methods, as well as for setups where different approaches are utilized for training the models or accelerating their specification and estimation. We discuss the implications of our findings for manufacturers, suppliers, and retailers and propose avenues for future advances in the area of model updating frequency.
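To illustrate the kind of updating scenarios compared above, the following sketch (not the paper's exact protocol, and using a synthetic series and simple exponential smoothing purely as an assumed example) contrasts re-estimating a model's parameters at every review period with estimating them once and thereafter only letting the model states absorb new observations.

```python
# Minimal sketch of two model-updating scenarios on a synthetic monthly series.
# Scenario A: re-estimate the smoothing parameter at every review period.
# Scenario B: estimate the parameter once, then keep it fixed and only update
#             the model states (level) with each new observation.
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(0)
y = 100 + np.cumsum(rng.normal(0, 2, 120))  # synthetic series, assumed for illustration

train_len = 60  # initial training window
errors_a, errors_b = [], []

# Estimate the smoothing parameter once for scenario B.
alpha_fixed = SimpleExpSmoothing(y[:train_len]).fit().params["smoothing_level"]

for t in range(train_len, len(y) - 1):
    history = y[: t + 1]

    # Scenario A: full parameter re-estimation at every review period (higher cost).
    fit_a = SimpleExpSmoothing(history).fit()

    # Scenario B: fixed parameter; states are refreshed by re-applying it to the history.
    fit_b = SimpleExpSmoothing(history).fit(smoothing_level=alpha_fixed, optimized=False)

    errors_a.append(abs(fit_a.forecast(1)[0] - y[t + 1]))
    errors_b.append(abs(fit_b.forecast(1)[0] - y[t + 1]))

print(f"MAE, re-estimate every period: {np.mean(errors_a):.3f}")
print(f"MAE, fixed parameter:          {np.mean(errors_b):.3f}")
```

In this toy setup the fixed-parameter scenario avoids repeated optimization while often forecasting comparably, which mirrors the trade-off between accuracy and computational cost discussed in the abstract.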
