Abstract

Time series forecasting is an important class of quantitative modeling used to predict future values from a series of past observations whose generation process is unknown. Two of the best-known methods for modeling linear time series are the autoregressive integrated moving average (ARIMA) and the autoregressive fractionally integrated moving average (ARFIMA). The number of past observations necessary for an accurate prediction may vary across datasets. Short- and long-memory dependency problems require different handling: the ARIMA model is limited to the former, whereas the ARFIMA model was developed specifically for the latter. Preprocessing techniques and modifications to specific components of these models are common approaches to handling the memory dependency problem and improving accuracy; however, such solutions tend to be specific to particular datasets. This paper proposes a new method that combines the short- and long-memory characteristics of the two aforementioned models in order to keep the accumulated error low across several different scenarios. Twelve public time series datasets were used to compare the performance of the proposed method with the original models. The results were also compared with two alternative methods from the literature that address datasets of different memory dependencies. The new approach achieved a lower error in the majority of the experiments, falling short only on datasets that contain a large number of features. © 2020 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
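To make the distinction between the two baseline models concrete, the following is a minimal sketch (not the authors' proposed method) of how one might fit a short-memory ARIMA with statsmodels and approximate the long-memory ARFIMA by applying truncated fractional differencing before an ARMA fit. The series, the model orders, and the fractional order d are illustrative assumptions, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))           # synthetic series standing in for a real dataset

# Short-memory baseline: ARIMA(p, d, q) with integer differencing d.
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=10)

def frac_diff_weights(d, n_lags):
    """Truncated binomial weights (-1)^k * C(d, k) used in fractional differencing."""
    w = [1.0]
    for k in range(1, n_lags):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

# Long-memory handling: fractionally difference with 0 < d < 0.5, then fit an
# ARMA on the filtered series, which approximates an ARFIMA(p, d, q).
d_frac = 0.3                                  # illustrative fractional order
w = frac_diff_weights(d_frac, n_lags=100)
y_frac = np.array([w[: t + 1][::-1] @ y[max(0, t - len(w) + 1): t + 1]
                   for t in range(len(y))])

arfima_like_fit = ARIMA(y_frac, order=(1, 0, 1)).fit()
arfima_like_forecast = arfima_like_fit.forecast(steps=10)
```

The integer-differenced ARIMA captures short-memory dependence, while the slowly decaying fractional weights let the second fit draw on a much longer history, which is the behavior the paper's combined method aims to balance.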
