Abstract

We introduce and analyze a simpler, practically useful variant of multivariate singular spectrum analysis (mSSA), a known time series method to impute (or de-noise) and forecast a multivariate time series. Towards this, we introduce a spatio-temporal factor model to analyze mSSA. This model includes the usual components used to model dynamics in time series analysis, such as trends (low order polynomials), seasonality (finite sum of harmonics), and linear time-invariant systems. We establish that given N time series and T observations per time series, the in-sample prediction error for both imputation and forecasting under mSSA scales as 1/√(min(N, T)·T). This is an improvement over: (i) the 1/√T error scaling of SSA, which is the restriction of mSSA to a univariate time series; (ii) the 1/min(N, T) error scaling of Temporal Regularized Matrix Factorization (TRMF), a matrix factorization based method for time series prediction. That is, mSSA exploits both the 'temporal' and 'spatial' structure in a multivariate time series. Our experimental results on various benchmark datasets confirm the characteristics of the spatio-temporal factor model and our theoretical findings: our variant of mSSA empirically performs as well as or better than neural network based time series methods, LSTM and DeepAR.
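To make the setting concrete, the sketch below illustrates a generic mSSA-style imputation/de-noising step: the N series are folded into a single stacked Page matrix and the top singular values are retained (hard singular value thresholding). This is an illustrative assumption of how such a variant can be organized, not the paper's exact algorithm; the window length L, rank k, and all function names are hypothetical choices.

# Illustrative sketch of an mSSA-style imputation/de-noising step (not the
# paper's exact variant): stack N time series into one Page matrix, then
# recover missing/noisy entries via hard singular value thresholding.
import numpy as np

def page_matrix(series: np.ndarray, L: int) -> np.ndarray:
    """Fold a length-T series (NaNs mark missing values) into an L x (T // L)
    Page matrix whose columns are consecutive, non-overlapping blocks."""
    T = (len(series) // L) * L
    return series[:T].reshape(-1, L).T  # shape (L, T // L)

def mssa_impute(X: np.ndarray, L: int = 10, k: int = 5) -> np.ndarray:
    """De-noise/impute an N x T multivariate series.

    1. Build the stacked Page matrix of all N series (columns concatenated).
    2. Mean-center observed entries and zero-fill missing ones.
    3. Keep the top-k singular values (hard singular value thresholding).
    4. Unfold back to an N x T array.
    """
    N, T = X.shape
    cols = T // L
    stacked = np.hstack([page_matrix(X[i], L) for i in range(N)])  # L x (N * cols)

    mask = ~np.isnan(stacked)
    mean = np.nanmean(stacked)
    filled = np.where(mask, stacked - mean, 0.0)  # zero-fill missing entries

    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    s[k:] = 0.0                                   # hard threshold to rank k
    denoised = (U * s) @ Vt + mean

    # Unfold each series' block of columns back into a length-(cols * L) row.
    out = np.full((N, T), np.nan)
    for i in range(N):
        block = denoised[:, i * cols:(i + 1) * cols]  # L x cols
        out[i, :cols * L] = block.T.reshape(-1)
    return out

# Usage example: 20 noisy sinusoids with roughly 30% of entries missing.
rng = np.random.default_rng(0)
t = np.arange(400)
X = np.sin(2 * np.pi * np.outer(rng.uniform(0.01, 0.05, 20), t)) \
    + 0.3 * rng.standard_normal((20, 400))
X[rng.random(X.shape) < 0.3] = np.nan
X_hat = mssa_impute(X, L=20, k=8)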
