Abstract

The rapid advance of scientific research in data mining has led to the adaptation of conventional pattern extraction methods to the context of time series analysis. The forecasting (or prediction) task has been supported mainly by regression algorithms based on artificial neural networks, support vector machines, and <i>k</i>-Nearest Neighbors (<i>k</i>NN). However, some studies have provided empirical evidence that similarity-based methods, <i>i.e.</i>, variations of <i>k</i>NN, constitute a promising approach compared with more complex predictive models from both machine learning and statistics. Although the scientific community has made great strides in increasing the visibility of these easy-to-fit and impressively accurate algorithms, previous work has failed to recognize the right invariances needed for this task. We propose a novel extension of <i>k</i>NN, namely <i>k</i>NN - Time Series Prediction with Invariances (<i>k</i>NN-TSPI), that differs from the literature by combining techniques to obtain amplitude and offset invariance, complexity invariance, and treatment of trivial matches. Our predictor enables more meaningful matches between reference queries and data subsequences. From a comprehensive evaluation with real-world datasets, we demonstrate that <i>k</i>NN-TSPI is competitive against two conventional similarity-based approaches and, most importantly, against 11 popular predictors. To assist future research and provide a better understanding of the behavior of similarity-based methods, we also explore different settings of <i>k</i>NN-TSPI regarding invariances to distortions in time series, distance measures, complexity-invariant distances, and ensemble functions. Results show that <i>k</i>NN-TSPI stands out for its robustness and stability, both with respect to the parameter <i>k</i> and to the accuracy of the predicted trends over the projection horizon.
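The invariances named above can be illustrated with a minimal similarity-based forecasting sketch. This is an assumption-laden simplification, not the authors' exact algorithm: `z_normalize` (amplitude and offset invariance via z-normalization), `cid` (a complexity-invariant distance in the style of Batista et al.), and `knn_tspi_forecast` (a one-step-ahead average over the <i>k</i> best matches, excluding subsequences that overlap the query as a crude treatment of trivial matches) are illustrative names and design choices introduced here for exposition.

```python
import numpy as np

def z_normalize(x):
    # Amplitude and offset invariance: rescale to zero mean, unit variance.
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def complexity(x):
    # Complexity estimate: length of the series "stretched" into a line,
    # i.e. the root sum of squared successive differences.
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(q, c):
    # Complexity-invariant distance: Euclidean distance scaled by the
    # ratio of the two complexity estimates (penalizes matching a simple
    # subsequence against a complex one).
    ed = np.linalg.norm(q - c)
    ce_q, ce_c = complexity(q), complexity(c)
    if min(ce_q, ce_c) == 0:
        return ed
    return ed * max(ce_q, ce_c) / min(ce_q, ce_c)

def knn_tspi_forecast(series, m, k, h=1):
    """One-step sketch: match the last m observations against earlier
    subsequences, skip those overlapping the query window (a simple
    proxy for trivial-match treatment), and average the observations
    following the k closest matches (mean as the ensemble function)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    query = z_normalize(series[-m:])
    candidates = []
    for start in range(n - m - h + 1):
        if start + m > n - m:  # subsequence would overlap the query
            break
        d = cid(query, z_normalize(series[start:start + m]))
        candidates.append((d, series[start + m + h - 1]))
    best = sorted(candidates, key=lambda t: t[0])[:k]
    return float(np.mean([v for _, v in best]))
```

On a strictly periodic series the closest matches are the in-phase subsequences, so the forecast reproduces the continuation of the cycle; on real data the z-normalization and complexity scaling are what keep level shifts and noisy segments from dominating the neighbor search.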
