Abstract

Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) are often deployed as neural network-based predictors for time series data. Recently, Hierarchical Temporal Memory (HTM), a machine learning technology that attempts to simulate the human neocortex, has been proposed as another approach to time series prediction. While HTM has attracted considerable attention, little is known about how it actually performs compared to the more common RNNs. The only existing performance comparison between the two, carried out at the company behind HTM, found that they perform similarly. In this article, we present a more in-depth performance comparison, involving more extensive hyperparameter tuning and evaluation on additional scenarios. Surprisingly, our results show that both LSTM and GRUs can outperform HTM by over 30% while requiring less runtime. Furthermore, we show that HTM requires explicitly timestamped data to recognize daily and weekly patterns, whereas LSTM predicts such time series accurately from the raw sequential data alone. Finally, our experiments indicate that the temporally aware components of the considered predictors do not contribute to prediction accuracy. We strengthen this claim by presenting Multilayer Perceptrons that are conceptually similar to HTM and LSTM but disregard their temporal aspects, and that perform equally well or better.
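
To make the kind of predictor the abstract describes concrete, here is a minimal sketch of a single-layer LSTM next-step predictor in PyTorch. This is not the paper's implementation: the hidden size, lookback window, and sine-wave toy data are all hypothetical stand-ins chosen for illustration.

```python
# Illustrative sketch only; hyperparameters and data are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

class LSTMPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # Univariate input, single LSTM layer, linear readout head.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                # x: (batch, lookback, 1)
        out, _ = self.lstm(x)            # out: (batch, lookback, hidden)
        return self.head(out[:, -1, :])  # predict the value following the window

# Toy training data: a sine wave standing in for a real time series.
t = torch.linspace(0, 100, 2000)
series = torch.sin(t)
lookback = 24
X = torch.stack([series[i:i + lookback]
                 for i in range(len(series) - lookback)]).unsqueeze(-1)
y = series[lookback:].unsqueeze(-1)

model = LSTMPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

A GRU variant of this sketch would only swap `nn.LSTM` for `nn.GRU`; note that the raw windowed values are the model's sole input, which is the sense in which LSTM needs no explicit timestamps.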
