Abstract

A good command of computational and statistical tools has proven advantageous when modelling and forecasting time series. According to recent literature, neural networks with long memory (e.g., Long Short-Term Memory) are a promising deep learning option. However, few works also consider the computational cost of these architectures compared with simpler ones (e.g., the Multilayer Perceptron). This work aims to provide insight into the memory performance of some Deep Neural Network architectures and into their computational complexity. A further goal is to evaluate whether choosing more complex architectures with higher computational costs is justified. Error metrics are used to assess the forecasting models' performance, and the computational cost of each model is measured. Two time series related to e-commerce retail sales in the US were selected: (i) sales volume and (ii) e-commerce sales as a percentage of total sales. Although both series show changes in their data dynamics, other characteristics lead to different conclusions: "long memory" allows for significantly better forecasts in one time series, but not in the other.
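To make the comparison concrete, the following is a minimal sketch of the kind of experiment the abstract describes: training an MLP and an LSTM on the same univariate series and recording both a forecast error metric (RMSE) and wall-clock training time as a simple proxy for computational cost. The synthetic series, window length, layer sizes, and epoch count are illustrative assumptions, not the data or hyperparameters used in the paper.

```python
# Hedged sketch: MLP vs. LSTM forecasters compared on accuracy and training time.
# All data and hyperparameters below are assumptions for illustration only.
import time
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window):
    """Slice a 1-D series into (input window, next value) supervised pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic stand-in for a sales series: trend + yearly seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(300, dtype=np.float32)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(300).astype(np.float32)

WINDOW = 12  # assumed lookback length
X, y = make_windows(series, WINDOW)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

def evaluate(model, X_tr, y_tr, X_te, y_te):
    """Train a model, then report held-out RMSE and wall-clock training time."""
    model.compile(optimizer="adam", loss="mse")
    start = time.perf_counter()
    model.fit(X_tr, y_tr, epochs=50, batch_size=32, verbose=0)
    elapsed = time.perf_counter() - start
    pred = model.predict(X_te, verbose=0).ravel()
    rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
    return rmse, elapsed

# Simpler architecture: an MLP over the flattened input window.
mlp = models.Sequential([
    layers.Input(shape=(WINDOW,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])

# "Long memory" architecture: an LSTM reading the window as a sequence.
lstm = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])

rmse_mlp, time_mlp = evaluate(mlp, X_tr, y_tr, X_te, y_te)
# The LSTM expects 3-D input (samples, timesteps, features), hence the added axis.
rmse_lstm, time_lstm = evaluate(lstm, X_tr[..., None], y_tr, X_te[..., None], y_te)
print(f"MLP : RMSE={rmse_mlp:.4f}  train time={time_mlp:.1f}s")
print(f"LSTM: RMSE={rmse_lstm:.4f}  train time={time_lstm:.1f}s")
```

Comparing the two printed lines mirrors the paper's trade-off question: the extra training time of the recurrent model is only justified when it delivers a correspondingly lower forecast error.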
