Abstract

Optimizing time-series forecasting performance is a multi-objective problem, which enables comparing the general applicability of methods across multiple use cases such as finance and demographics. Libra, a time-series forecasting framework that shifts the optimization problem from minimizing a single evaluation measure to minimizing multiple evaluation measures across use cases, is used as a benchmark to evaluate the performance of the Long Short-Term Memory (LSTM) neural network. LSTMs with parameter tuning have been shown to perform well on time-series forecasting. This paper applies LSTMs, mostly with standard parameters and variations of some of them, to the Libra framework and concludes that, due to the variance in data characteristics and without additional hardware and time, LSTMs do not outperform the median measures of Libra.
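To make the described setup concrete, the following is a minimal sketch of an LSTM one-step-ahead forecaster with mostly default (standard) hyperparameters, in the spirit of the abstract. It is not the paper's actual code; the window size, number of units, epochs, and the synthetic placeholder series are assumptions for illustration only.

```python
# Illustrative sketch only: a single-layer LSTM forecaster built with Keras,
# using mostly standard/default hyperparameters. The window size, unit count,
# epoch count, and placeholder data are assumptions, not the paper's values.
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Split a 1-D series into (input window, next value) training pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

series = np.sin(np.linspace(0, 40, 600))   # placeholder series, not benchmark data
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(X.shape[1], 1)),  # standard LSTM layer
    tf.keras.layers.Dense(1),                               # one-step-ahead output
])
model.compile(optimizer="adam", loss="mse")  # Keras defaults otherwise
model.fit(X, y, epochs=10, verbose=0)

forecast = model.predict(X[-1:], verbose=0)  # predict the next value
print(float(forecast[0, 0]))
```

Under the Libra setting, such a model would then be scored against multiple evaluation measures and use cases rather than a single loss.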
