Abstract

Long Short-Term Memory (LSTM) neural networks have great potential for predicting sequential data, and time series prediction is one of the most common applications of LSTM. To this end, various LSTM algorithms have been developed for time series prediction. However, few works consider hyperparameter optimization of LSTM together with parallelization approaches. To address this gap, a parallelized classic LSTM is proposed for time series prediction. In the preprocessing phase, missing values are first replaced with zeros and the time series matrix is then normalized. The transposed matrix is split into training and testing parts. A core-based parallelism is then established, using process forking to split prediction across multiple processes. Derivative-free optimization techniques are also analyzed to determine which hyperparameter optimization method is most suitable for a parallelized LSTM. Further, a comparison against state-of-the-art methods is included in the study. Experimental results show that training loss is lowest when using Nelder–Mead. The results also show that parallelization yields a remarkable reduction in CPU time for computation-intensive optimization methods such as genetic algorithms. Finally, the proposed algorithm outperforms the comparison methods with respect to prediction error.
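The abstract outlines a concrete pipeline: zero-fill missing values, normalize, transpose and split the series matrix, fork one worker process per core, and tune hyperparameters with a derivative-free method such as Nelder–Mead. The following is a minimal sketch of that pipeline under stated assumptions, not the authors' implementation; `preprocess`, `train_lstm`, and `parallel_loss` are hypothetical names, and a quadratic surrogate loss stands in for actual LSTM training (which in practice would fit a recurrent model, e.g. via Keras, inside the worker).

```python
# Sketch of the pipeline described in the abstract (assumptions noted in comments).
import multiprocessing as mp
import numpy as np
from scipy.optimize import minimize


def preprocess(series_matrix: np.ndarray, train_ratio: float = 0.8):
    """Replace missing values with zero, min-max normalize, transpose, split."""
    filled = np.nan_to_num(series_matrix, nan=0.0)        # missing values -> 0
    lo, hi = filled.min(), filled.max()
    normalized = (filled - lo) / (hi - lo + 1e-12)        # scale to [0, 1]
    transposed = normalized.T                             # each series becomes a row
    split = int(train_ratio * transposed.shape[1])
    return transposed[:, :split], transposed[:, split:]   # training, testing parts


def train_lstm(args):
    """Hypothetical worker: would train one LSTM on its slice of the data.

    A quadratic surrogate loss replaces real LSTM training so the sketch
    stays self-contained and runnable.
    """
    train_slice, learning_rate, hidden_units = args
    return (learning_rate - 0.01) ** 2 + (hidden_units - 64.0) ** 2 / 1e4


def parallel_loss(hparams, train_chunks):
    """Fork one process per chunk and average the per-process training losses."""
    lr, units = hparams
    jobs = [(chunk, lr, units) for chunk in train_chunks]
    # "fork" start method is POSIX-only; the paper's core-based parallelism is
    # assumed to map onto one worker process per chunk.
    with mp.get_context("fork").Pool(processes=len(jobs)) as pool:
        losses = pool.map(train_lstm, jobs)
    return float(np.mean(losses))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 4))                 # 200 time steps, 4 series
    data[rng.random(data.shape) < 0.05] = np.nan     # simulate missing values
    train, test = preprocess(data)

    chunks = np.array_split(train, max(mp.cpu_count() // 2, 1), axis=1)
    result = minimize(parallel_loss, x0=[0.05, 32.0], args=(chunks,),
                      method="Nelder-Mead")          # derivative-free tuning
    print("best (learning rate, hidden units):", result.x)
```

In this sketch the derivative-free optimizer (Nelder–Mead) sits outside the forked workers, while each worker evaluates the training loss on its own data chunk; whether the paper parallelizes across data chunks, hyperparameter candidates, or both is an assumption here.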
