Abstract
Long Short-Term Memory (LSTM) neural networks require careful hyperparameter settings to produce accurate predictions. This study aims to improve LSTM performance by using Particle Swarm Optimization (PSO) for hyperparameter selection. PSO can optimize the weights of the LSTM network, yielding minimal prediction error due to its feature selection and data normalization abilities. The study applies an LSTM architecture consisting of three layers (input, hidden, and output) with 7, 3, and 1 nodes, respectively. The results are evaluated by root mean squared error (RMSE) and processing time. PSO tuning obtained values for six hyperparameters: optimizer (RMSprop), activation function (ReLU), loss function (MAE), batch size (128), number of neurons (78), and number of epochs (62). The LSTM with PSO tuning achieved the lowest RMSE, 21.923, and the fastest processing time, 156 seconds, outperforming other LSTM models with randomly chosen parameters. The largest RMSE difference between the proposed approach and a baseline model is 7.621. In other words, PSO hyperparameter tuning efficiently reduces LSTM errors. These findings may serve as a recommendation for reducing the time-consuming effort of hyperparameter selection.
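To make the optimization step concrete, the following is a minimal sketch of a PSO search over a hyperparameter space. It is not the paper's implementation: training an actual LSTM inside the objective is replaced here by a hypothetical smooth surrogate function (`toy_rmse`) whose minimum merely stands in for the validation RMSE surface, and the function and bound values are illustrative assumptions.

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO minimizing `objective` over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize particle positions randomly within bounds; velocities at zero.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best position
    pbest_val = [objective(p) for p in pos]      # per-particle best value
    gbest_i = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[gbest_i][:], pbest_val[gbest_i]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical surrogate for validation RMSE as a function of three of the
# tuned hyperparameters (batch size, neuron count, epochs); its minimum is
# placed at the values reported in the abstract purely for illustration.
def toy_rmse(hp):
    batch, neurons, epochs = hp
    return (((batch - 128) / 64) ** 2
            + ((neurons - 78) / 40) ** 2
            + ((epochs - 62) / 30) ** 2)

best, best_val = pso(toy_rmse, bounds=[(16, 256), (8, 128), (10, 100)])
```

In the paper's setting, `objective` would train the 7-3-1 LSTM with the candidate hyperparameters and return its validation RMSE; the swarm then converges toward the combination with the smallest error.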