Abstract

Prediction is a means of forecasting a future value by analyzing historical or current data. The Recurrent Neural Network (RNN) is a popular neural network architecture for prediction models because of its wide applicability and high generalization performance. This study aims to improve the accuracy of an RNN prediction model using k-means clustering and PCA dimension reduction, comparing five distance functions. Data were processed in Python, and the PCA calculation yielded three new variables, or principal components, from the five original variables examined. The RNN prediction model was optimized with k-means clustering, comparing the Euclidean, Manhattan, Canberra, Average, and Chebyshev distance functions as similarity measures for grouping the data and avoiding entrapment in a locally optimal solution. In addition, PCA dimension reduction was applied to facilitate the multivariate data analysis. The k-means clustering showed that the Average distance function was the most optimal, producing a Davies-Bouldin Index (DBI) of 0.60855 and converging at the 9th iteration. The RNN prediction model was evaluated by its error measures, yielding an RMSE of 0.83 and a MAPE of 8.62%. Therefore, it was concluded that the k-means and PCA methods produce a more optimal prediction model for the RNN method.
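The pipeline described above (PCA to three components, k-means clustering, then an RNN forecaster evaluated with RMSE and MAPE) can be illustrated with a minimal Python sketch. This is not the authors' code: the synthetic data, window size, layer sizes, and training settings are illustrative assumptions, and scikit-learn's KMeans uses only the Euclidean distance, whereas the study also compares Manhattan, Canberra, Average, and Chebyshev distances.

```python
# Minimal sketch of a PCA -> k-means -> RNN forecasting pipeline.
# Assumptions: synthetic data, Euclidean k-means only, and a small Keras
# SimpleRNN; all hyperparameters here are illustrative, not from the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, SimpleRNN, Dense

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))                                  # 5 original variables
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=500)   # target series

# 1) PCA: reduce the five variables to three principal components.
X_pca = PCA(n_components=3).fit_transform(X)

# 2) k-means clustering on the reduced data (Euclidean distance here);
#    the cluster label is appended as an extra input feature.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_pca)
features = MinMaxScaler().fit_transform(np.column_stack([X_pca, clusters]))

# 3) Build sliding windows so the RNN learns from recent history.
window = 5
X_seq = np.stack([features[i:i + window] for i in range(len(features) - window)])
y_seq = y[window:]
split = int(0.8 * len(X_seq))
X_tr, X_te, y_tr, y_te = X_seq[:split], X_seq[split:], y_seq[:split], y_seq[split:]

# 4) Simple RNN regressor.
model = Sequential([
    Input(shape=(window, features.shape[1])),
    SimpleRNN(16),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)

# 5) Evaluate with RMSE and MAPE, the two metrics reported in the abstract.
pred = model.predict(X_te, verbose=0).ravel()
rmse = np.sqrt(mean_squared_error(y_te, pred))
mape = mean_absolute_percentage_error(y_te, pred) * 100
print(f"RMSE: {rmse:.2f}  MAPE: {mape:.2f}%")
```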
