Artificial Neural Networks (ANNs) are computational models, studied in computer science, that mimic the way the human brain processes data. ANNs can be used to classify, estimate, predict, or simulate new data from similar sources. The algorithm commonly used for prediction with ANNs is Backpropagation, which yields high accuracy but tends to train slowly and is prone to getting stuck in local minima. Addressing these issues requires appropriate parameters in the Backpropagation training process, such as a suitable learning function. The aim of this study is to evaluate and compare several learning functions within the Backpropagation algorithm and determine the best one for prediction tasks. The learning functions evaluated are Gradient Descent Backpropagation (traingd), Gradient Descent with Adaptive Learning Rate (traingda), and Gradient Descent with Momentum and Adaptive Learning Rate (traingdx). The dataset used is the average wholesale rice price in Indonesia, obtained from the website of the Central Statistics Agency (BPS). The evaluation results show that the traingdx learning function with a 5-5-1 architecture achieves the highest accuracy, 83.33%, an improvement of about 8.3 percentage points over the traingd and traingda learning functions, which both reached a maximum accuracy of 75%. Based on this study, it can be concluded that Backpropagation with alternative learning functions, in particular traingdx, yields better accuracy than standard gradient descent Backpropagation.
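The three learning functions named in the abstract correspond to training functions in MATLAB's Neural Network Toolbox. As a rough illustration of how they differ, the following is a minimal Python sketch of the three weight-update schemes on a toy least-squares problem; the loss function, adaptation rule, and hyperparameters are illustrative assumptions, not the study's implementation.

```python
# Sketch (not the authors' code) of the three update schemes compared in the study:
# plain gradient descent (traingd-style), gradient descent with an adaptive learning
# rate (traingda-style), and gradient descent with momentum plus an adaptive learning
# rate (traingdx-style). Objective, adaptation rule, and constants are assumptions.
import numpy as np

def loss(w, X, y):
    """Mean squared error of a linear model y ~ X @ w (stand-in for the network error)."""
    return np.mean((X @ w - y) ** 2)

def gradient(w, X, y):
    """Gradient of the mean squared error with respect to the weights."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(X, y, mode="gdx", epochs=200, lr=0.01,
          lr_inc=1.05, lr_dec=0.7, momentum=0.9):
    """mode: 'gd' (fixed lr), 'gda' (adaptive lr), 'gdx' (momentum + adaptive lr)."""
    w = np.zeros(X.shape[1])
    velocity = np.zeros_like(w)
    prev_loss = loss(w, X, y)
    for _ in range(epochs):
        g = gradient(w, X, y)
        if mode in ("gd", "gda"):
            step = -lr * g
        else:  # 'gdx': a momentum term smooths the update direction
            velocity = momentum * velocity - lr * g
            step = velocity
        w_new = w + step
        new_loss = loss(w_new, X, y)
        if mode in ("gda", "gdx"):
            # Simplified adaptation: grow the learning rate while the loss improves,
            # shrink it and reject the step when the loss gets worse.
            if new_loss < prev_loss:
                lr *= lr_inc
            else:
                lr *= lr_dec
                continue  # keep the previous weights and retry with a smaller lr
        w, prev_loss = w_new, new_loss
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))  # e.g. 5 lagged inputs, echoing the 5-5-1 architecture
    y = X @ np.array([0.4, -0.2, 0.1, 0.3, 0.05]) + rng.normal(scale=0.01, size=100)
    for mode in ("gd", "gda", "gdx"):
        w = train(X, y, mode=mode)
        print(mode, "final MSE:", round(loss(w, X, y), 5))
```

In this simplified form, the adaptive variants can take larger steps when training is progressing and back off when it overshoots, while the momentum term in the traingdx-style update helps the search roll past shallow local minima, which is consistent with the abstract's finding that traingdx reached the highest accuracy.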