Abstract

Optimization problems of a back-propagation neural network (BPNN) fall into two categories: finding an optimal network topology (including the number of hidden-layer neurons, the learning rate, and the momentum term) and finding optimal weights. This study focuses on the optimization of weights. The conventional BPNN uses the steepest descent method, a local optimization technique, to minimize an energy (cost) function and thereby determine the BPNN weights; consequently, a conventional BPNN cannot guarantee globally optimal weights. The advanced simulated annealing (ASA) algorithm is a stochastic global optimization method for minimizing a multi-dimensional objective function subject to boundary conditions. To overcome this drawback of the standard BPNN, this study optimizes the weights of the BPNN using the ASA algorithm. The performance of the proposed ASA-based BPNN (named ASA-BPNN) is evaluated on a benchmark chaotic time series problem, the Mackey-Glass time series. A comparison of experimental results shows that both the training and generalization accuracies of the ASA-BPNN are superior to those of the standard BPNN for this test case.
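As a rough illustration of the idea described above, the sketch below trains the weights of a small single-hidden-layer network for one-step-ahead Mackey-Glass prediction using simulated annealing instead of gradient descent. Everything here is an illustrative assumption rather than the paper's method: a plain Metropolis annealer with a geometric cooling schedule stands in for the ASA algorithm, and the network size, weight bounds, and hyperparameters are arbitrary.

```python
import numpy as np

# Illustrative sketch only: plain Metropolis simulated annealing over the
# weights of a one-hidden-layer network, standing in for the paper's ASA.
# All names, sizes, and hyperparameters are assumptions for this demo.

rng = np.random.default_rng(0)

def mackey_glass(n=1000, tau=17, beta=0.2, gamma=0.1, p=10):
    """Mackey-Glass series via a coarse Euler step (dt = 1); demo quality."""
    x = np.zeros(n + tau)
    x[:tau] = 1.2
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau] ** p) - gamma * x[t]
    return x[tau:]

def forward(w, X, n_in, n_hid):
    """One hidden layer with tanh units and a linear output unit."""
    W1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid : n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid : -1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y, n_in, n_hid):
    return np.mean((forward(w, X, n_in, n_hid) - y) ** 2)

# One-step-ahead task: predict series[t + n_in] from the n_in previous values.
series = mackey_glass()
n_in, n_hid = 4, 8
X = np.column_stack([series[i : i - n_in] for i in range(n_in)])
y = series[n_in:]

# Simulated annealing: Gaussian perturbations, Metropolis acceptance,
# geometric cooling, weights kept inside a bounded search box [-5, 5].
n_w = n_in * n_hid + n_hid + n_hid + 1
w = rng.uniform(-1, 1, n_w)
best_w, best_e = w.copy(), mse(w, X, y, n_in, n_hid)
T = 1.0
for step in range(20000):
    cand = np.clip(w + rng.normal(0, 0.1, n_w), -5, 5)
    e_old = mse(w, X, y, n_in, n_hid)
    e_new = mse(cand, X, y, n_in, n_hid)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if e_new < e_old or rng.random() < np.exp((e_old - e_new) / T):
        w = cand
        if e_new < best_e:
            best_w, best_e = cand.copy(), e_new
    T *= 0.9995  # geometric cooling schedule

print(f"best training MSE: {best_e:.5f}")
```

Because the annealer only needs objective values, not gradients, it can accept occasional uphill moves and escape the local minima that trap steepest descent; the trade-off is many more cost-function evaluations per weight update.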
