Abstract

This paper presents single-objective and multiobjective stochastic optimization algorithms, based on simulated annealing, for the global training of neural networks. The algorithms overcome the local-optimum limitation of conventional gradient-based training methods by performing global optimization of the network weights. In particular, the multiobjective training algorithm is designed to enhance the generalization capability of the trained networks by simultaneously minimizing the training error and the dynamic range of the network weights. To achieve fast convergence and good solution quality, we propose a hybrid simulated annealing algorithm that incorporates a gradient-based local optimization method. Experimental results show that networks trained by the proposed methods outperform those trained by the gradient-based local training algorithm, and that their generalization capability is significantly improved by the prevention of overfitting.
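To make the idea concrete, the following Python sketch shows one plausible shape of a hybrid simulated annealing loop with gradient-based local refinement; it is not the authors' implementation. The toy network, data, cooling schedule, and the scalarization of the two objectives (training error plus a weighted dynamic-range penalty) are all illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

# Hypothetical toy setup: a single-layer sigmoid network on synthetic data.
# The paper's actual architecture, cost weighting, and cooling schedule are
# not given in the abstract; every constant below is an assumption.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)

def forward(w, X):
    return 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid output

def cost(w, lam=0.01):
    err = np.mean((forward(w, X) - y) ** 2)    # training-error term
    dyn = np.max(np.abs(w))                    # dynamic range of the weights
    return err + lam * dyn                     # scalarized two-term objective

def grad_step(w, lr=0.1):
    # One gradient-descent step on the error term (local refinement).
    p = forward(w, X)
    g = X.T @ ((p - y) * p * (1 - p)) / len(y)
    return w - lr * g

def hybrid_sa(n_iter=5000, T0=1.0, alpha=0.999):
    w = rng.normal(scale=0.5, size=X.shape[1])
    best_w, best_c = w.copy(), cost(w)
    T = T0
    for _ in range(n_iter):
        # Global move: random perturbation, then a local gradient step
        # (the "hybrid" part), accepted by the Metropolis criterion.
        cand = grad_step(w + rng.normal(scale=T, size=w.shape))
        dc = cost(cand) - cost(w)
        if dc < 0 or rng.random() < np.exp(-dc / T):
            w = cand
            if cost(w) < best_c:
                best_w, best_c = w.copy(), cost(w)
        T *= alpha                             # geometric cooling (assumed)
    return best_w, best_c

w, c = hybrid_sa()
print(f"best cost: {c:.4f}")
```

Note that this sketch collapses the two objectives into a single weighted cost for brevity; the paper's multiobjective formulation minimizes the training error and the weight dynamic range as described in the abstract above.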
