Abstract

This paper presents a new technique for the robust training of feedforward neural networks. Conventional training algorithms based on gradient descent often become trapped in local minima. Recently, evolutionary algorithms have attracted attention for their global search ability, but they tend to be less accurate on the complicated task of training neural networks. The proposed technique hybridizes a local training algorithm based on the quasi-Newton method with a recent global optimization algorithm, particle swarm optimization (PSO), and attains stronger global convergence than conventional global optimization techniques. Neural network training on several benchmark problems demonstrates that the proposed algorithm achieves more accurate and robust training results than the quasi-Newton method and conventional PSO variants.
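
To make the hybrid scheme concrete, the sketch below shows one plausible instantiation, not the authors' exact algorithm: each PSO position update is followed by a few quasi-Newton (L-BFGS) refinement iterations on the training loss of a small network. The benchmark task, network size, and all hyper-parameters are illustrative assumptions.

```python
# Hypothetical sketch of a PSO / quasi-Newton hybrid (assumed details,
# not the paper's exact method): after each swarm update, every particle
# is locally refined with a few L-BFGS iterations on the network loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy single-hidden-layer network on the XOR benchmark (assumed example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
H = 3                            # hidden units
DIM = 2 * H + H + H + 1          # flattened weights and biases

def loss(w):
    W1, b1 = w[:2 * H].reshape(2, H), w[2 * H:3 * H]
    W2, b2 = w[3 * H:4 * H], w[4 * H]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

# --- PSO loop with quasi-Newton refinement of each particle ------------
N_PARTICLES, ITERS = 20, 30
w_inertia, c1, c2 = 0.7, 1.5, 1.5       # standard PSO coefficients (assumed)

pos = rng.uniform(-1, 1, (N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    for i in range(N_PARTICLES):
        r1, r2 = rng.random(DIM), rng.random(DIM)
        # Global exploration: velocity/position update toward personal/global bests.
        vel[i] = (w_inertia * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        # Local exploitation: a few quasi-Newton (L-BFGS) iterations.
        res = minimize(loss, pos[i], method="L-BFGS-B",
                       options={"maxiter": 5})
        pos[i] = res.x
        if res.fun < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), res.fun
    gbest = pbest[pbest_val.argmin()].copy()

print("best training loss:", pbest_val.min())
```

The division of labor mirrors the abstract's argument: the swarm supplies global search over weight space, while the quasi-Newton step supplies the local accuracy that plain evolutionary methods lack.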
