Abstract

Artificial neural networks are most often trained using backward error propagation (BEP), which works well for training problems whose error function has a single minimum. Although BEP has been successful in many applications, convergence can be seriously hindered by local minima and network paralysis. We describe a method for avoiding local minima by combining very fast simulated reannealing (VFSR) with BEP. While convergence to the best training weights can be slower than with gradient-descent methods, it is faster than with other simulated-annealing network training methods. More importantly, convergence to the optimal weight set is guaranteed. We demonstrate VFSR network training on a variety of test problems, such as the exclusive-or and parity problems, and compare the performance of VFSR network training with that of conjugate-gradient-trained backpropagation networks.
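The abstract only summarizes the method, but the two VFSR ingredients it relies on are well documented in Ingber's work: the exponential annealing schedule T(k) = T0 exp(-c k^(1/D)) over D parameters, and a temperature-dependent generating distribution that takes wide steps at high temperature and narrow steps as the system cools. The following Python sketch illustrates those ingredients on a stand-in error surface; the function names (vfsr_move, anneal), the Metropolis acceptance rule, the constants, and the toy error function are all illustrative assumptions, not the paper's implementation, which additionally interleaves these annealing moves with BEP gradient steps.

import numpy as np

def vfsr_move(w, k, T0=1.0, c=1.0, rng=None):
    # VFSR exponential schedule: T(k) = T0 * exp(-c * k**(1/D)),
    # D = number of weights; floored to avoid numerical underflow.
    rng = rng if rng is not None else np.random.default_rng()
    D = w.size
    T = max(T0 * np.exp(-c * k ** (1.0 / D)), 1e-12)
    # VFSR generating distribution: wide jumps at high T, narrow
    # jumps as T falls, sampled independently per weight.
    u = rng.random(D)
    y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
    return w + y, T

def anneal(error_fn, w, steps=5000, T0=1.0, c=1.0, seed=0):
    rng = np.random.default_rng(seed)
    e = error_fn(w)
    best_w, best_e = w.copy(), e
    for k in range(1, steps + 1):
        w_new, T = vfsr_move(w, k, T0, c, rng)
        e_new = error_fn(w_new)
        # Metropolis acceptance: downhill moves always, uphill
        # moves with probability exp(-dE / T).
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / T):
            w, e = w_new, e_new
            if e < best_e:
                best_w, best_e = w.copy(), e
    return best_w, best_e

# Toy usage: a 1-D error surface with several local minima stands in
# for a network's training error.
error = lambda w: float(np.sum(w ** 2 + 2.0 * np.sin(5.0 * w)))
w_star, e_star = anneal(error, np.array([3.0]), c=0.01)
print(w_star, e_star)

In this sketch, slow cooling (small c) trades speed for exploration, which mirrors the trade-off stated in the abstract: annealing converges more slowly than pure gradient descent but can escape the local minima that trap BEP.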
