Abstract

We introduce an optimization algorithm for training feedforward neural networks. The algorithm combines the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Hessian update formula with the dogleg method, a special case of trust-region techniques, as an alternative to line-search methods. Simulations on classification and function-approximation problems show a clear improvement in both convergence and success rates over standard BFGS implementations.
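
To make the combination concrete, the following is a minimal Python sketch (not the authors' implementation) of the two building blocks the abstract names: a dogleg step, which approximately solves the trust-region subproblem for the quadratic model built from the current gradient g and BFGS Hessian approximation B, and the standard BFGS update of B. The function names, the radius parameter delta, and the curvature tolerance are illustrative assumptions; the paper's actual radius-update and acceptance rules are not reproduced here.

    import numpy as np

    def dogleg_step(g, B, delta):
        """Illustrative dogleg step for the model m(p) = g.p + 0.5 p.B.p
        subject to ||p|| <= delta, where B is a BFGS Hessian approximation."""
        # Full quasi-Newton step: p_b = -B^{-1} g.
        p_b = -np.linalg.solve(B, g)
        if np.linalg.norm(p_b) <= delta:
            return p_b  # Quasi-Newton step lies inside the region: take it.

        # Unconstrained minimizer of the model along steepest descent (Cauchy step).
        p_u = -(g @ g) / (g @ B @ g) * g
        norm_pu = np.linalg.norm(p_u)
        if norm_pu >= delta:
            # Even the Cauchy step leaves the region: scale it to the boundary.
            return (delta / norm_pu) * p_u

        # Follow the dogleg path p_u + tau * (p_b - p_u) and find the tau
        # in [0, 1] where the path crosses the boundary ||p|| = delta.
        d = p_b - p_u
        a = d @ d
        b = 2.0 * (p_u @ d)
        c = p_u @ p_u - delta ** 2
        tau = (-b + np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)
        return p_u + tau * d

    def bfgs_update(B, s, y):
        """Standard BFGS update of B from step s = x_new - x_old and
        gradient change y = grad_new - grad_old; skipped when the
        curvature condition s.y > 0 fails (tolerance is an assumption)."""
        sy = s @ y
        if sy <= 1e-10:
            return B
        Bs = B @ s
        return B + np.outer(y, y) / sy - np.outer(Bs, Bs) / (s @ Bs)

The dogleg path interpolates between the Cauchy point and the full quasi-Newton step, so the method can fall back to a short, safe step when the BFGS model is untrustworthy instead of spending repeated function evaluations on a line search.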
