Abstract

The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications have been proposed to improve its performance; among them, BP with the Magnified Gradient Function (MGFPROP) is a fast learning algorithm that improves both the convergence rate and the global convergence capability of BP [19]. MGFPROP outperforms many benchmark fast learning algorithms on different adaptive problems [19]. However, its performance is limited by the error overshooting problem. This paper presents a new approach, BP with a Two-Phase Magnified Gradient Function (2P-MGFPROP), to overcome the error overshooting problem and hence speed up the convergence of MGFPROP. 2P-MGFPROP modifies MGFPROP by dividing the learning process into two phases and adjusting the MGFPROP parameter setting according to the nature of the current phase. Simulation results on two different adaptive problems show that 2P-MGFPROP outperforms MGFPROP with its optimal parameter setting in terms of convergence rate, with improvements of up to 50%.
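To make the two-phase idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of backpropagation in which the local gradient term is magnified and the magnification parameter is switched between two phases of training. The magnification form (raising the sigmoid derivative o(1-o) to the power 1/S), the phase-switching criterion based on the training MSE, and the names S_PHASE1, S_PHASE2, and PHASE_SWITCH_MSE are assumptions introduced here for illustration only.

```python
# Hedged sketch of two-phase magnified-gradient backpropagation on a toy problem.
# All specific parameter values and the magnification/switching forms are assumed.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(o, S):
    # The standard sigmoid derivative is o*(1-o); magnifying it (here via an
    # assumed 1/S exponent, S >= 1) keeps the gradient from vanishing when the
    # output o saturates near 0 or 1.
    return (o * (1.0 - o)) ** (1.0 / S)

# Toy XOR training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden-to-output weights
LR = 0.5
S_PHASE1, S_PHASE2 = 4.0, 1.0             # assumed: strong magnification early, plain BP later
PHASE_SWITCH_MSE = 0.05                   # hypothetical threshold for switching phases

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    y = sigmoid(h @ W2)
    err = T - y
    mse = float(np.mean(err ** 2))

    # Phase selection: large magnification while the error is high, reduced
    # magnification once the error drops (to avoid overshooting near a minimum).
    S = S_PHASE1 if mse > PHASE_SWITCH_MSE else S_PHASE2

    # Backward pass with the magnified derivative in place of o*(1-o).
    delta_out = err * magnified_deriv(y, S)
    delta_hid = (delta_out @ W2.T) * magnified_deriv(h, S)
    W2 += LR * h.T @ delta_out
    W1 += LR * X.T @ delta_hid

print("final MSE:", mse)
```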
