Abstract
Finding methods for the optimization of weights in feedforward neural networks has become an ongoing developmental process in connectionist research. The current focus on finding new methods for the optimization of weights is mostly the result of the slow and unreliable convergence properties of the gradient descent optimization used in the original back‐propagation algorithm. More accurate and computationally expensive second‐order gradient methods have displaced earlier first‐order gradient optimization of the network connection weights. The global, extended Kalman filter is among the most accurate and computationally expensive of these second‐order weight optimization methods. The iterative, second‐order nature of the filter results in a large number of calculations for each sweep of the training set. This can increase the training time dramatically when training is conducted with data sets that contain large numbers of training patterns. In this paper an adaptive variant of the global, extended Kalman filter that exhibits substantially improved convergence properties is presented and discussed. The adaptive mechanism permits more rapid convergence of network training by identifying data that contain redundant information and avoiding calculations based on this redundant information.
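The abstract does not give the filter equations or the redundancy criterion, but the general shape of global extended Kalman filter (GEKF) weight training is standard. The sketch below, for a single-output network, shows a conventional GEKF sweep with a hypothetical innovation-based skip test standing in for the paper's adaptive mechanism; the names `forward`, `jacobian`, and `skip_threshold` are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def gekf_train_epoch(w, P, patterns, forward, jacobian,
                     Q=1e-6, R=1.0, skip_threshold=None):
    """One sweep of global extended Kalman filter training (sketch).

    w        : flat vector of all network weights
    P        : weight error covariance matrix
    patterns : iterable of (x, y) training pairs
    forward  : forward(w, x) -> scalar network output
    jacobian : jacobian(w, x) -> d(output)/d(w), same shape as w
    skip_threshold : if set, patterns whose normalized innovation falls
                     below it are treated as redundant and skipped
                     (hypothetical criterion, for illustration only)
    """
    n = len(w)
    for x, y in patterns:
        y_hat = forward(w, x)
        e = y - y_hat                 # innovation (prediction error)
        H = jacobian(w, x)            # linearization of the network output
        S = H @ P @ H + R             # innovation variance (scalar output)

        # Hypothetical redundancy test: a small normalized innovation
        # suggests the pattern adds little new information, so the
        # expensive gain and covariance updates are skipped.
        if skip_threshold is not None and (e * e) / S < skip_threshold:
            continue

        K = P @ H / S                 # Kalman gain
        w = w + K * e                 # second-order weight update
        P = P - np.outer(K, H @ P) + Q * np.eye(n)   # covariance update
    return w, P
```

Skipping a pattern avoids the O(n^2) gain and covariance updates for that presentation, which is where the per-sweep savings on large, partially redundant training sets would come from under these assumptions.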