Abstract

The search for methods to optimize the weights of feedforward neural networks remains an ongoing effort in connectionist research. The current focus on new weight optimization methods is largely a consequence of the slow and unreliable convergence of the gradient descent optimization used in the original back-propagation algorithm. More accurate, though more computationally expensive, second-order methods have displaced earlier first-order gradient optimization of the network connection weights. The global, extended Kalman filter is among the most accurate, and most computationally expensive, of these second-order weight optimization methods. The iterative, second-order nature of the filter requires a large number of calculations for each sweep of the training set, which can increase training time dramatically when the data set contains a large number of training patterns. This paper presents and discusses an adaptive variant of the global, extended Kalman filter that exhibits substantially improved convergence properties. The adaptive mechanism speeds network training by identifying data that contain redundant information and avoiding the calculations associated with that redundant information.
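To make the per-pattern cost of the filter and the role of a redundancy test concrete, the following is a minimal sketch of a global extended Kalman filter (GEKF)-style weight update that skips patterns whose innovation falls below a threshold. The function names (gekf_train, forward, jacobian), the single-output assumption, and the innovation-threshold criterion skip_tol are illustrative assumptions for this sketch, not the paper's exact adaptive mechanism.

```python
import numpy as np

def gekf_train(w, P, patterns, targets, forward, jacobian,
               R=1.0, skip_tol=1e-3):
    """One sweep of a GEKF-style weight update over the training set.

    w        : current weight vector, shape (n,)
    P        : weight error covariance matrix, shape (n, n)
    forward  : forward(w, x) -> scalar network output (assumed single output)
    jacobian : jacobian(w, x) -> (n,) derivative of the output w.r.t. the weights
    R        : assumed measurement-noise variance
    skip_tol : innovation magnitude below which a pattern is treated as redundant
    """
    for x, d in zip(patterns, targets):
        y = forward(w, x)
        innovation = d - y
        # Illustrative redundancy test: patterns the network already fits well
        # carry little new information, so the expensive O(n^2) covariance
        # update is skipped for them.
        if abs(innovation) < skip_tol:
            continue
        H = jacobian(w, x)                 # linearization of the network output
        S = H @ P @ H + R                  # innovation variance (scalar here)
        K = (P @ H) / S                    # Kalman gain, shape (n,)
        w = w + K * innovation             # second-order weight update
        P = P - np.outer(K, H @ P)         # covariance update
    return w, P
```

In this sketch the covariance update dominates the cost of each pattern, so skipping patterns judged redundant reduces the work per sweep roughly in proportion to the fraction of patterns skipped.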