Abstract
An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to the gradient-descent backpropagation algorithm, it appears to give both faster convergence and improved generalization, whilst preserving the scheme of backpropagating errors through the network. Copyright © 1996 Elsevier Science Ltd.
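The abstract does not detail the derived algorithm itself, but its stated baseline is standard gradient-descent backpropagation. As a point of reference, the sketch below trains a small feedforward network with that baseline; the 2-4-1 architecture, sigmoid activations, mean-squared-error loss, learning rate, and XOR task are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hedged sketch of the baseline the abstract compares against:
# plain gradient-descent backpropagation on a 2-4-1 feedforward net.
# All hyperparameters here are illustrative, not from the paper.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(5000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    losses.append(loss)

    # Backward pass: propagate the output error layer by layer
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Plain gradient-descent weight update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The paper's claim is that its modified training rule converges faster and generalizes better than a loop like this one, while keeping the same error-backpropagation structure (the backward pass above).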