Abstract
Several different learning algorithms for neural networks have been proposed in the literature, based on various approaches. Although they have proved effective in some practical applications, almost all of these algorithms use a constant learning rate or constant accelerative parameters. The learning procedure of a neural network can be regarded as a problem of estimating (or identifying) constant parameters (i.e. the connection weights of the network) through a nonlinear or linear observation equation. Making use of Kalman filtering, we derive a new back-propagation algorithm whose learning rate is computed by a time-varying Riccati difference equation. Perceptron-like and correlational learning algorithms are obtained as special cases. Furthermore, a self-organising algorithm for feature maps is constructed within a similar framework.
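The abstract does not reproduce the paper's equations, but the idea it describes, treating the weights as constant parameters estimated by a Kalman filter, with the error covariance propagated by a Riccati difference equation playing the role of a time-varying learning rate, can be sketched generically. The following Python sketch is an assumption-based illustration, not the authors' exact algorithm: the function names, the scalar observation-noise variance `r`, and the diffuse prior covariance are all illustrative choices.

```python
import numpy as np

def kalman_weight_update(w, P, x, y, h, grad_h, r=1.0):
    """One Kalman-filter step for weight estimation (illustrative sketch).

    w      : current weight estimate (state, assumed constant over time)
    P      : weight error covariance, propagated by the Riccati update below
    x, y   : input pattern and scalar target observation
    h      : network output function h(w, x) (the observation equation)
    grad_h : gradient of h with respect to w (linearisation for the nonlinear case)
    r      : assumed observation-noise variance (a tuning parameter)
    """
    H = grad_h(w, x)                # linearised observation row vector
    s = H @ P @ H + r               # innovation variance (scalar output assumed)
    k = (P @ H) / s                 # Kalman gain: the time-varying "learning rate"
    w_new = w + k * (y - h(w, x))   # innovation-driven weight correction
    P_new = P - np.outer(k, H @ P)  # Riccati difference update of the covariance
    return w_new, P_new

# Usage: a linear observation h(w, x) = w @ x, corresponding to the
# perceptron-like special case mentioned in the abstract.
rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7])
w, P = np.zeros(2), 10.0 * np.eye(2)    # diffuse prior on the weights
for _ in range(200):
    x = rng.normal(size=2)
    y = w_true @ x + 0.01 * rng.normal()
    w, P = kalman_weight_update(w, P, x, y,
                                h=lambda w, x: w @ x,
                                grad_h=lambda w, x: x)
print(w)  # converges toward w_true as P, and hence the gain, shrinks
```

Note how the effective step size is not a fixed constant: the gain `k` shrinks automatically as the covariance `P` contracts, which is the abstract's central contrast with constant-learning-rate schemes.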