Abstract
We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on the functional form of the performance measure. The algorithm adapts weights with a random step of varying size, whose average size decreases during learning. The large steps enable the algorithm to jump over local maxima/minima, while the small ones ensure convergence in a local area. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. We find that the algorithm outperforms several known algorithms when tested on generated as well as real data.
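The abstract does not spell out the update rule, but the described idea (random weight perturbations whose average size shrinks over time, keeping only perturbations that improve the performance measure) can be sketched as follows. This is a minimal illustrative sketch under those assumptions, not the paper's exact procedure; all function and parameter names are hypothetical.

```python
import numpy as np

def stochastic_random_step_search(performance, w0, n_iters=5000,
                                  step_init=1.0, decay=0.999, rng=None):
    """Hypothetical sketch: perturb the weights by a random step whose average
    size decreases during learning; keep a step only if it improves the
    performance measure. Large early steps can escape local maxima, small late
    steps give convergence in a local area."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float)
    best = performance(w)
    step = step_init
    for _ in range(n_iters):
        # Random step of varying size: direction and magnitude are both random.
        candidate = w + step * rng.standard_normal(w.shape)
        score = performance(candidate)
        if score > best:            # maximize the performance measure
            w, best = candidate, score
        step *= decay               # average step size decreases during learning
    return w, best

# Example usage: maximize a simple multimodal performance measure over 2 weights.
if __name__ == "__main__":
    perf = lambda w: -np.sum(w**2) + np.cos(5 * w).sum()   # many local maxima
    w_opt, score = stochastic_random_step_search(perf, w0=[3.0, -2.5])
    print(w_opt, score)
```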