Abstract
A method for the self-determination of adaptive learning rates during training is presented for back propagation in simulated neural networks. The inherent limitation of a learning rate fixed a priori, namely its tendency to overshoot the goal, is analysed, and an optimum step length based on an adaptive learning rate is established. A self-determined learning rate of this kind makes first-time learning more feasible. A height- and gradient-based algorithm for determining the learning rate is described, together with its computational expense. Experimental results are given comparing the new method with standard back propagation, both with and without momentum-like augmentation. Training times for the new method over a single trial are of a similar order to those of the best fixed learning rates found empirically over multiple trials, with potential for further improvement.
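The abstract does not give the height/gradient formula itself, but one natural reading is a step length that would reach zero error under a linear extrapolation of the error surface: the current error ("height") divided by the squared gradient norm. The sketch below illustrates that rule inside plain backpropagation on a toy XOR task; the network sizes, the rule eta = E / ||grad||^2, and the epsilon guard are all illustrative assumptions, not the paper's stated algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-2-1 network trained on XOR with plain backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    E = 0.5 * np.sum(err ** 2)  # "height": current error above the goal of zero

    # Backward pass: standard backprop for a sum-of-squares loss.
    d_out = err * out * (1 - out)
    gW2 = h.T @ d_out
    gb2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)

    # Self-determined step (assumed form): error height divided by the
    # squared gradient norm, so a linear model of the error surface would
    # land exactly on zero error. A small epsilon guards against division
    # by zero near flat regions.
    g_sq = sum(np.sum(g ** 2) for g in (gW1, gb1, gW2, gb2))
    eta = E / (g_sq + 1e-12)

    W1 -= eta * gW1; b1 -= eta * gb1
    W2 -= eta * gW2; b2 -= eta * gb2

print("final error:", E)
```

The extra cost per epoch over fixed-rate backpropagation is one squared-norm accumulation over the gradients, which is consistent with the abstract's point that the method's computational expense is modest relative to searching for a good fixed rate over multiple trials.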