Abstract

Standard backpropagation, like many gradient-based optimization methods, converges slowly as neural network training problems become larger and more complex. In this paper, we present a new algorithm that dynamically adapts the learning rate to accelerate steepest descent. The underlying idea is to partition the iteration domain into n intervals and assign a suitable learning rate to each interval. We derive the new algorithm and test it on several classification problems. Compared with standard backpropagation, the convergence rate can be improved immensely with only a minimal increase in the complexity of each iteration.
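
The interval-based idea can be illustrated with a small sketch: the iteration range is split into intervals, and steepest descent uses the learning rate assigned to whichever interval the current iteration falls in. The code below is a minimal illustration under stated assumptions, not the authors' implementation; the interval boundaries, the learning-rate values, and the toy quadratic objective are all hypothetical choices made for demonstration.

```python
import numpy as np

def piecewise_learning_rate(t, boundaries, rates):
    """Return the learning rate for iteration t.

    The iteration domain is partitioned by `boundaries`; interval i
    (boundaries[i-1] <= t < boundaries[i]) uses rates[i].
    """
    for boundary, rate in zip(boundaries, rates):
        if t < boundary:
            return rate
    return rates[-1]

def steepest_descent(grad, w0, n_iters, boundaries, rates):
    """Plain steepest descent with an interval-based learning rate."""
    w = np.asarray(w0, dtype=float)
    for t in range(n_iters):
        eta = piecewise_learning_rate(t, boundaries, rates)
        w = w - eta * grad(w)
    return w

if __name__ == "__main__":
    # Toy quadratic objective f(w) = 0.5 * w^T A w (assumed for illustration).
    A = np.array([[3.0, 0.0], [0.0, 1.0]])
    grad = lambda w: A @ w

    # Hypothetical partition of 300 iterations into three intervals,
    # each assigned its own learning rate.
    boundaries = [100, 200, 300]
    rates = [0.10, 0.05, 0.01]

    w_final = steepest_descent(grad, w0=[5.0, -3.0], n_iters=300,
                               boundaries=boundaries, rates=rates)
    print("final parameters:", w_final)
```

In this sketch the schedule is piecewise constant over the iteration count; how the paper chooses the number of intervals and the learning-rate value for each one is given by its derivation rather than by these illustrative constants.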
