Abstract

Backpropagation, which is frequently used in neural network training, often takes a great deal of time to converge on an acceptable solution. Momentum is a standard technique used to speed up convergence and maintain generalization performance. In this paper we present the Windowed Momentum algorithm, which provides greater speedup than Standard Momentum. Windowed Momentum maintains a fixed-width history of recent weight updates for each connection in a neural network. By using this additional information, Windowed Momentum yields significant speedup over a set of applications with the same or improved accuracy, achieving an average speedup of 32% in convergence time on 15 data sets, including a large OCR data set with over 500,000 samples. We also examine the consequences of sample presentation order and show that Windowed Momentum overcomes the adverse effects of poor presentation order while maintaining its speedup advantages.
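The abstract states only that Windowed Momentum keeps a fixed-width history of recent weight updates for each connection; the exact update rule is not given here. The sketch below is a minimal illustration under one plausible reading: the momentum term is the mean of the last `window` updates instead of the single previous update used by standard momentum. The function name `sgd_windowed_momentum`, the blending coefficient `beta`, and the averaging rule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from collections import deque

def sgd_windowed_momentum(grad_fn, w, lr=0.1, beta=0.9, window=5, steps=100):
    # Fixed-width history of recent weight updates (one deque here for a
    # whole weight vector; the paper keeps a history per connection).
    history = deque(maxlen=window)
    for _ in range(steps):
        g = grad_fn(w)
        # Assumed rule: the momentum term is the mean of the stored updates,
        # replacing the single previous update of standard momentum.
        momentum = np.mean(history, axis=0) if history else np.zeros_like(w)
        update = -lr * g + beta * momentum
        history.append(update)
        w = w + update
    return w

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_star = sgd_windowed_momentum(lambda w: 2.0 * w, np.array([3.0, -2.0]))
print(w_star)  # converges toward the origin
```

With `window=1` this reduces to ordinary momentum; widening the window smooths the update direction over several recent steps, which is the intuition behind using the additional history.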

