Abstract

Surges during the training process are a major obstacle in training a Spiking Neural Network (SNN) with the SpikeProp algorithm and its derivatives [1]. In this paper, we perform a weight convergence analysis to determine a proper step size for SpikeProp learning and thereby avoid surges during training. Using the results of this analysis, we propose an optimal adaptive learning rate for each iteration that keeps the step size within the bounds of the convergence condition. The performance of this adaptive learning rate is compared with existing methods through several simulations. We observe that the adaptive learning rate significantly increases the success rate of the SpikeProp algorithm while also yielding a significant improvement in training speed.
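The abstract does not state the derived convergence bound, so the following is only a minimal sketch of what an adaptive-step SpikeProp-style weight update could look like. The function name `adaptive_spikeprop_step`, the parameter `eta_max`, and the gradient-norm scaling rule are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def adaptive_spikeprop_step(weights, grad, eta_max=0.1, eps=1e-8):
    """One hypothetical SpikeProp weight update with an adaptive step size.

    The paper derives its learning rate from a weight-convergence bound;
    that exact bound is not given in the abstract, so this sketch simply
    scales a base rate by the inverse gradient norm, a common surrogate
    for keeping each step inside a convergence region.
    """
    grad_norm = np.linalg.norm(grad)
    eta = eta_max / (grad_norm + eps)  # placeholder adaptive rate, not the paper's bound
    return weights - eta * grad

# Usage sketch: apply the update to randomly initialized synaptic weights.
weights = np.random.randn(16)
grad = np.random.randn(16)          # stand-in for the SpikeProp error gradient
weights = adaptive_spikeprop_step(weights, grad)
```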
