Abstract
Deep neural networks such as convolutional neural networks (CNNs) have achieved great success in a broad range of fields. Spiking neural networks (SNNs) are designed to realize ultra-low power consumption on spike-based neuromorphic hardware. To map conventional artificial neural networks (ANNs) onto such hardware, direct conversion from ANNs into SNNs has recently been proposed. However, a performance loss after conversion is hard to avoid. To reduce this loss, we analyze the encoding methods of SNNs and the optimization methods for conversion. We adopt rate coding and approximate the output of a CNN activation function by the number of spikes a neuron produces within a given time window in the SNN. We propose a method for generating a fixed, uniform spike train whose spike count exactly matches the expected value, and present an optimization method that reduces the conversion loss by rescaling the threshold of each layer in the SNN. For evaluation, we train three different CNNs on the MNIST and SVHN datasets separately and convert all of them into SNNs. The networks achieve a maximum accuracy of 99.17% on MNIST and 93.36% on SVHN with its additional training dataset. The results show that the proposed fixed, uniform spike train not only outperforms the Poisson-distributed spike train but also takes less time to generate. Our threshold-rescaling method greatly improves the performance of the converted SNNs.
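The contrast between the two encoding schemes can be illustrated with a minimal sketch. The function names and the time-step discretization below are assumptions for illustration, not the paper's implementation: a uniform train places spikes at evenly spaced steps so the count is deterministic and exact, while a Poisson train fires each step independently with probability proportional to the rate, so its count only matches the target in expectation.

```python
import numpy as np

def uniform_spike_train(rate, t_window, dt=1.0):
    """Fixed uniform encoding (illustrative sketch): evenly spaced
    spikes, so the total spike count exactly equals round(rate * t_window)."""
    n_steps = int(t_window / dt)
    n_spikes = int(round(rate * t_window))
    train = np.zeros(n_steps, dtype=int)
    if n_spikes > 0:
        # Spread the spikes uniformly across the window.
        idx = np.linspace(0, n_steps - 1, n_spikes).round().astype(int)
        train[idx] = 1
    return train

def poisson_spike_train(rate, t_window, dt=1.0, rng=None):
    """Poisson-like encoding: each step fires independently with
    probability rate * dt, so the count is exact only on average."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(t_window / dt)
    return (rng.random(n_steps) < rate * dt).astype(int)
```

For example, `uniform_spike_train(0.3, 100)` always contains exactly 30 spikes, whereas repeated calls to `poisson_spike_train(0.3, 100)` fluctuate around 30; under rate coding that variance is one source of the conversion error the abstract refers to.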