Abstract

Spiking neural networks (SNNs) have shown great potential for realizing ultralow-power computation on neuromorphic hardware, but obtaining deep SNNs remains a challenging problem. Existing network conversion methods can effectively obtain SNNs from trained convolutional neural networks (CNNs) with little performance loss; however, the high-precision weights in the converted SNNs occupy substantial storage space, which is unsuitable for devices with limited memory resources. To tackle this problem, we analyze the relationship between the weights and the firing thresholds of spiking neurons and propose an efficient weight-threshold balance conversion method that yields SNNs with binary weights, significantly reducing memory storage. Experimental results with various network structures on benchmark data sets show that the binary SNN not only requires far less memory than its high-precision counterpart but also achieves recognition accuracy comparable to other state-of-the-art SNNs.
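The weight-threshold relationship the abstract refers to can be illustrated with a minimal sketch. In an integrate-and-fire neuron with reset-by-subtraction (a common model in CNN-to-SNN conversion), scaling the weights and the firing threshold by the same factor leaves the output spike train unchanged, which is the invariance that makes rebalancing weights against thresholds possible. The function below is a hypothetical illustration, not the paper's actual conversion algorithm.

```python
import numpy as np

def if_neuron_spikes(weights, inputs, threshold):
    """Simulate an integrate-and-fire neuron over time.

    The membrane potential accumulates the weighted input at each
    step; a spike fires when the potential reaches the threshold,
    which is then subtracted (reset-by-subtraction).
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v += float(np.dot(weights, x))
        if v >= threshold:
            spikes.append(1)
            v -= threshold
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
steps, n_in = 50, 8
weights = rng.normal(0.5, 0.2, n_in)
inputs = rng.integers(0, 2, (steps, n_in)).astype(float)

base = if_neuron_spikes(weights, inputs, threshold=1.0)
# Scale weights and threshold together: the spike train is preserved.
scaled = if_neuron_spikes(4.0 * weights, inputs, threshold=4.0)
assert base == scaled
```

Intuitively, this invariance lets a conversion method trade weight precision for threshold adjustment: if weights are forced to a coarse (e.g. binary) representation, a compensating change to the neuron's threshold can keep the firing behavior close to the original.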

