Abstract

Spiking neural networks (SNNs) have attracted attention because they enable greater computational efficiency on neuromorphic hardware. Existing ANN-SNN conversion methods can effectively transfer the weights of a pre-trained ANN model to an SNN. However, even state-of-the-art conversion methods suffer from accuracy loss and high inference latency. To address this problem, we train a low-latency SNN through knowledge distillation with the Kullback-Leibler (KL) divergence. We achieve superior accuracy on CIFAR-100: 74.42% with the VGG16 architecture at 5 timesteps. To the best of our knowledge, our work performs the fastest inference without accuracy loss compared to other state-of-the-art SNN models.
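For readers unfamiliar with KL-divergence-based knowledge distillation, the sketch below illustrates a typical loss of this kind in PyTorch. It is not taken from the paper: the function name `distillation_loss`, the temperature, and the weighting factor `alpha` are illustrative assumptions, and the student (SNN) logits are assumed to be the outputs accumulated over its timesteps.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Illustrative KD loss (not the paper's exact formulation):
    a soft KL-divergence term from the teacher ANN to the student SNN,
    combined with a hard cross-entropy term on the ground-truth labels."""
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    kl_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Hard targets: standard cross-entropy on the true labels.
    ce_term = F.cross_entropy(student_logits, labels)

    # Weighted combination; alpha is a hypothetical hyperparameter.
    return alpha * kl_term + (1.0 - alpha) * ce_term
```

In practice, `student_logits` would come from the SNN's output layer averaged or summed over its 5 timesteps, while `teacher_logits` would come from the pre-trained ANN in evaluation mode with gradients detached.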
