Abstract
Spiking neural networks (SNNs) are attractive for energy-constrained use cases because their binary activations eliminate the need for weight multiplications. However, their lag in accuracy compared to traditional convolutional neural networks (CNNs) has limited their deployment. In this paper, we propose CQ+ training (extended "clamped" and "quantized" training), an SNN-compatible CNN training algorithm that achieves state-of-the-art accuracy on both the CIFAR-10 and CIFAR-100 datasets. Using a 7-layer modified VGG model (VGG-*), we achieved 95.06% accuracy on the CIFAR-10 dataset for the equivalent SNN. The accuracy drop from converting the CNN solution to an SNN is only 0.09% when using a time step of 600. To reduce the latency, we propose a parameterized input encoding method and a threshold training method, which together reduce the time window size to 64 while still achieving an accuracy of 94.09%. On the CIFAR-100 dataset, we achieved an accuracy of 77.27% using the same VGG-* structure and a time window of 500. We also demonstrate the transformation of popular CNNs, including ResNet (basic, bottleneck, and shortcut blocks), MobileNet v1/v2, and DenseNet, to SNNs with near-zero conversion accuracy loss and a time window size smaller than 60. The framework was developed in PyTorch and is publicly available.
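The abstract only names the "clamped and quantized" technique; as a minimal illustrative sketch (not the authors' released framework), the PyTorch snippet below shows one common way such an activation can be built: clamp the activation to [0, 1] so it behaves like a bounded spike rate, quantize it to the discrete levels an SNN can represent within a T-step time window, and use a straight-through estimator for the gradient. All identifiers here (ClampQuantize, CQReLU, t_steps) are hypothetical.

```python
import torch
import torch.nn as nn


class ClampQuantize(torch.autograd.Function):
    """Illustrative clamp-and-quantize activation (not the paper's code).

    Clamps inputs to [0, 1] and rounds them to t_steps + 1 discrete
    levels, mimicking the spike counts an SNN can emit in a time
    window of t_steps steps.
    """

    @staticmethod
    def forward(ctx, x, t_steps):
        ctx.save_for_backward(x)
        x = torch.clamp(x, 0.0, 1.0)               # "clamped": bound like a spike rate
        return torch.round(x * t_steps) / t_steps  # "quantized": discrete levels in [0, 1]

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator (an assumption): pass gradients
        # through unchanged inside the clamp range, zero them outside.
        (x,) = ctx.saved_tensors
        grad = grad_output.clone()
        grad[(x < 0.0) | (x > 1.0)] = 0.0
        return grad, None


class CQReLU(nn.Module):
    """Drop-in replacement for ReLU during SNN-compatible CNN training."""

    def __init__(self, t_steps=64):
        super().__init__()
        self.t_steps = t_steps  # e.g., 64, matching the time window reported above

    def forward(self, x):
        return ClampQuantize.apply(x, self.t_steps)
```

Under this reading, training the CNN with such an activation means the network already operates on the discrete, bounded values an SNN can produce, which is consistent with the near-zero conversion loss the abstract reports.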