Abstract

Spiking neural networks (SNNs) are efficient computation models for low-power environments. Spike-based backpropagation (BP) algorithms and ANN-to-SNN (ANN2SNN) conversion are two successful techniques for SNN training. Nevertheless, spike-based BP training is slow and incurs large memory costs, while ANN2SNN requires many inference time steps to reach good performance. In this paper, we propose an Activation Consistency Coupled ANN-SNN (AC2AS) framework to train SNNs in a fast and memory-efficient way. AC2AS consists of two components: (a) a weight-shared architecture between the ANN and the SNN and (b) spiking mapping units. First, the architecture trains the shared weights on the ANN branch, yielding fast training and low memory costs for the SNN. Second, the spiking mapping units are designed to ensure that the activation values of the ANN branch match the spiking features of the SNN branch. As a result, activation consistency is guaranteed, and the classification error of the SNN can be optimized by training the ANN branch. In addition, we design an adaptive threshold adjustment (ATA) algorithm to reduce the firing of noisy spikes. Experimental results show that our AC2AS-based models perform well on the benchmark datasets (CIFAR10, CIFAR100, and Tiny-ImageNet). Moreover, AC2AS achieves comparable accuracy with 0.625× the time steps, 0.377× the training time, 0.27× the GPU memory cost, and 0.33× the spike activity of a spike-based BP model. The code is available at https://github.com/TJXTT/AC2ASNN.
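
To make the weight-sharing and mapping idea concrete, below is a minimal PyTorch sketch of the two components, assuming an integrate-and-fire (IF) neuron with threshold v_th and T time steps. The class names (SpikingMappingUnit, WeightSharedBlock), the straight-through quantization, and the soft-reset dynamics are illustrative assumptions, not the authors' implementation; see the repository linked above for the actual code.

```python
import torch
import torch.nn as nn


class SpikingMappingUnit(nn.Module):
    """Hypothetical spiking mapping unit: clips and quantizes an ANN
    activation so it lies on the grid of firing rates an IF neuron with
    threshold v_th can emit over T time steps."""

    def __init__(self, v_th: float = 1.0, time_steps: int = 8):
        super().__init__()
        self.v_th = v_th
        self.T = time_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(x, 0.0, self.v_th)  # representable range
        step = self.v_th / self.T           # one spike's worth of activation
        # Straight-through rounding: quantize in the forward pass, identity
        # in the backward pass, so the ANN branch trains with ordinary BP.
        return (torch.round(x / step) * step - x).detach() + x


class WeightSharedBlock(nn.Module):
    """One convolution whose weights are shared by an ANN path (used for
    training) and an SNN path (integrate-and-fire, used for inference)."""

    def __init__(self, in_ch: int, out_ch: int, v_th: float = 1.0, T: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.map_unit = SpikingMappingUnit(v_th, T)
        self.v_th, self.T = v_th, T

    def forward_ann(self, x: torch.Tensor) -> torch.Tensor:
        # ANN branch: ordinary convolution followed by the mapping unit.
        return self.map_unit(self.conv(x))

    @torch.no_grad()
    def forward_snn(self, x_seq: torch.Tensor) -> torch.Tensor:
        # SNN branch: x_seq has shape (T, N, C, H, W); IF dynamics with
        # soft reset, reusing the same convolution weights.
        v, out = None, []
        for t in range(self.T):
            c = self.conv(x_seq[t])
            v = c if v is None else v + c
            s = (v >= self.v_th).float()  # spike wherever v crosses v_th
            v = v - s * self.v_th         # soft reset by subtraction
            out.append(s)
        return torch.stack(out)           # output spike train, shape of x_seq
```

Under these assumptions, quantizing the ANN activations onto the T-level grid the IF neuron can represent is what couples the two branches: the ANN branch sees exactly the values the SNN branch can reproduce as accumulated spikes, so minimizing the ANN loss also optimizes the SNN's classification error.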
