Abstract

In this paper, we present a novel early-termination-based training acceleration technique for a temporal-coding spiking neural network (SNN) processor design. The proposed early termination scheme efficiently identifies non-contributing training images during the feedforward phase of training and skips the remaining computation to save training energy and time. A metric that evaluates each input image's contribution to training is developed and compared against a predetermined threshold to decide whether to skip the rest of the training process. For threshold selection, an adaptive threshold calculation method is presented that increases the computation skip ratio without sacrificing accuracy. A timestep splitting approach is also employed to allow more frequent early termination within split timesteps, leading to further computation savings. The proposed early termination and timestep splitting techniques reduce synaptic operations by 51.21/42.31/93.53/30.36% and feedforward timesteps by 86.06/64.63/90.82/49.14% during training on the MNIST/Fashion-MNIST/ETH-80/EMNIST-Letters datasets, respectively. A hardware implementation in a 28 nm CMOS process shows that the SNN processor achieves training energy savings of 61.76/31.88% and computation cycle reductions of 69.10/36.26% on the MNIST/Fashion-MNIST datasets, respectively.
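To make the control flow concrete, below is a minimal Python sketch of the early-termination loop with timestep splitting. It is an illustration only: the contribution metric (a toy spike-count margin here), the adaptive threshold update rule, and the helpers `model.forward_window` and `model.backward` are all assumptions, not the paper's actual formulations.

```python
import numpy as np

def contribution_metric(spike_counts, target_label):
    """Hypothetical contribution score: margin between the target
    neuron's spike count and the strongest non-target neuron.
    A large margin suggests the image is already well learned."""
    target = spike_counts[target_label]
    others = np.delete(spike_counts, target_label)
    return target - others.max()

def train_image(image, label, model, threshold, num_splits=4):
    """Run the feedforward pass in num_splits timestep windows,
    checking the early-termination condition after each window."""
    spike_counts = np.zeros(model.num_outputs)
    for split in range(num_splits):
        # model.forward_window is an assumed API: it runs one
        # window of timesteps and returns output spike counts.
        spike_counts += model.forward_window(image, split)
        if contribution_metric(spike_counts, label) > threshold:
            # Non-contributing image: skip the remaining feedforward
            # windows and the backward pass entirely.
            return None
    # Contributing image: complete the weight update as usual.
    return model.backward(spike_counts, label)

def adapt_threshold(threshold, skip_ratio, target_skip=0.5, lr=0.01):
    """Assumed adaptive-threshold rule: nudge the threshold so the
    observed skip ratio tracks a target, raising the computation
    skip ratio without collapsing accuracy."""
    return threshold + lr * (target_skip - skip_ratio)
```

The point of splitting the feedforward pass into windows is that the termination check fires several times per image rather than once at the end, so a non-contributing image can be dropped after only a fraction of its timesteps.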
