Abstract

Deep Neural Networks (DNNs) have already demonstrated their superiority in many real-world applications. Nevertheless, because of their densely connected neuron computations, high power consumption is the main challenge in designing DNN hardware. To address this power problem, the Spiking Neural Network (SNN) has been proposed: it replaces the conventional numerical operations of DNNs with spike transmission, reducing power consumption. However, large-scale SNNs are difficult to implement because neuron operations are intrinsically non-differentiable. In this paper, we apply the unipolar Stochastic Computing (SC) method to build an SNN neuron model, because the SC encoding scheme closely resembles the rate coding used in SNNs. The SC-based SNN not only improves computational efficiency but also lowers the barrier to SNN design. To further improve computing accuracy, we apply a pruning-based spike blocking method to the proposed SC-based SNN. Compared with non-SC SNN methods, the proposed SC-based SNN reduces system power consumption by about 81.37% to 90.58% and area cost by around 72.38% to 75.64%. In addition, the proposed SC-based SNN improves computing accuracy by about 10% while requiring 64.77% less area and 61.26% less power than the current SC-based SNN design.
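The paper's own circuit design is not reproduced here, but the core idea it relies on can be sketched in software: in unipolar SC, a value in [0, 1] is encoded as the probability that each bit of a stream is 1 (mirroring a spiking neuron's firing rate), and multiplication reduces to a bitwise AND of two independent streams. The function names and stream length below are illustrative assumptions, not from the paper.

```python
import random

def to_unipolar(p, length, rng):
    """Encode a probability p in [0, 1] as a unipolar SC bitstream.

    Each bit is independently 1 with probability p, so the fraction of
    1s approximates p -- analogous to rate coding in an SNN.
    """
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    """Unipolar SC multiplication: bitwise AND of two independent streams.

    P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1) when the streams are
    uncorrelated, so the result stream encodes the product.
    """
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

if __name__ == "__main__":
    rng = random.Random(42)          # fixed seed for reproducibility
    a = to_unipolar(0.6, 10_000, rng)
    b = to_unipolar(0.5, 10_000, rng)
    product = decode(sc_multiply(a, b))
    print(f"0.6 * 0.5 ~= {product:.3f}")  # close to 0.30
```

The accuracy of the decoded product improves with stream length, which is the usual SC trade-off between latency and precision that hardware designs like the one in this paper must balance.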
