Abstract

Efficient hardware implementation of convolutional neural networks (CNNs) is increasingly important. A weight-sharing spiking CNN inference system (WS-SCNN) employing efficient convolution layers (ECLs) is proposed and modeled to enable compact convolutional processing for spiking neural network (SNN) inference. The proposed ECL efficiently maps convolutional features between inputs and filter weights. The ECL does not replicate the synaptic filter array for each sliding position of the input, which minimizes the number of synaptic devices required to implement hardware SNNs. The four-bit weight quantization capability of a fabricated charge-trap flash synaptic device is used to verify the accurate multiplication and summation of weights in the ECL. Moreover, a nine-layer WS-SCNN consisting of multiple ECLs is modeled, and its area and energy benefits are evaluated. Simulation results show that the WS-SCNN achieves 5.68 times higher energy efficiency and 103.5 times higher area efficiency than conventional SCNN systems.
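The synaptic-device saving claimed for the ECL can be illustrated with a toy model. The sketch below (a minimal illustration, not the authors' circuit; the function names `conv2d_shared` and `synapse_counts` and the dense NumPy arithmetic are assumptions for demonstration) contrasts a single shared filter array reused across sliding positions with a conventional unrolled (Toeplitz-style) crossbar that replicates the filter weights for every output position.

```python
import numpy as np

def conv2d_shared(x, w):
    """Valid 2-D convolution computed by reusing one shared filter
    array for every sliding position (the weight-sharing idea the
    ECL exploits; toy dense model, not the fabricated hardware)."""
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # One k x k filter array serves all (i, j) positions.
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def synapse_counts(H, W, k):
    """Compare synaptic-device counts for one k x k filter on an
    H x W input: an unrolled crossbar stores a copy of the filter
    per output position, while weight sharing stores it once."""
    n_out = (H - k + 1) * (W - k + 1)
    replicated = n_out * k * k   # filter copied for each position
    shared = k * k               # single shared filter array
    return replicated, shared

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3))
y = conv2d_shared(x, w)
rep, sh = synapse_counts(5, 5, 3)  # 81 replicated vs. 9 shared devices
```

For a 5x5 input and a 3x3 filter the unrolled mapping already needs 9 times more devices; the gap grows with input size, which is consistent in spirit with the reported area-efficiency advantage, though the paper's 103.5x figure comes from its own nine-layer system model.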
