Abstract

Stochastic computing (SC) has been recognized as an efficient technique for reducing the hardware cost of a convolutional neural network (CNN) accelerator. However, an SC-CNN needs a long bitstream (SC sequence) to produce accurate results, which limits throughput. To achieve both better accuracy and higher throughput, highly parallelized SC-CNNs based on Sobol sequences have been widely adopted, but this high parallelism in turn incurs undesirable hardware overhead. To solve this problem, this paper proposes Pseudo-Sobol sequences and, based on them, develops an efficient parallel computation-conversion hybrid convolution architecture that fuses the SC computation units with the stochastic-to-binary (S2B) conversion units. With negligible accuracy loss, the proposed architecture increases energy and area efficiency by 41% and 36%, respectively.
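For readers unfamiliar with the underlying mechanism, the following is an illustrative sketch of the generic SC idea the abstract builds on, not the paper's Pseudo-Sobol architecture: a value in [0, 1] is encoded as a bitstream by comparing it against a low-discrepancy sequence, and multiplication reduces to a bitwise AND of two streams. Here the van der Corput/Halton radical-inverse construction (whose base-2 case coincides with the 1-D Sobol sequence) is used as a stand-in for distinct Sobol dimensions.

```python
# Illustrative sketch only -- NOT the paper's Pseudo-Sobol design.
# Unipolar SC multiplication: bit i of the stream for p is 1 iff seq[i] < p;
# ANDing two (uncorrelated) streams and averaging estimates the product.

def radical_inverse(i: int, base: int) -> float:
    """Van der Corput radical inverse of i in the given base
    (base 2 is the 1-D Sobol sequence)."""
    x, f = 0.0, 1.0 / base
    while i:
        x += f * (i % base)
        i //= base
        f /= base
    return x

def to_stream(p: float, seq: list[float]) -> list[int]:
    """Encode probability p as a unipolar SC bitstream against seq."""
    return [1 if s < p else 0 for s in seq]

def sc_multiply(a: float, b: float, length: int = 1024) -> float:
    """Estimate a*b in SC: AND the two bitstreams, then average the ones."""
    # Distinct bases stand in for distinct Sobol dimensions, so the two
    # streams are effectively uncorrelated.
    seq_a = [radical_inverse(i, 2) for i in range(length)]
    seq_b = [radical_inverse(i, 3) for i in range(length)]
    ones = sum(x & y for x, y in zip(to_stream(a, seq_a), to_stream(b, seq_b)))
    return ones / length

print(sc_multiply(0.5, 0.75))  # close to 0.375 = 0.5 * 0.75
```

Because low-discrepancy sequences cover [0, 1] far more evenly than random bits, short streams already give accurate products; the paper's contribution is generating such sequences (and converting the results back to binary) with less parallel hardware.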
