Abstract

Deep neural networks are a biologically inspired class of algorithms that have recently demonstrated state-of-the-art accuracy in large-scale classification and recognition tasks. Hardware acceleration of deep networks is of paramount importance to ensure their ubiquitous presence in future computing platforms. Indeed, a major milestone enabling efficient hardware accelerators for deep networks is the recent advance from the machine learning community demonstrating the viability of aggressively scaled deep binary networks. In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within static random access memory (SRAM) arrays. In general, binary convolutions consist of bit-wise exclusive-NOR (XNOR) operations followed by a population count (popcount). We present two proposals: one based on a charge-sharing approach that performs a vector XNOR and an approximate popcount, and another based on bit-wise XNOR followed by a digital bit-tree adder for an accurate popcount. We highlight the tradeoffs in circuit complexity, speed-up, and classification accuracy for both approaches. Key techniques presented in the manuscript include the use of a low-precision, low-overhead analog-to-digital converter (ADC) to achieve a fairly accurate popcount in the charge-sharing scheme, and the sectioning of the SRAM array by adding switches onto the read-bitlines, thereby improving parallelism. Our results on the CIFAR-10 and SVHN benchmark image classification datasets, using a binarized neural network architecture, show energy improvements of up to $6.1\times$ and $2.3\times$ for the two proposals, compared to conventional SRAM banks. In terms of latency, improvements of up to $15.8\times$ and $8.1\times$ were achieved for the two respective proposals.
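The XNOR-then-popcount formulation of a binary dot product mentioned in the abstract can be sketched in software (this is an illustrative model only, not the paper's in-SRAM hardware; the function name and bit-vector encoding are assumptions):

```python
# Minimal software sketch of a binary dot product via XNOR + popcount.
# Each {-1, +1} vector is encoded as an n_bits-wide Python integer,
# with bit 1 representing +1 and bit 0 representing -1.
def binary_dot(a: int, b: int, n_bits: int) -> int:
    mask = (1 << n_bits) - 1
    xnor = ~(a ^ b) & mask           # bit is 1 wherever the inputs agree
    popcount = bin(xnor).count("1")  # number of agreeing positions
    # Each agreement contributes +1 and each disagreement -1,
    # so the dot product is popcount - (n_bits - popcount):
    return 2 * popcount - n_bits
```

For example, `binary_dot(0b1011, 0b1001, 4)` encodes the vectors (+1, -1, +1, +1) and (+1, -1, -1, +1), whose dot product is 2. The hardware proposals in the paper realize the XNOR and the popcount inside the SRAM array (via charge sharing with an ADC, or via a digital bit-tree adder) rather than in a processor.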
