Abstract

Deep learning workloads, such as convolutional neural networks (CNNs), are important due to the increasing demand for high-performance hardware acceleration. One distinguishing feature of a deep learning workload is that it is inherently resilient to small numerical errors and thus works very well with low-precision hardware. We propose a novel method, called double multiply-and-accumulate (MAC), that can theoretically double the computation rate of CNN accelerators by packing two MAC operations into one digital signal processing (DSP) block of an off-the-shelf field-programmable gate array (FPGA). We overcome several technical challenges by exploiting the mode of operation of the CNN accelerator. We have validated our method through FPGA synthesis and Verilog simulation, and evaluated it by applying it to a state-of-the-art CNN accelerator. The double MAC approach can double the computation throughput of a CNN layer. At the network level (all convolution layers combined), the performance improvement varies with the CNN application and FPGA size, ranging from 14% to more than 80% over a highly optimized state-of-the-art accelerator solution, without significantly sacrificing output quality.
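The packing idea at the heart of double MAC can be sketched in a few lines of Python. This is an illustration only, not the paper's implementation: the actual method is realized in FPGA logic and must also handle signed operands and accumulation, which this unsigned sketch omits. The names (double_mac, BITS, SHIFT) and the 8-bit operand width are assumptions for the example. Two multiplications that share one operand are packed into a single wide multiply, so one DSP-style multiplier produces both products in disjoint bit fields of its result.

```python
import random

BITS = 8               # assumed operand width for this illustration
SHIFT = 2 * BITS + 2   # bit offset of the high product; the extra guard
                       # bits keep the low product from spilling upward

def double_mac(w, a_hi, a_lo):
    """Compute w*a_hi and w*a_lo with ONE wide multiplication.

    The two values are packed into a single multiplicand:
        packed = (a_hi << SHIFT) | a_lo
    so that
        w * packed = (w*a_hi << SHIFT) + w*a_lo
    and each product can be sliced out of the wide result, as a DSP
    block's wide multiplier would produce it.
    """
    packed = (a_hi << SHIFT) | a_lo
    wide = w * packed                  # the single wide multiply
    p_lo = wide & ((1 << SHIFT) - 1)   # low product: bottom bit field
    p_hi = wide >> SHIFT               # high product: top bit field
    return p_hi, p_lo

# Sanity check against two independent multiplies.
for _ in range(1000):
    w, a1, a2 = (random.randrange(1 << BITS) for _ in range(3))
    assert double_mac(w, a1, a2) == (w * a1, w * a2)
```

The guard bits in SHIFT are what make the trick work: the low product w*a_lo fits in 2*BITS bits, so placing the high operand 2*BITS + 2 bits up guarantees the two products never overlap.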
