Abstract

Well-programmed Field Programmable Gate Arrays (FPGAs) can accelerate Deep Neural Networks (DNNs) with high power efficiency. The dominant workload of a DNN is the Multiply-Accumulate (MAC) operation, which maps directly to the Digital Signal Processing (DSP) slices in an FPGA. A DNN accelerator pursuing high performance can consume almost all of the DSPs while leaving a considerable number of Look-Up Tables (LUTs) unused or performing MACs inefficiently. To solve this problem, we present a Bit-Shift method for FPGA-based DNN accelerators that fully utilizes the resources of the FPGA. Each MAC is converted into a limited number of shift-and-add operations, which can be implemented by LUTs with significantly improved efficiency. A quantization method based on Minimum Mean Absolute Error (MMAE) is proposed to preserve DNN inference accuracy when converting the parameters, without re-training. The quantized parameters can be compressed into a fixed, smaller number of bits to reduce the memory bandwidth. Accordingly, a Bit-Shift architecture is designed to load the compressed parameters and perform the converted MAC calculations without an extra decompression module. A large-scale DNN accelerator with the proposed Bit-Shift architecture is implemented on a Xilinx VU095 FPGA. Experimental results show that the proposed method boosts the processing speed by 32% and reaches 331 GOPS at a 190 MHz clock frequency for ResNet-34.
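As a rough illustration of the conversion described above, the sketch below approximates each weight by a small number of signed power-of-two terms and then evaluates the dot product using only shifts and adds, the kind of operation that maps well onto LUTs. The greedy residual fit is a hypothetical stand-in for the MMAE-based quantization; the abstract does not give the exact search procedure, term budget, or bit widths, so all function names and parameter choices here are illustrative.

```python
import numpy as np

def quantize_weight_po2(w, num_terms=2, shift_range=range(0, 8)):
    """Approximate a weight as a sum of signed power-of-two terms.

    Hypothetical greedy fit: at each step pick the signed power of two
    that minimises the absolute residual (an MMAE-style criterion).
    Returns a list of (sign, shift) pairs such that
        w ~= sum(sign * 2**-shift for sign, shift in terms).
    """
    terms = []
    residual = float(w)
    for _ in range(num_terms):
        best = min(
            ((s, k) for s in (+1, -1) for k in shift_range),
            key=lambda sk: abs(residual - sk[0] * 2.0 ** -sk[1]),
        )
        # Stop early if adding another term would not reduce the error.
        if abs(residual) <= abs(residual - best[0] * 2.0 ** -best[1]):
            break
        terms.append(best)
        residual -= best[0] * 2.0 ** -best[1]
    return terms

def shift_add_mac(activations, weight_terms):
    """Dot product realised with shifts and adds only, as a LUT-based MAC would be."""
    acc = 0.0
    for a, terms in zip(activations, weight_terms):
        for sign, shift in terms:
            acc += sign * (a / (1 << shift))  # a >> shift for integer activations
    return acc

# Minimal usage example with made-up values.
weights = np.array([0.74, -0.31, 0.12, 0.05])
acts = np.array([1.0, 2.0, 3.0, 4.0])
terms = [quantize_weight_po2(w) for w in weights]
print("exact :", float(np.dot(acts, weights)))
print("approx:", shift_add_mac(acts, terms))
```

Because each weight is stored as a fixed number of (sign, shift) pairs, the encoded parameters occupy a fixed, small number of bits and can be consumed directly by the shift-and-add datapath, which is consistent with the abstract's claim that no separate decompression module is needed.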
