Abstract

Inference engines (IEs) based on convolutional neural networks (CNNs) are memory-intensive and computationally complex. IEs therefore require an optimal data format for representing kernel weights and feature maps (FMs) to reduce the computational complexity of the convolution operator (CO). The proposed CO implements multiplications in floating point and additions in fixed point, balancing implementation cost against loss of precision. The optimal data format is determined through MATLAB-based range and precision analysis of image-based models such as AlexNet, VGG-16, and VGG-19, with single-precision floating point (SPFP) as the reference representation. The analysis reveals that half-precision floating point (HPFP) suffices for kernel weights and a 16-bit fixed-point format (10 integer bits, 6 fraction bits) suffices for feature maps. A 16-bit Fix/Float 2×1 CO is designed, and a trade-off analysis compares the proposed data format against 16-bit fixed point, SPFP, and HPFP. In the worst case, the CO retains 97 percent of SPFP accuracy. The proposed 2×1 CO is implemented with a multiplication operation processing unit (MOPU) in place of the shifter/barrel-shifter unit. An ASIC implementation of the CO requires 22 percent less area and 17.98 percent less power than an HPFP design at a 250 MHz clock, achieving a throughput of 750 MOPS and a hardware efficiency of 24.22 TOPS/W.
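
The abstract's central idea, multiplying in half-precision floating point while accumulating in Q10.6 fixed point, can be illustrated with a minimal Python sketch. The function names (fix_float_mac, to_q10_6, from_q10_6) and the use of NumPy's float16 type are illustrative assumptions, not the paper's implementation; the actual operator is an ASIC datapath.

    import numpy as np

    FRAC_BITS = 6  # Q10.6 feature-map format: 10 integer bits, 6 fraction bits

    def to_q10_6(x):
        # Quantize a real value to 16-bit two's-complement fixed point (Q10.6),
        # saturating at the representable range.
        return int(np.clip(round(x * (1 << FRAC_BITS)), -(1 << 15), (1 << 15) - 1))

    def from_q10_6(q):
        # Interpret a Q10.6 integer as a real value.
        return q / (1 << FRAC_BITS)

    def fix_float_mac(weights_hpfp, fmaps_q10_6):
        # Hybrid multiply-accumulate: HPFP multiplies, Q10.6 fixed-point additions.
        acc = 0  # fixed-point accumulator
        for w, a_q in zip(weights_hpfp, fmaps_q10_6):
            a = np.float16(from_q10_6(a_q))   # feature map back to half precision
            p = np.float16(w) * a             # multiplication in floating point
            acc += to_q10_6(float(p))         # addition in fixed point
        return from_q10_6(acc)

    # A 2x1 dot product, mirroring the paper's 2x1 convolution operator:
    w = [np.float16(0.5), np.float16(-1.25)]
    x = [to_q10_6(3.75), to_q10_6(2.0)]
    print(fix_float_mac(w, x))  # 0.5*3.75 + (-1.25)*2.0 = -0.625

Converting each product to fixed point before summation keeps the adder a plain integer adder, which is consistent with the abstract's claim of area and power savings over an all-HPFP datapath.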
