Abstract

The massive memory accesses of feature maps (FMs) in deep neural network (DNN) processors cause huge power consumption, which has become a major energy bottleneck of DNN accelerators. In this article, we propose a unified framework named Transform and Entropy-based COmpression (TECO) to efficiently compress FMs with various attributes during DNN inference. We explore, for the first time, the intrinsic unimodal distribution characteristic that widely exists in the frequency domain of various FMs. In addition, a well-optimized, hardware-friendly coding scheme is designed, which fully exploits this distribution characteristic to encode and compress the frequency spectra of different FMs. Furthermore, information entropy theory is leveraged both to develop a novel loss function that improves the compression ratio and to enable fast comparison among different compressors. Extensive experiments on multiple tasks demonstrate that the proposed TECO achieves compression ratios of 2.31× for ResNet-50 on image classification, 3.47× for UNet on dark image enhancement, and 3.18× for Yolo-v4 on object detection, while preserving the accuracy of these models. Compared with the upper limit of the compression ratio for the original FMs, the proposed framework achieves compression ratio improvements of 21%, 157%, and 152% on the above models, respectively.
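The entropy-based upper limit on the compression ratio referenced above follows from Shannon's source coding theorem: a memoryless lossless coder cannot spend fewer bits per element than the entropy of the value distribution. The sketch below is not the authors' TECO implementation; it is a minimal illustration, assuming a hypothetical uniform 8-bit quantizer, of how such a bound can be estimated for a feature map.

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy in bits per element of a discrete sequence."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def compression_upper_bound(fm, bitwidth=8):
    """Entropy-based upper limit on the lossless compression ratio:
    stored bits per element divided by entropy bits per element.
    The uniform quantizer here is an illustrative assumption, not TECO's."""
    q = [round(v * (2**bitwidth - 1)) for v in fm]
    h = entropy_bits(q)
    return bitwidth / max(h, 1e-9)  # guard against zero entropy

# A ReLU-style sparse feature map: many zeros -> low entropy -> high bound.
fm = [0.0] * 192 + [i / 64 for i in range(1, 65)]
print(round(compression_upper_bound(fm), 2))
```

A skewed (e.g. unimodal or zero-dominated) distribution lowers the entropy and thus raises this bound, which is why transforming FMs into a domain with a sharper unimodal distribution can make entropy coding more effective.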
