Abstract

In a CNN (convolutional neural network) accelerator, exploiting the sparsity of activation values is needed to reduce memory traffic and power consumption. Therefore, some research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Different from previous works, in this paper, we point out the similarity of activation values: (1) in the same layer of a CNN model, most feature maps are either highly dense or highly sparse; (2) in the same layer of a CNN model, feature maps in different channels are often similar. Based on these two observations, we propose a block-based compression approach, which utilizes both the sparsity and the similarity of activation values to further reduce the data volume. Moreover, we design an encoder, a decoder and an indexing module to support the proposed approach. The encoder translates output activations into the proposed block-based compression format, while the decoder and the indexing module align nonzero values for effectual computations. Benchmark data consistently show that, compared with previous works, the proposed approach greatly reduces both memory traffic and power consumption.
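
Since this excerpt does not spell out the exact block-based format, the following is only a minimal sketch of the general idea in Python/NumPy: activations are grouped into fixed-size blocks, each sparse block stores a one-bit-per-activation bitmap plus its packed nonzero values, and dense blocks fall back to raw storage. The block size, the dense/sparse threshold, and all function names are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

BLOCK = 16  # illustrative block size (activations per block); not the paper's parameter

def encode_block(block):
    """Encode one 1-D block of activations.

    Sparse blocks become (bitmap, packed nonzeros); dense blocks are kept raw,
    mirroring the observation that feature maps tend to be either highly
    sparse or highly dense.
    """
    nonzero_mask = block != 0
    if nonzero_mask.sum() <= BLOCK // 2:          # illustrative threshold
        bitmap = np.packbits(nonzero_mask)        # 1 bit per activation
        values = block[nonzero_mask]              # packed nonzero activations
        return ("sparse", bitmap, values)
    return ("dense", None, block)

def decode_block(kind, bitmap, payload):
    """Inverse of encode_block: rebuild the dense 1-D block."""
    if kind == "dense":
        return payload
    mask = np.unpackbits(bitmap)[:BLOCK].astype(bool)
    block = np.zeros(BLOCK, dtype=payload.dtype)
    block[mask] = payload
    return block

# Toy round-trip on a mostly-zero activation block.
acts = np.zeros(BLOCK, dtype=np.int8)
acts[[1, 7, 12]] = [3, -2, 5]
kind, bitmap, payload = encode_block(acts)
assert np.array_equal(decode_block(kind, bitmap, payload), acts)
```

In hardware, the role of this encode step would correspond to the paper's encoder, and the bitmap lookup in the decode step to its decoder and indexing module, which align nonzero values for effectual computations.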

Highlights

  • Nowadays, convolutional neural networks (CNNs) [1,2] are widely used in many application fields, such as computer vision [3,4], signal processing [5,6] and image processing [7,8]

  • Whether or not a CNN is sparse, a compression format cannot be directly applied to a SIMD architecture; otherwise, irregularly distributed nonzero values break the alignment of input activations and kernel weights

  • This paper demonstrates both the sparsity and the similarity of feature maps

Summary

Introduction

Convolutional neural networks (CNNs) [1,2] have been widely used in many application fields, such as computer vision [3,4], signal processing [5,6] and image processing [7,8]. Previous works [9,23,26,27] exploit the sparsity of activation values to reduce both memory traffic and power consumption. Different from these previous works, in this paper, we point out the similarity of activation values: in the same layer of a CNN model, most feature maps are either highly dense or highly sparse, and feature maps in different channels are often similar. Owing to the similarity of feature maps, we can consider multiple channels at the same time for compression. Based on these two observations, we are motivated to utilize the similarity of activation values, in addition to their sparsity, to further reduce the data volume. The encoder translates output activations into the proposed block-based compression format, while the decoder and the indexing module align nonzero activation values for effectual multiplications.
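
The two observations can be checked directly on a layer's output activations. The sketch below only assumes that the feature maps of one layer are available as a C x H x W NumPy array; it measures (1) the sparsity of each feature map and (2) how strongly the zero patterns of two channels overlap, which is what makes compressing multiple channels in one block worthwhile. The function names and the Jaccard-overlap metric are illustrative choices, not taken from the paper.

```python
import numpy as np

def per_channel_sparsity(fmaps):
    """Fraction of zero activations in each feature map (channel).

    fmaps: array of shape (C, H, W) holding one layer's output activations.
    Observation (1): most channels land near 0.0 or near 1.0.
    """
    c = fmaps.shape[0]
    return (fmaps.reshape(c, -1) == 0).mean(axis=1)

def zero_pattern_overlap(fmaps, ch_a, ch_b):
    """Jaccard overlap of the zero locations of two channels.

    Observation (2): channels of the same layer often share most of their
    zero positions, which is what allows several channels to be compressed
    together in one block.
    """
    za = fmaps[ch_a] == 0
    zb = fmaps[ch_b] == 0
    union = np.logical_or(za, zb).sum()
    if union == 0:
        return 1.0
    return np.logical_and(za, zb).sum() / union

# Toy example: ReLU-like activations for a 4-channel layer.
rng = np.random.default_rng(0)
fmaps = np.maximum(rng.normal(size=(4, 8, 8)), 0)
print(per_channel_sparsity(fmaps))          # one sparsity value per channel
print(zero_pattern_overlap(fmaps, 0, 1))    # overlap of channels 0 and 1
```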
