Abstract

In convolutional neural networks (CNNs), convolutional layers consume a dominant portion of the computation energy because of the large number of multiply-accumulate operations (MACs). However, these MACs become meaningless when the convolution results are negative, since the rectified linear unit (ReLU) maps them to zero. In this paper, we present an efficient approach for predicting and skipping the convolutions that generate zero outputs. The proposed two-step zero-prediction approach, called mosaic CNN, can be used to trade classification accuracy for computation energy in a CNN. In a mosaic CNN, the outputs of each convolutional layer are computed considering their spatial surroundings in the output feature map, and the type of spatial surrounding (mosaic type) can be selected to save computation energy at the expense of accuracy. To further reduce computation, we also propose a most-significant-bits (MSBs) only computation scheme, in which a constant value representing the least significant bits compensates the MSBs-only computations. A CNN accelerator supporting the two combined approaches has been implemented in a 65-nm CMOS process. The numerical results show that, compared with a state-of-the-art processor, the proposed reconfigurable accelerator achieves energy savings ranging from 16.99% to 29.64% for VGG-16 without seriously compromising classification accuracy.
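To make the two ideas concrete, the following is a minimal software sketch of an MSBs-only MAC with a constant LSB compensation, used as the cheap first step of zero prediction. It assumes 8-bit operands, a 4-bit MSB field, and a compensation constant equal to the mean of the discarded LSB field; the function names msb_only_mac and predicted_output, the bit widths, and the skip rule are illustrative assumptions, not the paper's exact hardware datapath.

```python
import numpy as np

def msb_only_mac(activations, weights, msb_bits=4, total_bits=8):
    """Approximate MAC using only the top `msb_bits` of each operand.

    Sketch of the MSBs-only scheme: each operand is truncated to its MSBs,
    and a constant (the expected value of the discarded LSB field) is added
    back to compensate the truncation bias in the cross terms.
    """
    shift = total_bits - msb_bits
    # Truncate operands to their MSB fields.
    a_msb = (activations >> shift) << shift
    w_msb = (weights >> shift) << shift
    # Assumed compensation constant: mean of a uniformly distributed
    # `shift`-bit LSB field, i.e. (2**shift - 1) / 2.
    lsb_mean = (2 ** shift - 1) / 2.0
    return np.sum(a_msb * w_msb
                  + a_msb * lsb_mean    # compensate discarded weight LSBs
                  + w_msb * lsb_mean)   # compensate discarded activation LSBs

def predicted_output(activations, weights):
    """Two-step zero prediction (sketch): if the cheap MSBs-only estimate is
    non-positive, ReLU would zero the result, so the exact MAC is skipped."""
    if msb_only_mac(activations, weights) <= 0:
        return 0.0  # predicted zero: full convolution skipped
    # Otherwise fall back to the exact MAC followed by ReLU.
    return max(0.0, float(np.sum(activations * weights)))

# Example: one 3x3 receptive field with 8-bit activations and signed weights.
rng = np.random.default_rng(0)
acts = rng.integers(0, 256, size=9)     # unsigned 8-bit activations
wts = rng.integers(-128, 128, size=9)   # signed 8-bit weights
print(predicted_output(acts, wts))
```

In this sketch the estimate costs only narrow multiplies plus a constant addition, which is why a non-positive estimate can gate off the full-precision MAC; the mosaic pattern in the paper additionally restricts which output positions run the exact computation based on their spatial neighbors.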
