Convolutional neural networks (CNNs) outperform traditional machine learning algorithms across a wide range of applications, such as object recognition, image segmentation, and autonomous driving. However, their ever-growing computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring various dataflow styles and designs that exploit computational parallelism, but the potential speedup from exploiting sparsity has not been adequately addressed: the computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited during network evaluation. To further improve performance and energy efficiency, some accelerators evaluate CNNs with reduced precision; however, this is limited to the inference phase, since naively reducing precision sacrifices network accuracy during training. In addition, CNN evaluation is usually memory-intensive, especially during training: the memory system cannot feed the computational units enough data, leaving them idle and underutilized, which creates a performance bottleneck. In this article, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING supports both CNN training and inference. It uses a binary mask scheme to encode the sparsity of activations and weights, and a stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially during training, SPRING uses an efficient monolithic 3D nonvolatile memory interface to increase memory bandwidth. Compared to an Nvidia GeForce GTX 1080 Ti, SPRING achieves 15.6× higher performance, 4.2× lower power, and 66.0× higher energy efficiency for CNN training, and 15.5× higher performance, 4.5× lower power, and 69.1× higher energy efficiency for inference.
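Stochastic rounding, as used here for reduced-precision training, rounds each value up or down with probability proportional to its distance from the two nearest representable values, so the rounded value is unbiased in expectation and small updates are not systematically lost. The following is a minimal NumPy sketch of this general idea, not the paper's hardware implementation; the fixed-point format (`num_frac_bits`) and the function name are illustrative assumptions.

```python
import numpy as np

def stochastic_round(x, num_frac_bits=8, rng=None):
    """Stochastically round a float array to a fixed-point grid with
    `num_frac_bits` fractional bits. Each value is rounded up with
    probability equal to its fractional distance from the lower grid
    point, so the rounding is unbiased in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    scale = 2.0 ** num_frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    frac = scaled - floor                      # in [0, 1): probability of rounding up
    rounded = floor + (rng.random(x.shape) < frac)
    return rounded / scale
```

For example, applying `stochastic_round(grads, num_frac_bits=8)` to gradient updates quantizes them to a low-precision grid while preserving their expected value, which is why stochastic rounding can retain training accuracy where deterministic rounding to the same precision would not.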