Abstract

Convolutional neural networks (CNNs) have achieved remarkable results across a wide range of domains at the cost of huge numbers of parameters and computations. Modern CNNs tend to become larger and more complex to reach better inference accuracy, but these complex, large structures slow down inference. Recently, compressing convolutional weights into sparse form by pruning unimportant parameters has been demonstrated to be an efficient way to reduce the computations of CNNs. Meanwhile, field-programmable gate arrays (FPGAs) have become a popular hardware platform for accelerating CNN inference. In this paper, we propose an algorithm/hardware co-optimized method for accelerating CNN inference on FPGAs. On the algorithm side, we combine unstructured and structured parameter-sparsifying methods to achieve high sparsity while keeping the regularity of the convolutional weights, and we propose correspondingly hardware-friendly index representations for the sparse weights. On the hardware side, we propose a row-wise input-stationary dataflow that is tightly coupled with the algorithm, together with a row-wise computing engine (RConv Engine) built on this dataflow. Inside the RConv Engine, the basic processing elements (PEs) use a scalar-vector structure, and the PEs are organized in a 2D array with two work modes to flexibly compute feature maps of various sizes. Experimental results demonstrate that our co-optimized method achieves high sparsity of convolutional weights and that the computing engine attains high computation efficiency. Compared with other accelerators, our method achieves up to a 10.9x speedup in FPS with the highest sparsity of convolutional weights and negligible accuracy loss.
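The abstract mentions combining unstructured and structured parameter sparsifying. As a minimal illustration only (not the authors' exact scheme), the sketch below applies magnitude-based unstructured pruning followed by structured row pruning to a weight matrix; the sparsity targets and the L1-norm row criterion are assumptions for demonstration:

```python
import numpy as np

def unstructured_prune(w, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def structured_prune_rows(w, keep_rows):
    """Zero out entire rows with the smallest L1 norm, keeping only
    the `keep_rows` most important rows (a simple structured criterion)."""
    norms = np.abs(w).sum(axis=1)
    keep = np.argsort(norms)[-keep_rows:]
    out = np.zeros_like(w)
    out[keep] = w[keep]
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
w = unstructured_prune(w, 0.5)    # prune 50% of individual weights
w = structured_prune_rows(w, 6)   # then prune the 2 least-important rows
print(f"overall sparsity: {np.mean(w == 0.0):.2f}")
```

Combining the two styles in this way yields high overall sparsity (from the unstructured pass) while the structured pass removes whole rows, which preserves regularity that a hardware index representation can exploit.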
