Abstract

Custom computing architectures on field-programmable gate array (FPGA) platforms are a viable way to further accelerate convolutional neural network (CNN) inference. However, because feature maps are large matrices, optimizing the storage and access of CNN feature maps on FPGAs remains a challenge. To address this, an FPGA-oriented memory access optimization method for CNNs is proposed. First, a feature-map partition strategy is used to group the feature maps efficiently. Second, input and output cache rotation methods are employed in an adaptive memory access mode. Third, a hybrid cache rotation method is proposed to optimize memory access performance, effectively reducing the access time of CNN feature maps. Experimental results on SkyNet and VGG16 show that the inference speed of the proposed model is accelerated by 7.1 times compared with previous conventional memory access optimizations for CNNs on FPGA. In terms of computational energy efficiency, the method achieves a 6.4-times improvement over current typical accelerators.
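The abstract does not give the exact partition or rotation scheme, but the general idea it names is common in FPGA CNN accelerators: tile the feature map into blocks that fit an on-chip buffer, and rotate between two buffers ("ping-pong") so the next tile can be fetched from external memory while the current tile is being computed. The sketch below is purely illustrative; the function names and the software-level double buffering are assumptions, not the paper's implementation.

```python
def tile_feature_map(height, width, tile_h, tile_w):
    """Yield (row, col, h, w) tiles covering a height x width feature map."""
    for r in range(0, height, tile_h):
        for c in range(0, width, tile_w):
            # Edge tiles may be smaller than the nominal tile size.
            yield (r, c, min(tile_h, height - r), min(tile_w, width - c))

def process_with_rotation(tiles, load, compute):
    """Double-buffered ("ping-pong") processing of a tile stream.

    In hardware, load(tiles[i+1]) would overlap with compute on tiles[i];
    here the rotation of buffer roles is modeled sequentially.
    """
    tiles = list(tiles)
    if not tiles:
        return []
    buffers = [None, None]          # two on-chip buffers
    buffers[0] = load(tiles[0])     # prefetch the first tile
    results = []
    for i in range(len(tiles)):
        cur, nxt = i % 2, (i + 1) % 2
        if i + 1 < len(tiles):
            buffers[nxt] = load(tiles[i + 1])   # fetch next while computing
        results.append(compute(buffers[cur]))
        # buffer roles rotate for the next iteration via the cur/nxt swap
    return results
```

For example, an 8x8 feature map with 4x4 tiles yields four tiles, and each `compute` call sees one fully loaded buffer while the other is refilled.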

