Abstract

In recent years, convolutional neural networks (CNNs) have achieved state-of-the-art results on many computer vision tasks. However, traditional CNNs are computationally and memory intensive, which makes them unsuitable for mobile edge computing scenarios with limited computing resources and tight power budgets. Depthwise separable CNNs significantly reduce the number of model parameters and increase computation speed, so they are naturally suited to mobile edge computing applications. In this paper, we propose a Field Programmable Gate Array (FPGA)-based depthwise separable CNN accelerator in which all layers work concurrently in a pipelined fashion to improve system throughput and performance. To implement the accelerator, we present a custom computing engine architecture that handles the dataflow between adjacent layers through double-buffering-based memory channels. In addition, the fully connected layers adopt a data tiling technique that partitions large matrix multiplications into smaller tiles. Finally, the proposed depthwise separable CNN accelerator has been implemented and evaluated on an Intel Arria 10 FPGA. Experimental results show that the accelerator achieves a performance of 98.9 GOP/s, a speedup of up to 17.6× over a CPU implementation, and 29.4× lower power consumption than a GPU implementation.
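The abstract does not specify the tiling parameters. As a rough illustration of the data tiling idea for the fully connected layers, the C sketch below partitions a matrix-vector product into fixed-size blocks so that each block could fit in small on-chip buffers; the `TILE` constant and the function name `fc_tiled` are hypothetical, not taken from the paper.

```c
#include <stddef.h>

/* Hypothetical tile size; the paper does not state its tiling parameters. */
#define TILE 64

/*
 * Tiled matrix-vector multiply, y = W * x, standing in for one fully
 * connected layer. The (rows x cols) weight matrix is traversed in
 * TILE x TILE blocks; in a hardware implementation each block would be
 * staged into an on-chip buffer before the inner accumulation runs.
 */
void fc_tiled(const float *W, const float *x, float *y,
              size_t rows, size_t cols)
{
    for (size_t i = 0; i < rows; i++)
        y[i] = 0.0f;

    for (size_t i0 = 0; i0 < rows; i0 += TILE) {
        for (size_t j0 = 0; j0 < cols; j0 += TILE) {
            size_t imax = (i0 + TILE < rows) ? i0 + TILE : rows;
            size_t jmax = (j0 + TILE < cols) ? j0 + TILE : cols;
            /* Accumulate the partial products of the current tile. */
            for (size_t i = i0; i < imax; i++)
                for (size_t j = j0; j < jmax; j++)
                    y[i] += W[i * cols + j] * x[j];
        }
    }
}
```

In a pipelined accelerator of the kind the abstract describes, the next weight tile would typically be loaded into one half of a double buffer while the compute engine consumes the other half, which is the role the double-buffering-based memory channels play between adjacent layers.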
