Abstract

Convolutional neural networks (CNNs) are gaining considerable popularity in numerous computer-vision applications, and general-purpose frameworks such as the convolutional architecture for fast feature embedding (Caffe) have been developed alongside them. Because CNNs are computationally intensive, the field-programmable gate array (FPGA) is a classical platform for accelerating them. However, implementing CNNs on FPGA platforms is difficult. The present study explores the performance-resource design space and proposes an automatic generation model that implements a reconfigurable CNN accelerator on an FPGA platform, taking a Caffe description file as its input. A design-space exploration model is further proposed, combining a layer-folding pipeline structure, which balances the bandwidth requirements of the convolutional and fully connected layers, with incremental exploration algorithms that exploit CNN parallelism. The AlexNet, VGG-S, and VGG-16 networks are implemented. The AlexNet accelerator achieves 593.5 GOPS, and the VGG-16 accelerator achieves 638.9 GOPS, which matches or exceeds the performance of the state-of-the-art CNN accelerator for VGG-16.
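To illustrate the general idea of a performance-resource design-space exploration (not the paper's actual exploration algorithm), the following minimal sketch enumerates output-channel and input-channel parallelism factors (Pm, Pn) for a set of AlexNet-like convolutional layers, estimates the compute cycles of each candidate design, and keeps the fastest one that fits a hypothetical DSP budget. The layer shapes, DSP budget, and cost model are assumptions made for illustration only.

```python
# Hypothetical layer shapes approximating AlexNet's convolutional layers:
# (name, output feature-map size R (square), output channels M,
#  input channels N, kernel size K)
CONV_LAYERS = [
    ("conv1", 55, 96,   3, 11),
    ("conv2", 27, 256, 48,  5),
    ("conv3", 13, 384, 256, 3),
    ("conv4", 13, 384, 192, 3),
    ("conv5", 13, 256, 192, 3),
]

DSP_BUDGET = 2048   # assumed number of available DSP slices
DSP_PER_MAC = 1     # assumed DSP cost of one multiply-accumulate unit


def layer_cycles(r, m, n, k, pm, pn):
    """Cycles to compute one convolutional layer when pm output channels
    and pn input channels are processed in parallel each cycle; the
    ceiling division models the partially filled last tile."""
    ceil = lambda a, b: -(-a // b)
    return ceil(m, pm) * ceil(n, pn) * r * r * k * k


best = None
for pm in range(1, 65):
    for pn in range(1, 65):
        # Skip designs that exceed the assumed DSP budget.
        if pm * pn * DSP_PER_MAC > DSP_BUDGET:
            continue
        total = sum(layer_cycles(r, m, n, k, pm, pn)
                    for _, r, m, n, k in CONV_LAYERS)
        if best is None or total < best[0]:
            best = (total, pm, pn)

cycles, pm, pn = best
print(f"best parallelism: Pm={pm}, Pn={pn}, total cycles={cycles}")
```

In a real accelerator generator, the candidate designs would also be constrained by on-chip memory and off-chip bandwidth, which is the motivation for the layer-folding pipeline structure described in the abstract.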
