Abstract

Convolution is arguably the most computationally demanding operation in Convolutional Neural Networks (convnets). Owing to the billions of independent multiply-accumulate operations involved, convolution is massively parallelized across the many cores of Graphics Processing Units (GPUs). Although GPUs deliver significant speedups in both training and inference, they are not well suited to mobile vision applications, where both energy and real-time constraints must be satisfied. In contrast, Field Programmable Gate Arrays (FPGAs) offer massive parallelism, fast DSP blocks, and on-chip memory at a lower energy cost than GPUs, and are therefore used to build convnet accelerators for embedded applications. In this brief, we design an FPGA-based accelerator for general matrix-matrix multiplication (GeMM) to improve the efficiency of the convolutional layers of Shufflenet, an efficient convnet architecture. Experimental results show significant performance improvements over state-of-the-art FPGA-based implementations of both efficient convnets tailored to mobile vision applications and the more complex convnets used in traditional applications.
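The abstract rests on the standard lowering of a convolutional layer to a single GeMM call, which is what makes a GeMM accelerator useful for convnet layers. The sketch below illustrates this im2col-plus-GeMM transformation in NumPy; it is an illustrative assumption about the mapping (stride 1, no padding, function names are ours), not the paper's FPGA implementation.

```python
import numpy as np

def im2col(x, kh, kw):
    # x: (C, H, W) input feature map. Unroll every kh x kw patch
    # into one column, producing a (C*kh*kw, out_h*out_w) matrix.
    C, H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, out_h * out_w))
    idx = 0
    for c in range(C):
        for i in range(kh):
            for j in range(kw):
                cols[idx] = x[c, i:i + out_h, j:j + out_w].reshape(-1)
                idx += 1
    return cols

def conv_as_gemm(x, w):
    # w: (M, C, kh, kw) filter bank. The whole layer becomes one
    # GeMM: (M, C*kh*kw) @ (C*kh*kw, out_h*out_w).
    M, C, kh, kw = w.shape
    cols = im2col(x, kh, kw)
    out = w.reshape(M, -1) @ cols
    out_h = x.shape[1] - kh + 1
    out_w = x.shape[2] - kw + 1
    return out.reshape(M, out_h, out_w)
```

Because every convolutional layer reduces to this one matrix product, speeding up GeMM on the FPGA speeds up the network's convolutional layers as a whole.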
