Abstract

This paper presents a configurable convolutional neural network accelerator (CNNA) for a system-on-chip (SoC). The goal was to accelerate inference of different deep learning networks on an embedded SoC platform. The presented CNNA has a scalable architecture that uses high-level synthesis (HLS) and SystemC for the hardware accelerator. It can accelerate any convolutional neural network (CNN) exported from Keras in Python and supports a combination of convolutional, max-pooling, and fully connected layers. A training method using fixed-point quantised weights is also proposed. The CNNA is template-based, enabling it to scale to different targets of the Xilinx Zynq platform. This approach enables design-space exploration, allowing several configurations of the CNNA to be evaluated during C and RTL simulation and fitted to the desired platform and model. The CNN VGG16 was used to test the solution on a Xilinx Ultra96 board using Python productivity for Zynq (PYNQ). Training with an auto-scaled fixed-point Q2.14 format achieved accuracy close to that of an equivalent floating-point model. The accelerator performed inference in 2.0 s with an average power consumption of 2.63 W, corresponding to a power efficiency of 6.0 GOPS/W.
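As context for the fixed-point format above, the sketch below illustrates what Q2.14 weight quantisation entails, assuming the common 16-bit convention (2 integer bits including sign, 14 fractional bits). The NumPy-based flow and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch of Q2.14 quantisation (hypothetical helpers,
# not the paper's code). Q2.14: 16-bit signed, 14 fractional bits,
# representable range [-2.0, 2.0 - 2**-14].
FRAC_BITS = 14
SCALE = 1 << FRAC_BITS  # 2**14 = 16384

def quantize_q2_14(w: np.ndarray) -> np.ndarray:
    """Round float weights to the nearest Q2.14 value,
    saturating to the signed 16-bit integer range."""
    q = np.round(w * SCALE)
    q = np.clip(q, -(1 << 15), (1 << 15) - 1)  # int16 saturation
    return q.astype(np.int16)

def dequantize_q2_14(q: np.ndarray) -> np.ndarray:
    """Map Q2.14 integers back to floats, e.g. for simulating
    quantised inference against the floating-point model."""
    return q.astype(np.float32) / SCALE

# Example: round-trip a few weights; the last value saturates.
w = np.array([0.123456, -1.5, 1.99999], dtype=np.float32)
print(dequantize_q2_14(quantize_q2_14(w)))
```

For context, the reported 6.0 GOPS/W is consistent with the commonly cited figure of roughly 30.9 GOP per VGG16 inference (not stated in the abstract): 30.9 GOP / 2.0 s ≈ 15.5 GOPS, and 15.5 GOPS / 2.63 W ≈ 5.9 GOPS/W.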
