Abstract
In emerging edge-computing scenarios, FPGAs have been widely adopted to accelerate convolutional neural network (CNN)–based image-processing applications, such as image classification, object detection, and image segmentation. A standard image-processing pipeline first decodes compressed images collected from Internet of Things (IoT) devices into RGB data and then feeds them into CNN engines to compute the results. Previous works mainly focus on optimizing the CNN inference stage. However, we notice that on popular ZYNQ FPGA platforms, image decoding can also become a bottleneck due to the limited performance of the embedded ARM CPUs. Even with a hardware accelerator, the decoding operations still incur considerable latency. Moreover, conventional RGB-based CNNs have too few input channels at the first layer to exploit the high parallelism of CNN engines, which greatly slows down network inference. To overcome these problems, in this article we propose FD-CNN, a novel CNN accelerator that leverages partial decoding to accelerate CNNs directly in the frequency domain. Specifically, we omit the inverse discrete cosine transform (IDCT), the most time-consuming operation of image decoding, and directly feed the DCT coefficients (i.e., the frequency-domain data) into the CNN. In this way, the image decoder can be greatly simplified. Moreover, compared to RGB data, frequency-domain data has a lower spatial resolution but 64× more channels. Such an input shape is more hardware friendly and substantially reduces CNN inference time. We then systematically discuss the algorithm, architecture, and command-set design of FD-CNN. To deal with the irregularity of different CNN applications, we propose an image-decoding-aware design-space exploration (DSE) workflow to optimize the pipeline. We further propose an early-stopping strategy to tackle time-consuming progressive JPEG decoding. Comprehensive experiments demonstrate that, compared to the baseline image-processing pipelines, FD-CNN achieves on average 3.24× and 4.29× higher throughput, 2.55× and 2.54× lower energy consumption, and 2.38× and 2.58× lower latency on the ZC-706 and ZCU-102 platforms, respectively.
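To make the input-shape argument concrete, the following minimal sketch (not the authors' implementation) illustrates how partial decoding changes the first-layer input. It assumes the partially decoded JPEG already yields per-component 8×8 DCT coefficient blocks (placeholder random values here, a hypothetical 224×224 image, and no chroma subsampling for simplicity):

```python
import numpy as np

H, W = 224, 224                          # spatial size of the fully decoded RGB image
rgb_input = np.zeros((H, W, 3))          # conventional first-layer input: only 3 channels

# Partial decoding stops before the IDCT, so each 8x8 pixel block of each
# color component is still represented by its 64 DCT coefficients.
blocks_h, blocks_w = H // 8, W // 8                          # 28 x 28 blocks
dct_blocks = np.random.randn(3, blocks_h, blocks_w, 8, 8)    # placeholder coefficients

# Fold the 64 coefficients of every block into the channel dimension:
# the CNN now sees a 28 x 28 x 192 tensor, i.e. 64x more input channels
# per color component at 1/8 the spatial resolution.
freq_input = dct_blocks.reshape(3, blocks_h, blocks_w, 64)
freq_input = freq_input.transpose(1, 2, 0, 3).reshape(blocks_h, blocks_w, 3 * 64)

print(rgb_input.shape)   # (224, 224, 3)
print(freq_input.shape)  # (28, 28, 192)
```

The wide channel dimension is what lets a CNN engine keep its input-channel parallelism busy at the first layer, while skipping the IDCT removes the costliest step of JPEG decoding.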