Abstract

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly popular for implementing convolutional neural networks (CNNs) due to their low latency and high energy efficiency. In practice, a designer first explores various CNN architectures in software to improve the model's validation accuracy. Once an architecture is finalized, the designer must build a computation core on an FPGA to accelerate inference. For a CNN implementation, the FPGA's Performance, Power consumption, and Area (or resource) (PPA) requirements are affected by many factors, including the CNN model parameters, the accelerator topology, and the target classification accuracy. However, the CNN mapping design space is enormous, and efficiently mapping a CNN onto hardware can quickly become a challenging task. An exploration tool is therefore essential for building a reconfigurable, fast, and efficient hardware accelerator. We present an integrated methodology for exploring FPGA-based CNN architectures by making tradeoffs among performance, power, and area. This methodology serves as a mapping aid for software engineers, who can evaluate the effect of CNN design choices on the implementation and performance of FPGA-based CNN accelerators.
