Abstract

With the rise of big data and artificial intelligence (AI), Convolutional Neural Networks (CNNs) have become instrumental in numerous applications, from image and speech recognition to natural language processing. However, these networks' computational demands often exceed the capabilities of traditional processing units, prompting a search for more effective computing platforms. This research evaluates the potential of Field-Programmable Gate Array (FPGA) technology to accelerate CNN computations, considering FPGAs' unique attributes such as reprogrammability, energy efficiency, and custom logic potential. The primary aim is to compare the efficiency and performance of FPGA-accelerated CNNs with those of conventional processing units such as CPUs and GPUs, and to explore their potential for future AI applications. The research employs a mixed-methods approach comprising an integrated literature review and comparative analysis. This paper reviews state-of-the-art research on FPGA-accelerated CNNs, benchmarks performance metrics of FPGA, CPU, and GPU platforms across various CNN models, and compares FPGA-based AI applications with other real-world AI applications. The findings suggest significant potential for FPGA-accelerated CNNs, particularly in scenarios requiring real-time computation or in power-limited environments. However, challenges persist in development complexity and limited on-chip memory. Future work must focus on surmounting these barriers to unlock the full potential of FPGA-accelerated CNNs.
