Abstract

Driven by recent advances in digital technology and the availability of large, authentic datasets, deep learning has matured into a discipline capable of tackling complex learning problems. Convolutional neural networks (CNNs) in particular have proven effective in image processing and computer vision applications. However, they demand intensive computation and high memory bandwidth, which prevents general-purpose CPUs from reaching the desired performance levels. To boost CNN throughput, hardware accelerators such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been deployed. FPGAs in particular have recently been used to accelerate deep learning networks owing to their ability to exploit parallelism and power efficiency. This research presents a CNN acceleration model for video compression applications based on a hardware-software architecture. Vivado High-Level Synthesis (HLS) is used to accelerate the CNN model and develop Intellectual Property (IP) cores.
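
The abstract does not detail the HLS implementation, but as a rough illustration of the approach it describes, the sketch below shows how a small 2D convolution might be written in C++ for Vivado HLS so it can be synthesized into an IP core. The function name, feature-map dimensions, and pragma choices are assumptions for illustration only, not the paper's actual design.

```cpp
// Minimal sketch of a convolution kernel intended for Vivado HLS synthesis.
// IMG_H, IMG_W, K, and the function name conv2d_ip are hypothetical.

constexpr int IMG_H = 32;  // input feature-map height (assumed)
constexpr int IMG_W = 32;  // input feature-map width (assumed)
constexpr int K     = 3;   // convolution kernel size (assumed)

// Candidate top-level function to be exported as an IP core.
// The pragmas ask the HLS tool to fully partition the kernel weights
// and pipeline the inner loops so multiply-accumulates run in parallel.
void conv2d_ip(const float in[IMG_H][IMG_W],
               const float kernel[K][K],
               float out[IMG_H - K + 1][IMG_W - K + 1])
{
#pragma HLS ARRAY_PARTITION variable=kernel complete dim=0
    for (int r = 0; r <= IMG_H - K; ++r) {
        for (int c = 0; c <= IMG_W - K; ++c) {
#pragma HLS PIPELINE II=1
            float acc = 0.0f;
            for (int i = 0; i < K; ++i) {
                for (int j = 0; j < K; ++j) {
                    acc += in[r + i][c + j] * kernel[i][j];
                }
            }
            out[r][c] = acc;
        }
    }
}
```

In a typical Vivado HLS flow, such a function would be synthesized, packaged as an IP core, and integrated into a Vivado block design alongside the host-side software that streams feature maps to the accelerator.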
