Abstract

Convolutional neural networks (CNNs) are among the most popular methods for solving computer vision and image processing problems. As their computational demands continue to grow, CNNs increasingly require dedicated hardware for effective implementation. Graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) have attracted considerable research interest for low-complexity execution and implementation of CNNs. FPGAs offer a flexible architecture and high performance with lower energy consumption than GPUs, which makes them well suited to efficient CNN implementation. However, CNN accelerator designs must be optimized to accommodate the growing volume of computation. A major bottleneck in accelerator design is the addition of the intermediate results produced during convolution: multi-operand adders are required for the convolution operation, but they consume considerable area, which in turn degrades system performance. This paper therefore proposes new cognitive Wallace compressor adder structures to optimize the adder layers of the CNN; the proposed adders replace the traditional binary tree adders in the CNN accelerator design. The paper also presents experimental results on an Artix-7 EDGE FPGA, showing that, compared with existing adders, power consumption is reduced by 20–25% and area utilization by about 30%.
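For readers unfamiliar with compressor-based multi-operand addition, the following is a minimal behavioral sketch in Python, not the paper's hardware design: it models the 3:2 compressor (carry-save adder) that a Wallace-style tree uses to reduce many convolution partial products to two operands before a single carry-propagate addition. Operands are assumed to be non-negative integers, and all names are illustrative.

    def compress_3_2(a, b, c):
        """3:2 compressor: reduce three operands to a sum word and a carry word."""
        s = a ^ b ^ c                                # bitwise sum without carries
        carry = ((a & b) | (b & c) | (a & c)) << 1   # majority bits, shifted left as carries
        return s, carry                              # invariant: s + carry == a + b + c

    def compressor_tree_sum(operands):
        """Reduce a list of operands to two using 3:2 compressors, then add once."""
        ops = list(operands)
        while len(ops) > 2:
            reduced = []
            for i in range(0, len(ops) - 2, 3):      # take operands three at a time
                s, c = compress_3_2(ops[i], ops[i + 1], ops[i + 2])
                reduced.extend([s, c])
            reduced.extend(ops[len(ops) - len(ops) % 3:])  # pass leftover operands through
            ops = reduced
        return sum(ops)                              # single final carry-propagate addition

    # Example: summing the nine partial products of a hypothetical 3x3 convolution window.
    window  = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    kernel  = [9, 8, 7, 6, 5, 4, 3, 2, 1]
    partial = [w * k for w, k in zip(window, kernel)]
    assert compressor_tree_sum(partial) == sum(partial)

In hardware, each 3:2 compressor level adds only one full-adder delay regardless of operand width, which is why compressor trees can reduce many operands with less delay and area than a cascade of carry-propagate (binary tree) adders; the sketch above only mirrors that reduction structure at the integer level.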
