Abstract

Convolutional neural networks (CNNs) are a specialized class of artificial neural networks (ANNs) with applications in computer vision and parallel distributed computing, where they must process the massive amounts of data generated by sensors while meeting the power constraints of IoT devices. Recent advancements in parameter optimization, regularization techniques, activation functions, and the corresponding loss functions have driven CNN research forward in the past few years. Training neural networks is cumbersome and can take days or even weeks, which limits the application of CNNs in real-time fields where computational speed is of utmost importance. There is therefore a need for appropriately enhanced computational speed to meet the requirements of these real-time applications. This paper describes CNNs in detail and summarizes the architectural evolution of CNNs from 1998 to 2019. Three types of strategies for enhancing the computational speed of CNNs are explained, at both the algorithmic level and the implementation level. The paper gives detailed insight into computation-speed acceleration using Stochastic Gradient Descent (SGD) optimization, fast convolution, and the exploitation of parallelism, along with the challenges these techniques pose for CNNs and recent advancements. It also includes a detailed view of the frameworks used when implementing fast convolution or parallelism techniques. The ultimate aim of the paper is to explore all such recent techniques by which the training of CNNs can be accelerated without compromising accuracy.
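To make the first of the three strategies concrete, the following is a minimal sketch of minibatch Stochastic Gradient Descent (SGD), the optimizer the abstract names. It is illustrated on a toy linear-regression problem rather than a CNN, and all variable names and hyperparameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus a little noise.
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=256)

w, b = 0.0, 0.0        # model parameters
lr, batch = 0.1, 32    # learning rate and minibatch size (assumed values)

def mse(w, b):
    # Mean squared error over the full dataset.
    return np.mean((w * X[:, 0] + b - y) ** 2)

loss_before = mse(w, b)
for epoch in range(20):
    # Shuffle once per epoch, then step through minibatches:
    # each update uses the gradient of the loss on one minibatch only,
    # which is what distinguishes SGD from full-batch gradient descent.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        i = idx[start:start + batch]
        err = w * X[i, 0] + b - y[i]
        # Gradients of the minibatch MSE w.r.t. w and b.
        w -= lr * 2.0 * np.mean(err * X[i, 0])
        b -= lr * 2.0 * np.mean(err)
loss_after = mse(w, b)
```

Because each update touches only `batch` samples, the per-step cost is constant in dataset size; this is the property that makes SGD attractive for the large-scale CNN training workloads the paper surveys.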
