Abstract

Deep Neural Networks (DNNs) are among the leading classification algorithms. Deep Learning has achieved remarkable milestones such as GoogLeNet and AlphaGo, and has shown promising results in pattern recognition for images and text, language translation, sound recognition, and many other tasks. DNNs are widely accepted and employed for pattern matching and image recognition. These applications are possible because such networks emulate the functioning of the human brain, hence the name "Neural" Network. However, to produce competent results, the millions of neurons in these networks must be trained, which requires billions of operations. Training this many neurons with this many operations is a time-consuming affair, so the choice of network and its parameters plays an important role in both the accuracy of the trained network and the time taken for training. If the network is deep and contains a large number of neurons, the training time is considerably high, because training operates sequentially on batches of the dataset using the sequential back-propagation algorithm. Many hardware solutions exist to accelerate training, such as GPUs, FPGAs, and ASICs. However, the popularity of DNNs has increased demand on mobile and IoT platforms. These are resource-constrained devices, where power and size restrict the usage and implementation of deep neural networks. The proposed technique, DAPP, is simulated on the MNIST and CIFAR-10 datasets using SystemC, and has additionally been adapted for multi-core architectures. The design shows a reduction in training time of 38% for a 3-layer CNN and 92% for a 10-layer CNN, while maintaining network accuracy. The same generic methodology has been implemented for Vanilla RNN and LSTM networks, demonstrating improvements of 38% for Vanilla RNN and 40% for LSTM.
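The sequential bottleneck the abstract refers to can be illustrated with a minimal sketch: in standard mini-batch training, each batch's gradient update must complete before the next batch begins, so total training time grows with the number of batches and layers. The toy model below (a hypothetical one-parameter linear fit, not the paper's DAPP pipeline) shows only the sequential loop structure, under that assumption.

```python
# Minimal sketch of sequential mini-batch gradient-descent training.
# The model y = w * x is a hypothetical stand-in; the point is that
# batches are processed strictly one after another, which is the
# sequential dependency that DAPP-style approaches try to break.

def train_sequential(data, lr=0.05, epochs=200, batch_size=2):
    w = 0.0  # single trainable weight
    for _ in range(epochs):
        # batches run one after another: update i must finish before i+1
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # gradient of mean squared error over the batch
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad  # weight update completes before next batch
    return w

# data generated by y = 3x; sequential training recovers w close to 3
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = train_sequential(data)
```

In a deep network the same dependency repeats across layers during back-propagation, which is why training time scales so sharply with depth.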
