Abstract

For more than a decade, Deep Learning, a subset of machine learning, has been used for many applications such as forecasting, data visualization and classification. However, it consumes far more energy and requires longer training times than the human brain, and in most cases it is difficult to reach human-level performance. With recent technological improvements in neuroscience, and thanks to neuromorphic computing, we can now achieve higher classification efficacy for producing the desired outputs with considerably lower power consumption. The latest advancements in brain simulation technologies have enabled breakthroughs in analysing and modelling brain functions. Despite these advancements, this research remains underexplored due to a lack of coordination between neuroscientists, electronics engineers and computer scientists. Recent progress in Spiking Neural Networks (SNNs) has helped integrate these different fields under a single roof. Biological neurons inside the human brain communicate with each other through synapses; similarly, bio-inspired synapses in the neuromorphic model mimic biological synapses for computing. In this research, we have modelled a supervised Spiking Neural Network algorithm using Leaky Integrate-and-Fire (LIF), Izhikevich and rectified linear neurons, and tested its spike latency under different conditions. Furthermore, these SNN models are evaluated on the MNIST dataset to classify handwritten digits, and the results are compared with those of a Convolutional Neural Network (CNN).
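The abstract mentions testing the spike latency of LIF neurons. As a rough illustration of what such a test involves (not the authors' implementation, and using hypothetical parameter values), the sketch below simulates a single leaky integrate-and-fire neuron driven by a constant input current and reports its first-spike latency; stronger inputs should yield shorter latencies.

```python
# Minimal LIF spike-latency sketch, assuming illustrative (hypothetical) parameters.
def lif_spike_latency(i_input, v_rest=-65.0, v_thresh=-50.0,
                      tau_m=10.0, r_m=10.0, dt=0.1, t_max=100.0):
    """Return the time (ms) of the first spike of a LIF neuron driven by a
    constant current i_input, or None if it never reaches threshold."""
    v = v_rest
    t = 0.0
    while t < t_max:
        # Euler integration of tau_m * dV/dt = -(V - V_rest) + R_m * I
        v += (-(v - v_rest) + r_m * i_input) * (dt / tau_m)
        t += dt
        if v >= v_thresh:   # threshold crossing emits a spike
            return t        # first-spike latency in ms
    return None             # no spike within the simulation window

if __name__ == "__main__":
    # Larger input currents drive the membrane to threshold sooner.
    for current in (1.2, 1.6, 2.0, 3.0):
        print(f"I = {current}: latency = {lif_spike_latency(current)} ms")
```

In this sketch the weakest current never reaches threshold and returns None, while the stronger currents produce progressively shorter latencies, which is the kind of input-dependent timing behaviour the abstract refers to.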
