Abstract

The emerging field of neuromorphic computing offers a possible pathway toward the brain's computing performance and energy efficiency for cognitive applications such as pattern recognition, speech understanding, and natural language processing. In spiking neural networks (SNNs), information is encoded as sparsely distributed spike trains, enabling learning through the spike-timing-dependent plasticity (STDP) mechanism. Because inter-neuron communication is inherently asynchronous and sparse, SNNs can potentially achieve ultra-low power consumption and distributed learning. Although several inroads have been made in SNN implementations, computational models that lead to hardware implementations of large-scale SNNs with STDP capabilities are still lacking. In this work, we present a set of neuron models and neuron circuit motifs that form SNNs capable of fully distributed in-hardware STDP learning and spiking-based probabilistic inference. Functions such as efficient Bayesian inference and unsupervised Hebbian learning are demonstrated on the proposed SNN system design. A highly scalable and flexible digital hardware implementation of the neuron model is also presented. Experimental results on two applications, unsupervised feature extraction and inference-based sentence construction, demonstrate the proposed design's effectiveness in learning and inference.
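To make the STDP mechanism referenced above concrete, the sketch below shows a standard pair-based STDP weight update: a synapse is potentiated when the postsynaptic neuron fires shortly after the presynaptic one, and depressed for the reverse ordering. This is a generic illustration of the rule, not the paper's specific neuron model or circuit motif; the amplitudes and time constants are hypothetical placeholders.

```python
import math

# Hypothetical parameters; the paper's actual constants are not given here.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # exponential time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Pair-based STDP weight change for one pre/post spike pair.

    Positive dt (post fires after pre) potentiates the synapse;
    negative dt (post fires before pre) depresses it. The magnitude
    decays exponentially with the spike-timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms
# strengthens the synapse; the reverse ordering weakens it.
print(stdp_dw(10.0, 15.0))   # > 0 (potentiation)
print(stdp_dw(15.0, 10.0))   # < 0 (depression)
```

Because each update depends only on locally observable spike times, a rule of this form can be evaluated independently at every synapse, which is what makes the fully distributed in-hardware learning described in the abstract plausible.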
