Abstract

Classical Artificial Neural Networks (ANNs), though widely exploited for solving classification problems, do not perfectly model the information encoding process in the human brain because ANNs encode information using rate-based coding, whereas biological neurons in the brain are known to encode information using temporal coding. To mimic the biological method of encoding information, various Spiking Neural Network (SNN) models have been developed. However, some of these models are limited in the number of spikes they can learn from and do not perform well on some classification problems. To address some of the inherent challenges associated with SNNs, a multi-layer learning model for a multi-spiking network is proposed in this paper. The model exploits the temporal coding of spikes and the least-squares method to derive a weight update scheme. It also employs a spike locality concept to determine how the synaptic weights are to be adjusted at a particular spike time so as to minimize learning interference and thereby increase the number of spikes available for learning. The performance of the model is evaluated on benchmark classification datasets, and a correlation-based metric combined with a threshold concept is used to measure classification accuracy. The experimental results show that the proposed model achieves better classification accuracy than several state-of-the-art multi-layer SNN learning models.
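The abstract does not spell out the correlation metric or the threshold value, so the following Python sketch is only one plausible reading of the evaluation step: it assumes Gaussian-smoothed spike trains, a Pearson-style correlation score, and an arbitrary threshold of 0.7. The names smooth_spike_train, correlation_score, and classify are illustrative and not taken from the paper.

    # Hypothetical sketch of a correlation-based accuracy check.
    # Assumed details: Gaussian smoothing of spike trains, Pearson correlation,
    # and a threshold of 0.7; the paper's exact metric is not reproduced here.
    import numpy as np

    def smooth_spike_train(spike_times, t_max, dt=0.1, sigma=2.0):
        """Convert a list of spike times (ms) into a Gaussian-smoothed signal."""
        t = np.arange(0.0, t_max, dt)
        signal = np.zeros_like(t)
        for ts in spike_times:
            signal += np.exp(-0.5 * ((t - ts) / sigma) ** 2)
        return signal

    def correlation_score(actual_spikes, desired_spikes, t_max):
        """Correlation between smoothed actual and desired spike trains."""
        a = smooth_spike_train(actual_spikes, t_max)
        d = smooth_spike_train(desired_spikes, t_max)
        if a.std() == 0 or d.std() == 0:      # an empty train gives no correlation
            return 0.0
        return float(np.corrcoef(a, d)[0, 1])

    def classify(actual_spikes, desired_trains, t_max, threshold=0.7):
        """Assign the class whose desired train correlates best with the output,
        provided the best score exceeds the threshold; otherwise reject."""
        scores = [correlation_score(actual_spikes, d, t_max) for d in desired_trains]
        best = int(np.argmax(scores))
        return best if scores[best] >= threshold else None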

Highlights

  • Artificial Neural Networks (ANNs) are among the main learning algorithms in machine learning; in supervised learning, their variants such as Spiking Neural Networks (SNN) [1], Deep Neural Networks (DNN) [2] with their various sub-divisions, and the Growing and Pruning Learning algorithm for Deep Neural Networks (GP-DLNN) [3] have been used to obtain state-of-the-art results in various fields of application

  • The main aim of the proposed weight update scheme is to establish a system of equations in terms of an output spike time t_d, given a set of input spike times t_i, i = {1, 2, 3, ..., I}, where I is the number of input neurons with spikes within the time locality of t_d, and the corresponding synaptic weights w_ji connecting to a given output neuron j, j = {1, 2, ..., J}, where J is the number of output neurons in the network (see the sketch after this list)

  • To assess the influence that the number of desired output spikes and class labels have on the classification performance of the proposed model, the mean training and testing accuracies on seven benchmark datasets trained with desired output spike train sizes, t_dn, of 1, 2, 3, 4, and 5 per class label are presented
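The highlight on the weight update scheme suggests a linear system linking each desired output spike time t_d, the input spike times inside its locality window, and the synaptic weights w_ji. The sketch below is a minimal illustration under assumed details: an SRM-style alpha kernel, a fixed firing threshold theta, and a 10 ms locality window; the names alpha_kernel, build_system, window, and theta are illustrative and not taken from the paper.

    # Minimal sketch of a least-squares weight update of the kind described above.
    # Assumptions: alpha-shaped spike-response kernel, fixed threshold theta,
    # and a locality window around each desired spike time t_d.
    import numpy as np

    def alpha_kernel(s, tau=5.0):
        """Spike-response (alpha) kernel: contribution of an input spike s ms ago."""
        return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

    def build_system(desired_spikes, input_spikes, window=10.0, theta=1.0):
        """For each desired output spike time t_d, require the weighted sum of
        kernel responses from input spikes inside the locality window to equal theta."""
        A = np.zeros((len(desired_spikes), len(input_spikes)))
        for r, t_d in enumerate(desired_spikes):
            for c, t_i in enumerate(input_spikes):
                if 0.0 < t_d - t_i <= window:      # spike locality constraint
                    A[r, c] = alpha_kernel(t_d - t_i)
        b = np.full(len(desired_spikes), theta)
        return A, b

    # Solve A w = b in the least-squares sense to obtain the weights w_ji that
    # drive the output neuron j to fire at the desired times.
    input_spikes = [1.0, 3.0, 6.0, 12.0]    # example input spike times (ms)
    desired_spikes = [8.0, 16.0]            # example desired output spike times (ms)
    A, b = build_system(desired_spikes, input_spikes)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)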


Introduction

Artificial Neural Networks (ANNs) are among the main learning algorithms in machine learning; in supervised learning, their variants such as Spiking Neural Networks (SNN) [1], Deep Neural Networks (DNN) [2] with their various sub-divisions, and the Growing and Pruning Learning algorithm for Deep Neural Networks (GP-DLNN) [3] have been used to obtain state-of-the-art results in various fields of application. ANN learning techniques formulated using temporal coding of information are referred to as Spiking Neural Networks (SNN) and classified as the third generation of ANN [1]. Unlike rate-based learning methods, which largely use sigmoid and Radial Basis functions, neural activities in this generation of ANN are modelled using more biologically plausible neural models such as the Hodgkin-Huxley (HH) model [9], Integrate-and-Fire (IF) models [4], [10], Izhikevich's model [4], [11], and the Spike Response Model (SRM) [4], [10]. Although supervised learning in the second generation of ANN using rate-based encoding is well established, data presentation in SNN using the temporal coding scheme makes it impossible to directly apply supervised rate-based learning methods to train SNN. As a result, and taking into consideration the prospects
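Of the neuron models named above, the Integrate-and-Fire family is the simplest to illustrate. The following sketch simulates a leaky integrate-and-fire neuron with arbitrary example parameters; it is a generic textbook formulation, not the specific neuron model or parameterization used in the paper.

    # Illustrative leaky integrate-and-fire (LIF) simulation; parameter values
    # are arbitrary examples, not the paper's settings.
    import numpy as np

    def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0):
        """Integrate the membrane potential and record spike times (ms)."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Leaky integration of the input current
            v += dt * (-(v - v_rest) + i_in) / tau_m
            if v >= v_thresh:                 # threshold crossing -> spike
                spike_times.append(step * dt)
                v = v_reset                   # reset after the spike
        return spike_times

    # A constant supra-threshold current yields a regular spike train.
    current = np.full(1000, 1.5)              # 100 ms of input at dt = 0.1 ms
    print(simulate_lif(current))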

