Abstract

We elucidate the practical implementation of Spiking Neural Networks (SNNs) as local ensembles of classifiers. The synaptic time constant τs is used as a learning parameter representing the variations learned from a set of training data at the classifier level. This classifier uses a coincidence detection (CD) strategy trained in a supervised manner using a novel supervised learning method called τs Prediction, which adjusts the precise timing of output spikes toward the desired spike timing through iterative adaptation of τs. This paper also discusses the approximation of spike timing in the Spike Response Model (SRM) for the purpose of coincidence detection. This approximation significantly speeds up the whole process of learning and classification. Performance evaluations with face datasets such as AR, FERET, JAFFE, and CK+ show that the proposed method delivers better face classification performance than a network trained with Supervised Spike-Timing-Dependent Plasticity (STDP). We also found that the proposed method delivers better classification accuracy than k nearest neighbor, ensembles of kNN, and Support Vector Machines. Evaluation of several types of spike coding also reveals that latency coding delivers the best result for face classification as well as for the classification of other multivariate datasets.
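
To make the coincidence detection mechanism concrete, here is a minimal sketch of an SRM-style neuron in Python. It assumes a standard alpha-shaped postsynaptic potential kernel that peaks at t = τs; the function names, threshold, and time grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def srm_kernel(t, tau_s):
    """Alpha-shaped postsynaptic potential (PSP) of the Spike Response
    Model: zero for t <= 0, rising to a unit peak at t = tau_s."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau_s) * np.exp(1.0 - t / tau_s), 0.0)

def first_spike_time(input_spike_times, weights, tau_s,
                     threshold=2.5, t_max=50.0, dt=0.1):
    """Earliest time at which the weighted sum of PSPs crosses the
    threshold, i.e. when enough input spikes coincide; None if the
    neuron never fires within t_max."""
    for t in np.arange(0.0, t_max, dt):
        u = sum(w * srm_kernel(t - t_i, tau_s)
                for w, t_i in zip(weights, input_spike_times))
        if u >= threshold:
            return t
    return None

# Tightly clustered inputs fire the neuron; dispersed inputs do not.
print(first_spike_time([10.0, 10.5, 11.0], [1.0, 1.0, 1.0], tau_s=3.0))
print(first_spike_time([5.0, 20.0, 35.0], [1.0, 1.0, 1.0], tau_s=3.0))
```

With the threshold set above any single PSP amplitude, the neuron responds only when several spikes arrive within roughly one τs of each other, which is the coincidence detection behavior the CD classifier exploits.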

Highlights

  • Donald Hebb first proposed that if two neurons effectively cooperate in an activity, the efficacy of the synapse between them is strengthened

  • Using Principal Component Analysis (PCA) and Gabor features, we assess the performance of the proposed coincidence detection (CD) classifier against several types of classifiers, namely, the k nearest neighbor (kNN) classifier, ensembles of kNN classifiers [30], the Support Vector Machine (SVM), and ensembles of SVM classifiers

  • For the AR, JAFFE, and FERET datasets, the trained CD classifier acquired in Section 5.2 was used, while, for the CK+ dataset, 123 images from the beginning of the first sequence are used as the gallery and 577 peak images from each sequence are used as probes in training

Summary

Introduction

Donald Hebb first proposed that if two neurons effectively cooperate in an activity, the efficacy of the synapse between them is strengthened. Since the cooperation between these neurons is more effective when it occurs within a specific window of time, the idea of "Hebbian plasticity" can be considered a form of coincidence detection, or neuronal synchronization, between the inputs of the two neurons. We discuss two ways of learning and classification by coincidence detection, namely, (1) learning by weight adaptation in the form of Supervised STDP and (2) learning by synaptic time constant adaptation in the form of a novel approach called τs Prediction. These two strategies are both based on Hebbian plasticity, but their implementations are quite different. Supervised learning rules are used to form the synaptic weights or synaptic time constants that represent the training data, and the trained network is then used for classification.
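
The contrast between the two strategies can be sketched with toy update rules. The code below is only a shape-level illustration under the assumptions stated in the comments, not the paper's exact equations: the supervised-STDP step moves a synaptic weight according to the timing relation between an input spike and the desired output spike, while the τs Prediction step uses the fact that, for an alpha-shaped PSP, a larger τs delays the potential's peak and hence the output spike.

```python
import math

def supervised_stdp_step(w, t_in, t_desired, lr=0.01, tau_window=5.0):
    """Illustrative supervised STDP: potentiate when the input spike
    precedes the desired output spike, depress otherwise, with the
    magnitude decaying exponentially in the timing difference."""
    dt = t_desired - t_in
    dw = lr * math.exp(-abs(dt) / tau_window)
    return w + dw if dt > 0 else w - dw

def tau_s_prediction_step(tau_s, t_actual, t_desired, lr=0.05, tau_min=0.1):
    """Illustrative tau_s Prediction: the neuron fires too late when
    t_actual > t_desired, so shrink tau_s to pull the PSP peak (and the
    output spike) earlier; grow tau_s in the opposite case."""
    tau_s -= lr * (t_actual - t_desired)
    return max(tau_s, tau_min)  # keep the time constant positive
```

Iterating the second rule moves the actual output spike time toward the desired one without touching the weights, which is what distinguishes τs Prediction from weight-based learning.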
