Abstract

In this paper we present a Competitive Rate-Based Algorithm (CRBA) that approximates the operation of a Competitive Spiking Neural Network (CSNN). CRBA models the competition between neurons during a sample presentation, which can be reduced to ranking the neurons by a dot-product operation and applying a discrete Expectation-Maximization algorithm; the latter is equivalent to the spike time-dependent plasticity rule. CRBA's performance is compared with that of CSNN on the MNIST and Fashion-MNIST datasets. The results show that CRBA performs on par with CSNN while using three orders of magnitude less computational time. Importantly, we show that the weights and firing thresholds learned by CRBA can be used to initialize CSNN's parameters, which results in its much more efficient operation.
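
As a concrete illustration, the competition during one sample presentation collapses into an argmax over dot products, followed by a winner-only weight update. The sketch below is a minimal rate-based rendering of that idea, assuming a flattened input vector x, a weight matrix W with one row per competing neuron, and an illustrative learning rate; the function and variable names are ours, not the paper's.

    import numpy as np

    def crba_step(x, W, thresholds, lr=0.01):
        """One sample presentation: rank neurons by their dot product with
        the input, let the top-ranked neuron win, and pull its weights
        toward the input (a rate-based analogue of the STDP update)."""
        activations = W @ x - thresholds      # competition via dot products
        winner = int(np.argmax(activations))  # ranking reduces to an argmax
        W[winner] += lr * (x - W[winner])     # winner-only, EM-style update
        return winner

Ranking by dot product is what removes the need to simulate spike timing: the whole presentation reduces to one matrix-vector product and a single local update.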

Highlights

  • A Competitive Spiking Neural Network (CSNN) is a two-layer feedforward spiking network with lateral inhibitory connections (Querlioz et al., 2011; Diehl and Cook, 2015; Cachi et al., 2020) that uses spiking neurons with a local Konorski/Hebb learning rule to implement a dynamic temporal network exhibiting properties often missing in deep learning models

  • Two main problems limit the use of CSNN: its very slow training and testing times, and the difficulty of making sense of its dynamic mechanisms. The latter problem arises because, while even a single dynamic mechanism is not easy to analyze, CSNN uses three of them in its operation: the spike generating process, the adaptive firing threshold, and the spike time-dependent plasticity (STDP) learning rule (see the single-neuron sketch after this list)

  • The performance of the Competitive Rate-Based Algorithm (CRBA) is tested on the MNIST and Fashion-MNIST datasets
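
The three dynamic mechanisms named in the second highlight interact inside every neuron. Below is a minimal single-neuron, single-synapse sketch of that interaction, assuming a leaky integrate-and-fire neuron; all constants (time scales, threshold increment, learning rate) are placeholder values chosen for illustration, not values from the paper.

    import numpy as np

    def simulate_neuron(pre_spikes, w=0.5, dt=1.0, tau_v=100.0, tau_tr=20.0,
                        tau_theta=1e4, theta0=1.0, theta_plus=0.05, a=0.01):
        """Single neuron, single synapse; returns the output spike times
        and the learned weight."""
        v, theta, trace = 0.0, theta0, 0.0
        out = []
        for t, s in enumerate(pre_spikes):            # s is a 0/1 input spike
            trace = trace * (1.0 - dt / tau_tr) + s   # presynaptic STDP trace
            v = v * (1.0 - dt / tau_v) + w * s        # 1) leaky integration
            if v >= theta:                            #    spike generation
                out.append(t)
                v = 0.0                               # reset after a spike
                theta += theta_plus                   # 2) threshold adapts up
                w += a * trace                        # 3) STDP: potentiate in
                                                      #    proportion to trace
            theta += dt * (theta0 - theta) / tau_theta  # threshold decays back
        return out, w

Even in this stripped-down form, the three mechanisms are coupled: the threshold shapes when spikes occur, and spikes in turn drive both the threshold and the STDP update, which is what makes the full dynamics hard to analyze.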


Introduction

A Competitive Spiking Neural Network (CSNN) is a two-layer feedforward spiking network with lateral inhibitory connections (Querlioz et al., 2011; Diehl and Cook, 2015; Cachi et al., 2020) that uses spiking neurons with a local Konorski/Hebb learning rule to implement a dynamic temporal network exhibiting properties often missing in deep learning models. These properties are pattern selectivity (spiking neurons learn to detect specific input patterns) (Masquelier and Thorpe, 2007; Nessler et al., 2009; Lobov et al., 2020), short-/long-term memory (spiking neurons use self-regulatory mechanisms that process information at different time scales) (Ermentrout, 1998; Brette and Gerstner, 2005; Pfister and Gerstner, 2006; Zenke et al., 2015), synaptic plasticity (based on local learning, first observed by Konorski and by Hebb) (Konorski, 1948; Hebb, 1949), modularity (spiking neurons operate and learn independently) (Zylberberg et al., 2011; Diehl and Cook, 2015), adaptability, and continuous learning (Brette and Gerstner, 2005; Wysoski et al., 2008).
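
To make the two-layer topology concrete, the following is a schematic forward pass for such a network, assuming the lateral inhibitory connections are approximated by a hard winner-take-all over the feedforward drive; the layer sizes and weight initialization are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_exc = 784, 400                 # e.g., MNIST pixels -> 400 neurons
    W = rng.random((n_exc, n_in)) * 0.3    # feedforward excitatory weights

    def forward(x):
        """Feedforward drive followed by lateral inhibition, here reduced
        to a hard winner-take-all for illustration."""
        drive = W @ x                      # drive to each excitatory neuron
        out = np.zeros(n_exc)
        out[np.argmax(drive)] = 1.0        # winner suppresses all others
        return out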

