Abstract

Spiking neural networks (SNNs) are believed to offer high computational and energy efficiency for real-time solutions on dedicated neurochip hardware. However, learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency to back-propagation techniques and capable of unsupervised training are still lacking. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and we put this principle at the basis of a new SNN learning algorithm. On this basis, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. We demonstrate that this SNN can be trained without a teacher after a short supervised initialization of the weights by the same algorithm. We also show that the neurons become grouped into families of hierarchical structures corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. A comparison of the learning properties of the proposed algorithm with those of the Sparse Distributed Representation approach shows similarity in coding, as well as some advantages of the former. We believe the basic principle of the proposed algorithm is practically applicable to the construction of far more complicated SNNs solving diverse tasks. We refer to this new approach as “Family-Engaged Execution and Learning of Induced Neuron Groups”, or FEELING.
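
The abstract does not spell out the update rule, but a minimal sketch of the stated principle (leaky integrate-and-fire neurons competing through intralayer inhibition, with each firing neuron strengthening the inputs that drove it) might look as follows. All names, constants, and the specific Hebbian-style update are illustrative assumptions, not the authors' FEELING implementation.

import numpy as np

# Hypothetical sketch: one competitive spiking layer in which each
# neuron "maximizes its activity" by pulling its weights toward the
# input patterns that make it fire, while inhibiting its rivals.
rng = np.random.default_rng(0)
n_in, n_out = 784, 100                       # e.g., MNIST pixels -> one layer
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))  # feed-forward weights
v = np.zeros(n_out)                          # membrane potentials
TAU, V_TH, ETA, INHIB = 20.0, 1.0, 0.01, 0.05  # illustrative constants

def step(x_spikes, dt=1.0):
    """One simulation step: integrate, fire, inhibit, learn."""
    global v, W
    v += dt / TAU * (-v) + W @ x_spikes      # leaky integration of input spikes
    fired = v >= V_TH
    if fired.any():
        v[~fired] -= INHIB * fired.sum()     # intralayer inhibition of losers
        v[fired] = 0.0                       # reset the neurons that fired
        # Activity-maximization update: firing neurons move their weights
        # toward the current input pattern (Hebbian-like, bounded).
        W[fired] += ETA * (x_spikes - W[fired])
        W[fired] = np.clip(W[fired], 0.0, 1.0)
    return fired

# Usage: feed a Poisson-coded input frame.
x = (rng.random(n_in) < 0.1).astype(float)
spikes = step(x)

The lateral-inhibition term is what makes the neurons compete: frequent winners specialize on the patterns they respond to, which is consistent with (though far simpler than) the grouping behavior the abstract describes.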


Introduction

Compared to formal neural networks, spiking neural networks (SNNs) have some remarkable advantages, such as the ability to model dynamical modes of network operation and to compute in continuous real time (which is the realm of the biological prototype), the ability to test and use different bio-inspired local training rules (Hebbian, Spike-Timing-Dependent Plasticity (STDP), metabolic, homeostatic, etc.), significantly reduced energy consumption when realized in dedicated multi-core hardware (neurochips) (Merolla et al., 2014), and others. The traditional ways of setting SNN parameters are (i) transferring (adapting) the parameter values of a formal neural network to an SNN with the same (or similar) architecture (Diehl et al., 2015), (ii) adapting learning algorithms suitable for formal neural networks, such as back-propagation, to SNNs (Lee et al., 2016), and (iii) training an SNN with a biologically plausible architecture, e.g., with competition between neurons in Winner-Takes-All (WTA) networks, using bio-inspired local inter-neuron rules such as STDP (Diehl and Cook, 2015). The first approach is appealing due to the wealth of experience accumulated in training formal networks with back-propagation techniques, which minimize the value of some loss function under various weight update rules (SGD: Bottou, 1998; Nesterov momentum: Sutskever et al., 2013; Adagrad: Duchi et al., 2011; Adadelta: Zeiler, 2012; Adam: Kingma and Ba, 2014). However, the accuracy of the problem solution can be reduced, and a special technique, usually specific to the task and/or architecture used, must be applied (Diehl et al., 2015).
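
For reference, the STDP rule mentioned above is local to each synapse: the weight change depends only on the relative timing of pre- and postsynaptic spikes. A common pair-based form is sketched below; the amplitudes and time constants are typical illustrative values, not parameters from the paper.

import numpy as np

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one (dt > 0), depress otherwise. Values are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants, ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: causal pairing, potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)   # anti-causal: depress

# e.g., stdp_dw(10.0, 15.0) > 0 (potentiation); stdp_dw(15.0, 10.0) < 0

Because the rule uses only locally available spike times, it is directly compatible with neuromorphic hardware, which is one reason approach (iii) is attractive despite the maturity of back-propagation tooling.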

