Abstract

Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate-and-Fire (I&F) neurons, one that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST handwritten digit dataset, and by testing it in recognition, generation, and cue-integration tasks. Our results contribute to a machine-learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
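For reference, the discrete, exact-arithmetic baseline that the event-driven scheme replaces is standard CD-1 on a binary RBM. The following minimal NumPy sketch (our own illustration; all names are ours) shows the two alternating phases and the contrasted correlations that the recurrent spiking dynamics and STDP emulate in continuous time:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_v, b_h, v_data, lr=0.01, rng=np.random):
    """One discrete CD-1 step on a binary RBM. v_data: (batch, n_v)."""
    # Positive phase: hidden units driven by the clamped data.
    h_prob = sigmoid(v_data @ W + b_h)
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    v_recon = sigmoid(h_samp @ W.T + b_v)
    h_recon = sigmoid(v_recon @ W + b_h)
    # Contrast data-driven and reconstruction-driven correlations.
    W += lr * (v_data.T @ h_prob - v_recon.T @ h_recon) / len(v_data)
    b_v += lr * (v_data - v_recon).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_v, b_h
```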

Highlights

  • Machine learning algorithms based on stochastic neural network models such as Restricted Boltzmann Machines (RBMs) and deep networks are currently the state-of-the-art in several practical tasks (Hinton and Salakhutdinov, 2006; Bengio, 2009).

  • Neuromorphic systems are promising alternatives for large-scale implementations of RBMs and deep networks, but the common procedure used to train such networks, Contrastive Divergence (CD), involves iterative, discrete-time updates that do not map straightforwardly onto a neural substrate. We solve this problem in the context of the RBM with a spiking neural network model that uses the recurrent network dynamics to compute these updates in a continuous-time fashion.

  • We argue that the recurrent activity coupled with Spike-Timing-Dependent Plasticity (STDP) dynamics implements an event-driven variant of CD, as sketched below.

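This event-driven variant can be caricatured in a few lines: a Hebbian, STDP-like update whose sign is set by a global modulation signal that is positive while the visible layer is clamped to data and negative while the network runs freely. The sketch below is our own simplified illustration under these assumptions; the spike trains are random placeholders rather than actual I&F dynamics, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = 0.01 * rng.standard_normal((n_v, n_h))

def stdp_cd_update(W, s_v, s_h, g, lr=1e-3):
    """Hebbian coincidence update with global sign g(t): +1 in the
    data-clamped phase, -1 in the free-running phase.  Averaged over
    both phases, this contrasts data- and model-driven spike
    coincidences, mirroring the two terms of the CD update."""
    return W + lr * g * np.outer(s_v.astype(float), s_h.astype(float))

# Data-clamped phase (g = +1): visible spikes follow the input.
for _ in range(100):
    W = stdp_cd_update(W, rng.random(n_v) < 0.5, rng.random(n_h) < 0.2, +1)

# Free-running phase (g = -1): spikes generated by the network itself
# (here just placeholders for the recurrent sampling dynamics).
for _ in range(100):
    W = stdp_cd_update(W, rng.random(n_v) < 0.3, rng.random(n_h) < 0.2, -1)
```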

Introduction

Machine learning algorithms based on stochastic neural network models such as RBMs and deep networks are currently the state-of-the-art in several practical tasks (Hinton and Salakhutdinov, 2006; Bengio, 2009). The training of these models requires significant computational resources, and is often carried out using power-hungry hardware such as large clusters (Le et al., 2011) or graphics processing units (Bergstra et al., 2010). Their implementation in dedicated hardware platforms can therefore be very appealing from the perspectives of power dissipation and scalability. The communication between neuromorphic components is often mediated using asynchronous address-events (Deiss et al., 1998), enabling them to be interfaced with event-based sensors (Liu and Delbruck, 2010; Neftci et al., 2013; O'Connor et al., 2013) for embedded applications, and to be implemented in a very scalable fashion (Silver et al., 2007; Joshi et al., 2010; Schemmel et al., 2010).
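As a concrete illustration of the address-event communication mentioned above: address-event representation (AER) encodes activity as an asynchronous stream of (neuron address, timestamp) tuples rather than dense activity frames. The snippet below is a hypothetical, software-level sketch of consuming such a stream; the field names and values are our assumptions, not any specific hardware interface:

```python
from collections import namedtuple

# Hypothetical address-event: which unit fired, and when (microseconds).
AddressEvent = namedtuple("AddressEvent", ["address", "timestamp_us"])

def spike_counts(events, n_neurons):
    """Accumulate per-neuron spike counts from an asynchronous event
    stream; no global clock or dense activity matrix is needed."""
    counts = [0] * n_neurons
    for ev in events:
        counts[ev.address] += 1
    return counts

stream = [AddressEvent(2, 10), AddressEvent(0, 15), AddressEvent(2, 42)]
print(spike_counts(stream, n_neurons=4))   # -> [1, 0, 2, 0]
```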

