Abstract

We present a novel strategy for unsupervised feature learning in image applications, inspired by the Spike-Timing-Dependent Plasticity (STDP) biological learning rule. We show the equivalence between rank-order coding Leaky-Integrate-and-Fire (LIF) neurons and ReLU artificial neurons when applied to non-temporal data. Applied to images, rank-order coding allows us to perform a full network simulation with a single feed-forward pass on GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework that selects, along the spatial dimensions, the most relevant patches to learn from, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods.
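
As a rough illustration of the pipeline described above, the sketch below combines its three ingredients in plain NumPy: a ReLU forward pass standing in for the LIF responses, a spatial WTA selection, and a binary STDP update followed by feature-wise normalization. This is a minimal reading of the abstract, not the authors' implementation; every name, shape, and the exact form of the update rule (moving each winning feature's weights toward the binarized patch) is an illustrative assumption.

    # Illustrative sketch only -- names, shapes, and the update rule are
    # assumptions made for exposition, not the authors' reference code.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(patches, weights):
        """ReLU response of each feature to each patch.
        patches: (n_patches, patch_dim), weights: (n_features, patch_dim)."""
        return np.maximum(0.0, patches @ weights.T)

    def binary_stdp_step(patches, weights, lr=0.05, threshold=0.5):
        """One batch update: spatial WTA picks, for each feature, the patch
        it responds to most strongly; a binary STDP-like rule then moves
        that feature's weights toward the patch's binarized (spike /
        no-spike) pattern; feature-wise normalization acts as homeostasis."""
        activations = forward(patches, weights)
        winners = np.argmax(activations, axis=0)  # WTA over the spatial dim
        for f, p in enumerate(winners):
            pre = (patches[p] > threshold).astype(float)  # binary pre-activity
            weights[f] += lr * (pre - weights[f])         # binary STDP update
        # homeostatic feature-wise normalization keeps features comparable
        weights /= np.linalg.norm(weights, axis=1, keepdims=True) + 1e-8
        return weights

    # Toy usage: 256 random flattened 5x5 patches, 16 features.
    patches = rng.random((256, 25))
    weights = rng.random((16, 25))
    for _ in range(10):
        weights = binary_stdp_step(patches, weights)

In the full method the forward pass is a 2D convolution over feature maps rather than a flat matrix product; flattened patches are used here only to keep the sketch short.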

Highlights

  • Unsupervised pre-training methods help to overcome difficulties encountered with current neural-network-based supervised algorithms

  • We show the equivalence between Leaky-Integrate-and-Fire (LIF) neurons with constant input at infinity and artificial neurons with the rectifier activation function (ReLU); a worked sketch of this equivalence follows this list. The demonstration generalizes to local receptive fields with weight sharing, and we propose to replace the timestep computation of LIF neurons with common GPU-optimized deep learning routines such as 2D convolutions and ReLU

  • The proposed approach trains lightweight convolutional architectures based on LIF neurons, which can be used as feature extractors prior to a supervised classification method
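
To make the first equivalence claim concrete, here is a short derivation, a sketch under standard LIF assumptions (membrane time constant \tau, input resistance R, resting potential 0, and no spike reset, since only the asymptotic regime matters); the paper's exact formulation may differ. With a constant input current I,

    \tau \frac{dV}{dt} = -V(t) + R I, \qquad V(0) = 0
    \quad\Longrightarrow\quad V(t) = R I \left(1 - e^{-t/\tau}\right)
    \quad\Longrightarrow\quad V(\infty) = R I

Taking the neuron's output to be its above-threshold asymptotic potential, with firing threshold \theta,

    y = \max\left(0,\; R I - \theta\right) = \mathrm{ReLU}(R I - \theta)

i.e. a ReLU unit whose bias is the negated firing threshold. When I is a weighted sum over a local receptive field with shared weights, this is a convolutional ReLU unit, which is why the timestep simulation can be replaced by a 2D convolution followed by ReLU.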


Introduction

Unsupervised pre-training methods help to overcome difficulties encountered with current neural-network-based supervised algorithms. Such difficulties include the requirement for a large amount of labeled data, vanishing gradients during back-propagation, and the hyper-parameter tuning phase. Unsupervised learning methods have recently regained interest due to new methods such as Generative Adversarial Networks (Goodfellow et al., 2014; Salimans et al., 2016), Ladder networks (Rasmus et al., 2015), and Variational Autoencoders (Kingma and Welling, 2013). These methods reach state-of-the-art performance, either using top-layer features as inputs for a classifier or within a semi-supervised learning framework. Several works address these difficulties by reducing the resolution of weights, activations, and gradients during the inference and learning phases (Stromatias et al., 2015; Esser et al., 2016; Deng et al., 2017).

