Abstract

Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state of the art. Finally, we present an analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain.

Highlights

  • Spiking Neural Networks (SNNs) are a significant shift from the standard way of operation of Artificial Neural Networks (Farabet et al., 2012)

  • We propose a weight-normalization technique that balances the firing threshold of each layer by considering the actual operation of the SNN in the loop during the Analog Neural Network (ANN)-to-SNN conversion process

  • The Residual Network (ResNet)-20 configuration outlined in He et al. (2016a) is used for the CIFAR-10 dataset, while ResNet-34 is used for experiments on the ImageNet dataset
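The in-the-loop threshold balancing mentioned in the highlights can be sketched roughly as follows. This is a hedged illustration only: the function `balance_thresholds`, the layer-by-layer calibration order, and the choice of the maximum weighted spike input actually observed in spiking mode as each layer's threshold are assumptions made for exposition, not the paper's exact algorithm.

```python
import numpy as np

def balance_thresholds(layer_weights, input_spike_trains):
    """Illustrative sketch of SNN-in-the-loop threshold balancing.

    Layers are calibrated sequentially: each layer's firing threshold is
    set to the maximum weighted spike input it actually receives while the
    already-calibrated earlier layers run in spiking mode (rather than
    using the ANN's analog activations).
    """
    thresholds = []
    spikes = np.asarray(input_spike_trains, dtype=float)  # (timesteps, dim)
    for W in layer_weights:
        # Forward the current spike trains through this layer's weights.
        weighted = spikes @ W.T                 # per-timestep weighted input
        theta = float(weighted.max())           # balance threshold to peak input
        thresholds.append(theta)
        # Generate this layer's output spikes with the balanced threshold,
        # using integrate-and-fire with reset-by-subtraction.
        v = np.zeros(W.shape[0])
        out = np.zeros((spikes.shape[0], W.shape[0]))
        for t in range(spikes.shape[0]):
            v += weighted[t]
            fired = v >= theta
            out[t] = fired.astype(float)
            v[fired] -= theta
        spikes = out                            # feed spikes to the next layer
    return thresholds
```

Calibrating on spike trains rather than analog activations is the point of the "in the loop" phrasing: each layer sees the (lossy, discretized) spikes the previous spiking layers actually emit.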


Introduction

Spiking Neural Networks (SNNs) are a significant shift from the standard way of operation of Artificial Neural Networks (Farabet et al., 2012). Most of the success of deep learning models of neural networks in complex pattern recognition tasks is based on neural units that receive, process and transmit analog information. Such Analog Neural Networks (ANNs) disregard the fact that the biological neurons in the brain (the computing framework after which they are inspired) process binary spike-based information. Driven by this observation, the past few years have witnessed significant progress in the modeling and formulation of training schemes for SNNs as a new computing paradigm that can potentially replace ANNs as the next generation of neural networks. This has necessitated a rethinking of training mechanisms for SNNs.
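The binary spike-based processing described above can be illustrated with a minimal integrate-and-fire neuron. The function name, the example weights, and the reset-by-subtraction behavior are illustrative assumptions for this sketch, not the paper's exact neuron model.

```python
import numpy as np

def integrate_and_fire(input_spikes, weights, threshold=1.0):
    """Minimal integrate-and-fire neuron: accumulates weighted binary
    input spikes into a membrane potential and emits a spike (1) whenever
    the potential crosses the threshold, resetting by subtraction."""
    v = 0.0
    out_spikes = []
    for spike_vec in input_spikes:       # one binary input vector per timestep
        v += float(np.dot(weights, spike_vec))
        if v >= threshold:
            out_spikes.append(1)
            v -= threshold               # reset-by-subtraction
        else:
            out_spikes.append(0)
    return out_spikes

# Two presynaptic inputs firing over 4 timesteps (hypothetical weights).
spikes = [np.array([1, 0]), np.array([1, 1]), np.array([0, 1]), np.array([1, 0])]
w = np.array([0.6, 0.5])
print(integrate_and_fire(spikes, w))     # → [0, 1, 1, 0]
```

Unlike an analog unit, which outputs a real-valued activation every forward pass, this neuron communicates only sparse binary events over time, which is what makes event-driven neuromorphic hardware attractive.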
