Abstract

Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest, since SNNs are hardware-friendly and energy-efficient. Unlike their non-spiking counterparts, most existing SNN simulation frameworks are not efficient enough for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source high-speed simulation framework based on PyTorch. This framework simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, and other rules can be added easily. Apart from the aforementioned properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and carried out entirely by PyTorch functions, which in turn brings the ability of just-in-time optimization for running on CPU, GPU, or multi-GPU platforms.
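
The following is a minimal sketch, in plain PyTorch rather than SpykeTorch's actual API, of the tensor-based idea described above: spikes are kept in a binary tensor with a leading time dimension and accumulated over time, so one conv2d call produces the membrane potentials of all time steps at once. The helper names (`cumulative_spike_wave`, `potentials`), tensor shapes, and threshold value are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch in plain PyTorch (not the SpykeTorch API): spike trains
# are stored as binary tensors of shape [time_steps, channels, height, width].
def cumulative_spike_wave(spikes):
    """Accumulate spikes over time: entry [t] holds all spikes emitted up to step t."""
    return torch.cumsum(spikes, dim=0).clamp_(max=1)

def potentials(spike_wave, weights):
    """One conv2d call over the time dimension (treated as a batch) yields the
    membrane potentials of every time step at once."""
    return F.conv2d(spike_wave, weights)

# Toy usage: 4 time steps, 2 input channels, 8x8 input, 3 output feature maps.
T, C, H, W = 4, 2, 8, 8
latencies = torch.randint(0, T, (1, C, H, W))       # one spike time per input neuron
spikes = (torch.arange(T).view(T, 1, 1, 1) == latencies).float()
wave = cumulative_spike_wave(spikes)                # [4, 2, 8, 8]
pot = potentials(wave, torch.rand(3, 2, 5, 5))      # [4, 3, 4, 4]
out_spikes = (pot >= 10.0).float()                  # simple fixed-threshold firing
```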

Highlights

  • For many years, scientists have been trying to bring human-like vision into machines and artificial intelligence (AI)

  • With advanced techniques based on deep convolutional neural networks (DCNNs) (Rawat and Wang, 2017; Gu et al., 2018), artificial vision has never been closer to human vision

  • Spiking neural networks (SNNs) are energy-efficient for hardware implementation, because spikes bring the opportunity of using event-based hardware as well as simple energy-efficient accumulators instead of the complex energy-hungry multiply-accumulators that are usually employed in DCNN hardware (Furber, 2016; Davies et al., 2018)

Summary

INTRODUCTION

For many years, scientists have been trying to bring human-like vision into machines and artificial intelligence (AI). Information coding using the earliest spike time, proposed based on the rapid visual processing observed in the brain (Thorpe et al., 1996), needs only a single spike per neuron, making such networks extremely fast and more energy-efficient. These features, together with the hardware-friendliness of STDP, turn this type of SNN into the best option for hardware implementation and online on-chip training (Yousefzadeh et al., 2017). Several recent studies have shown the excellence of this type of SNN in visual object recognition (Kheradpisheh et al., 2018; Mostafa, 2018; Mozafari et al., 2018; Mozafari et al., 2019; Falez et al., 2019; Vaila et al., 2019). With simulation frameworks such as TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2017), developing and running DCNNs is fast and efficient.
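
As a hedged illustration of the earliest-spike-time (time-to-first-spike) coding mentioned above, the sketch below converts pixel intensities to spike latencies so that stronger pixels fire earlier and each neuron fires at most once. The helper name `intensity_to_latency` and the simple binning scheme are assumptions for illustration, not necessarily the exact ranking procedure used in the cited studies.

```python
import torch

# Hypothetical sketch of intensity-to-latency (time-to-first-spike) coding:
# stronger pixels fire earlier, each neuron fires at most once, so the image
# is conveyed by spike order alone.
def intensity_to_latency(image, time_steps):
    """image: float tensor [C, H, W] in [0, 1] -> binary spikes [T, C, H, W]."""
    bin_edges = torch.linspace(1.0, 0.0, time_steps + 1)  # earlier bins hold stronger pixels
    spikes = torch.zeros(time_steps, *image.shape)
    for t in range(time_steps):
        lo, hi = bin_edges[t + 1], bin_edges[t]
        spikes[t] = ((image > lo) & (image <= hi)).float()
    return spikes

img = torch.rand(1, 8, 8)                      # toy grayscale "image"
spk = intensity_to_latency(img, time_steps=4)
assert spk.sum(dim=0).max() <= 1               # at most one spike per neuron
```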

TIME DIMENSION
PACKAGE STRUCTURE
TUTORIAL
Forward Pass
Source Code
COMPARISON
CONCLUSIONS