Abstract

Spiking neural networks (SNNs) increasingly attract attention for their similarity to the biological nervous system. Hardware implementation of SNNs, however, remains a great challenge due to their excessive complexity and circuit size. This work introduces a novel optimization method for a hardware-friendly SNN architecture based on a modified rate coding scheme called Binary Streamed Rate Coding (BSRC). BSRC combines the features of both rate and temporal coding. In addition, by employing a built-in randomizer, the BSRC SNN model provides higher accuracy and faster training. We also present SNN optimization methods, including structure optimization and weight quantization. Extensive evaluations with MNIST SNNs demonstrate that the structure-optimized SNN (81-30-20-10) provides a 183.19 times reduction in hardware compared with the SNN (784-800-10), while providing an accuracy of 95.25%, a small loss compared with the 98.89% and 98.93% reported in previous works. Our weight quantization reduces 32-bit weights to 4-bit integers, leading to a further 4 times hardware reduction with only 0.56% accuracy loss. Overall, the SNN model (81-30-20-10) optimized by our method shrinks the SNN's circuit area from 3089.49 mm² for the SNN (784-800-10) to 4.04 mm², a reduction of 765 times.
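The paper's exact procedures are not reproduced in this summary; the minimal Python sketch below only illustrates the two reductions the abstract quantifies, assuming synapse (weight) count as the hardware proxy and a uniform symmetric mapping of float weights to signed 4-bit integers. The function name and the [-8, 7] integer range are illustrative assumptions, not the paper's definition.

```python
import numpy as np

# Synapse counts for the two topologies named in the abstract
# (weights only; the paper's 183.19x hardware figure presumably
# also accounts for neuron circuitry, hence the small difference).
large = 784 * 800 + 800 * 10          # 635,200 synapses
small = 81 * 30 + 30 * 20 + 20 * 10   # 3,230 synapses
print(f"synapse reduction: {large / small:.1f}x")  # ~196.7x

def quantize_4bit(w, w_max=None):
    """Uniform symmetric quantization of float weights to signed
    4-bit integers in [-8, 7] (an assumed scheme, for illustration)."""
    if w_max is None:
        w_max = np.abs(w).max()
    scale = w_max / 7.0               # map the largest magnitude to 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.random.randn(81, 30).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = q * scale                     # dequantized approximation
print("max abs quantization error:", np.abs(w - w_hat).max())
```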

Highlights

  • In recent years, various types of Artificial Neural Networks (ANNs) have been studied as effective solutions for object recognition and image classification problems, with steadily increasing accuracy. The Modified National Institute of Standards and Technology (MNIST) dataset is one of the most popular benchmarks for testing different types of ANNs due to its simplicity.

  • We developed a hardware-friendly rate-coding Spiking Neural Network (SNN) model that overcomes the drawbacks of conventional rate coding.

  • When we apply the proposed Binary Streamed Rate Coding (BSRC) method to an SNN structure of (784-800-10) with floating-point weights on the full-scale MNIST dataset, the model achieves an accuracy of 98.84% after only 84 epochs.


Summary

Introduction

Various types of Artificial Neural Networks (ANNs) have been studied as effective solutions for object recognition and image classification problems, with steadily increasing accuracy. A direct supervised training algorithm for SNNs called STBP was published in [11]; it reportedly achieves very high accuracy. The authors of [20] further reduced the number of required weight bits to two for the inference process using an ANN training algorithm called BinaryConnect; they converted the ANN model to an SNN model and reported an accuracy of 99.43% on the MNIST dataset. The design of [14] reported a reduced power consumption of 0.477 W while achieving a relatively high accuracy of 97.06% on the MNIST dataset. Direct supervised training attempts to train an SNN directly by using an approximated version of the spiking function [11]. Such training algorithms should be able to exploit spatial-domain information to increase training accuracy [22]. We efficiently combine time- and spatial-domain information to obtain higher accuracy than other algorithms.
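Direct training methods like STBP replace the non-differentiable spike with a smooth approximation during backpropagation. The PyTorch sketch below shows this general surrogate-gradient idea; it is a generic illustration, not STBP's exact approximation (the rectangular window and its width of 0.5 are assumptions).

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the
    backward pass -- the core trick behind direct SNN training."""

    @staticmethod
    def forward(ctx, v):
        # v: membrane potential minus firing threshold
        ctx.save_for_backward(v)
        return (v > 0).float()          # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: gradient flows only near the threshold.
        return grad_output * (v.abs() < 0.5).float()

spike = SurrogateSpike.apply
v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()
print(v.grad)                           # nonzero only where |v| < 0.5
```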

Section outline:

  • Overall Structure of SNN
  • Spiking Neural Network Model
  • Optimization of SNN Model
  • BSRC-Based Training
  • SNN Structure Optimization
  • SNN Weight Quantization
  • SNN Output Layer
  • Integer Threshold
  • Performance Evaluation
  • Findings
  • Conclusions