Abstract

To realize a large-scale Spiking Neural Network (SNN) on hardware for mobile applications, area- and power-optimized electronic circuit design is critical. In this work, an area- and power-optimized hardware implementation of a large-scale SNN for real-time IoT applications is presented. The analog Complementary Metal Oxide Semiconductor (CMOS) implementation incorporates neuron and synaptic circuits optimized for area and power consumption. The asynchronous neuronal circuits benefit from higher energy efficiency and higher sensitivity. The proposed synapse circuit, based on a Binary Exponential Charge Injector (BECI), saves area and power and provides design scalability toward higher resolutions. The SNN model is optimized for a 9 × 9 pixel input image and the minimum weight bit-width that satisfies the target accuracy, reducing both area and power consumption. For comparison, the network is also replicated in a fully digital implementation. The SNN chip, integrated from the neuron and synapse circuits, is capable of pattern recognition. Fabricated in a 180 nm CMOS process, the chip occupies a 3.6 mm² core area and achieves a classification accuracy of 94.66% on the MNIST dataset, while consuming an average power of 1.06 mW, 20 times lower than the digital implementation.

Highlights

  • This paper presents a large-scale Spiking Neural Network (SNN) Artificial Intelligence (AI) hardware implementation based on analog Complementary Metal Oxide Semiconductor (CMOS) circuits for real-time IoT applications

  • The hardware implementation is optimized for area and power consumption, targeting real-time AI/IoT applications

  • SNNs allow compact hardware implementations well suited to mobile and edge AI applications, provided compact synapse and neuron circuits are used

Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Neuromorphic architectures are energy efficient, perform parallel signal processing, are fault tolerant, and are configurable. They can be realized by numerous silicon-based technologies, large-scale architectures, and computational models of neural elements [13,14]. Neuromorphic architectures based on SNNs benefit from their computational organization, achieving high energy efficiency by co-locating computing (neuron) and memory (synapse) elements, and from their information representation, consuming less power because information is encoded in event-driven spikes [18]. SNNs process information by spike propagation, which accelerates computation and improves energy efficiency [20]. They incorporate biologically plausible neuron models to capture the temporal dynamics of the neural membrane [21]. Realizing large-scale SNN hardware therefore requires power- and area-optimized computational (neuron) and memory (synapse) elements.
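The leaky integrate-and-fire dynamics underlying such neuron models can be sketched in a few lines; the parameter values below (time constant, threshold, reset) are illustrative assumptions for this sketch, not the values of the fabricated chip:

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest while integrating input current; when it crosses the
    threshold, a spike is emitted and the potential is reset."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Discretized membrane equation: leak toward v_rest plus input drive
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_thresh:
            spikes.append(t)   # event-driven output: spike time only
            v = v_reset        # reset after firing
        trace.append(v)
    return trace, spikes

# A constant suprathreshold current yields a regular spike train,
# while a subthreshold current produces no spikes at all
trace, spikes = lif_neuron([1.5] * 200)
```

The event-driven nature of the model is what the abstract's power argument rests on: output activity (and hence switching energy) is only produced at spike events, not at every time step.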

Spiking Neural Network Model
Leaky Integrate and Fire Model
SNN Implementation and Circuit Design
BSRC-Based SNN Architecture
Analog Spiking Neural Network
Neuron Circuit
Synapse Circuit
Circuit Simulation
Fully Digital Implementation of Spiking Neural Network
Implementation of Analog SNN
Implementation of Digital SNN
Performance Analysis
Conclusions