Abstract

While convolutional neural networks (CNNs) continue to advance state-of-the-art performance across many fields of machine learning, their hardware implementations tend to be very costly and inflexible. Neuromorphic hardware, on the other hand, targets higher efficiency, but its inference accuracy lags far behind that of CNNs. To bridge the gap between deep learning and neuromorphic computing, we present a bitstream-based neural network, which is both efficient and accurate as well as flexible in terms of arithmetic precision and hardware size. Our bitstream-based neural network (called SC-CNN) is built on top of CNNs but inspired by stochastic computing (SC), which uses bitstreams to represent numbers. Being based on CNNs, our SC-CNN can be trained with backpropagation, ensuring very high inference accuracy. At the same time, our SC-CNN is deterministic, hence repeatable, and is highly accurate and scalable even to large networks. Our experimental results demonstrate that our SC-CNN is highly accurate up to ImageNet-targeting CNNs, and improves efficiency over conventional digital designs by 50–100% in operations-per-area, depending on the CNN and the application scenario, while losing <1% in recognition accuracy. In addition, our SC-CNN implementations can be much more fault-tolerant than conventional digital implementations.
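To make the bitstream arithmetic concrete, the following minimal sketch illustrates the idea in Python; it is a generic deterministic unipolar SC multiplier, not the SC-MAC proposed in the paper, and the function names and stream length are our own assumptions. A value in [0, 1] is encoded as a fixed-length bitstream, and two values are multiplied with a single AND gate per cycle.

def unary_stream(value, length):
    """Encode a value in [0, 1] as a fixed-length unary bitstream:
    the first round(value * length) bits are 1, the rest are 0."""
    ones = round(value * length)
    return [1] * ones + [0] * (length - ones)

def sc_multiply(x, y, n=16):
    """Deterministic unipolar SC multiplication: repeat one stream,
    hold each bit of the other for n cycles, AND the two streams,
    and count the 1s over n*n cycles."""
    sx = unary_stream(x, n)
    sy = unary_stream(y, n)
    ones = sum(sx[i % n] & sy[i // n] for i in range(n * n))
    return ones / (n * n)  # equals x * y for values representable with n levels

# Example: 0.5 * 0.25 = 0.125
print(sc_multiply(0.5, 0.25))

Because the streams are generated deterministically rather than drawn from random sources, the result is exactly repeatable, which is the property the abstract highlights; precision can be traded against latency by changing the stream length n.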

Highlights

  • We show that our stochastic computing (SC)-based convolutional neural networks (CNNs), or SC-CNNs, can be over 100% more efficient in terms of operations-per-area than conventional digital designs when the same hardware is used for multiple CNN applications with varying precision requirements

  • We show that our neuromorphic SC-CNN is very scalable, achieving very high efficiency at high throughput; our 4D-parallel neuromorphic SC-CNN can give nearly 100 times better efficiency in area-delay product (ADP) than 2D-parallel architectures (both metrics are defined after this list)
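
For reference, the two efficiency metrics quoted above can be read with their standard definitions (our formulation, not quoted from the paper):

\[
\text{operations-per-area} = \frac{\text{throughput (operations/s)}}{\text{silicon area}},
\qquad
\text{ADP} = \text{area} \times \text{delay},
\]

so "nearly 100 times better efficiency in ADP" means an area-delay product roughly one hundredth that of the 2D-parallel baseline.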

Summary

INTRODUCTION

In a broad sense of the term, a neuromorphic system refers to a system engineered based on the organizing principles of the nervous system (Mead, 1990). Our experimental results demonstrate that our SC-CNN can be as efficient as conventional digital designs up to ImageNet-targeting CNNs, such as AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015), with less than 1% loss in recognition accuracy.

SC-Based Neural Network
Neuromorphic Computing
Deep Learning Hardware
DYNAMIC PRECISION SCALING SC-CNN
Analysis of Baseline SC-MAC
Acceleration of Neural Network
Dynamic Precision Scaling Extension
Half-Range Specialization
DESIGN OPTIMIZATIONS FOR DPS SC-CNN
Design Flow
Determining Data Scaling Parameters
Determining Software Precision of Each Layer
Determining Hardware Precision
Neuromorphic Optimizations
Tight Integration of SRAM and SC-MAC
Experimental Setup
Area Overhead of our DPS SC-CNN
Effect of Software and Hardware Precision on ADP
Multi-Application Scenario
Single Application Comparison
Efficiency of Neuromorphic Architecture
Fault Tolerance
Feature Comparison With Previous Work
CONCLUSION
Findings
DATA AVAILABILITY STATEMENT