Abstract

Deep neural networks have demonstrated impressive results on various cognitive tasks such as object detection and image classification. This paper describes a neuromorphic computing system designed from the ground up for energy-efficient evaluation of deep neural networks. The system consists of a non-conventional compiler, a neuromorphic hardware architecture, and a space-efficient microarchitecture that leverages existing integrated-circuit design methodologies. The compiler takes a trained feedforward network as input, compresses the weights linearly, and generates a time-delay neural network, significantly reducing the number of connections. The connections and units in the simplified network are mapped to silicon synapses and neurons. We demonstrate an implementation of the neuromorphic computing system on a field-programmable gate array (FPGA) that performs image classification on the MNIST dataset of handwritten digits (0 to 9) with 99.37% accuracy while consuming only 93 µJ per image. For image classification on the CIFAR-10 dataset of colour images in 10 classes, it achieves 83.43% accuracy at more than 11× higher energy efficiency than a recent FPGA-based accelerator.
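The abstract's linear weight compression can be illustrated with a small sketch. The paper does not specify the exact method, so a truncated SVD is assumed here purely for illustration: it is a standard linear compression that factors a weight matrix into two thin matrices, cutting the number of connections roughly in proportion to the chosen rank.

```python
import numpy as np

def compress_linear(W, rank):
    """Approximate a weight matrix W by a rank-limited product U_r @ V_r.

    Illustrative sketch only: truncated SVD is one common *linear*
    compression, assumed here since the paper does not detail its scheme.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]  # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# A 256x512 layer compressed to rank 32 stores 256*32 + 32*512 values
# instead of 256*512 -- roughly a 5x reduction in connections.
W = np.random.randn(256, 512)
U_r, V_r = compress_linear(W, 32)
approx = U_r @ V_r
assert approx.shape == W.shape
```

In hardware terms, each factor can be realized as a smaller layer of silicon synapses, which is what makes the connection count (rather than the raw parameter count) the relevant cost.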

Highlights

  • Deep convolutional neural networks (CNNs) have shown state-of-the-art results on various tasks in computer vision, and their performance has become comparable to humans in some specific applications [1]

  • A field-programmable gate array (FPGA) can be considered a general-purpose, not highly optimized neuromorphic processor, and the experiments can be regarded, in part, as writing software for this processor using existing hardware synthesis tools

  • We have presented a neuromorphic computing system designed anew, from the microarchitecture up to the compiler, to execute feedforward neural networks with minimal energy consumption



Introduction

Deep convolutional neural networks (CNNs) have shown state-of-the-art results on various tasks in computer vision, and their performance has become comparable to humans in some specific applications [1]. They contain a huge number of weight parameters (e.g., 10⁸ [2]), and inference with these models is computationally expensive. The bias is associated with a threshold value above which a neuron starts firing. Although this neuron model is coarse and highly abstracted, it provides the best predictive performance in practical machine learning applications.
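The bias-as-threshold view described above can be sketched as follows. This is a minimal ReLU-style formulation assumed for illustration (the function name and values are hypothetical, not from the paper): the unit produces a nonzero output only when the weighted input sum exceeds the threshold encoded by the bias.

```python
import numpy as np

def neuron_output(x, w, bias):
    """Abstracted neuron: the bias encodes a firing threshold.

    The unit 'fires' (returns a nonzero value) only when the weighted
    input sum w.x exceeds -bias; otherwise it stays silent.
    """
    z = np.dot(w, x) + bias
    return max(z, 0.0)  # nonzero only above the threshold

w = np.array([0.5, -0.2])
# Weighted sum 0.3 is below the threshold 0.5 -> silent.
print(neuron_output(np.array([1.0, 1.0]), w, bias=-0.5))  # 0.0
# Weighted sum 1.0 exceeds the threshold -> fires proportionally.
print(neuron_output(np.array([2.0, 0.0]), w, bias=-0.5))  # 0.5
```

Mapping such units to silicon neurons is straightforward precisely because the model reduces to a weighted sum followed by a single comparison.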

