Abstract

This paper presents the hardware architecture and VLSI implementation of a binarized neural network (BNN). As a modification of the convolutional neural network (CNN), the BNN constrains all activations and weights to be +1 or −1, making it highly attractive for low-power ASIC design. We show that the BNN is both power-efficient and accurate for computer vision tasks, using pedestrian and car detection as examples to demonstrate the capability of the BNN chip design. The total memory required for all weights in the BNN is only 22 KB, significantly less than that of a typical CNN. Evaluated on the INRIA and CIFAR-10 datasets, our BNN chip achieves an average accuracy of 96.5%, much higher than traditional computer vision approaches such as histogram of oriented gradients (HOG) with a support vector machine (SVM). Our design achieves a power efficiency of 20 TOp/s/W, far exceeding most mainstream CNN chips. The proposed low-power BNN hardware architecture thus enables deep learning on mobile embedded platforms.
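
The abstract notes that all activations and weights are constrained to +1 or −1, which is what allows a BNN accelerator to replace multiply-accumulate units with simple bitwise logic. The sketch below illustrates the standard binarization and XNOR-popcount dot product commonly used in BNN designs; the function names and shapes are hypothetical, and the paper's actual datapath is not described in the abstract.

```python
# Illustrative sketch only: sign-function binarization and the XNOR-popcount
# dot product typical of BNN accelerators. Not the paper's specific design.
import numpy as np

def binarize(x):
    """Constrain values to +1 / -1 via the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a, b):
    """Dot product of two {+1, -1} vectors using XNOR and popcount.

    Encoding +1 -> bit 1 and -1 -> bit 0, each product a_i * b_i equals +1
    exactly when the two bits agree (XNOR = 1). With p agreeing positions
    out of n, the dot product is p - (n - p) = 2p - n.
    """
    bits_a = (a > 0)
    bits_b = (b > 0)
    agree = np.count_nonzero(~(bits_a ^ bits_b))  # XNOR, then popcount
    return 2 * agree - a.size

# Quick check against the ordinary dot product.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
b = binarize(rng.standard_normal(64))
assert binary_dot(a, b) == int(np.dot(a, b))
```

Because each weight occupies a single bit under this encoding, the small on-chip weight footprint reported in the abstract (22 KB for the whole network) follows directly from this representation.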
