Abstract

The accuracy and power consumption of resistive crossbar circuits used for neuromorphic computing are restricted by the process variation of the resistance-switching (memristive) devices and by the power overhead of the mixed-signal circuits, such as analog-digital converters (ADCs) and digital-analog converters (DACs). Reducing the signal and weight resolution can improve robustness against process variation, relax the requirements for mixed-signal devices, and simplify the implementation of crossbar circuits. This work aims to establish a methodology for achieving low-resolution dense layers in CNNs in terms of network architecture selection and quantization method. To this end, it studies the impact of the dense-layer configuration on the required resolution for its inputs and weights in a small convolutional neural network (CNN). This analysis shows that carefully selecting the network architecture for the dense layer can significantly reduce the required resolution for its input signals and weights. This work reviews criteria for appropriate architecture selection and the quantization methods of binary and ternary neural networks (BNNs and TNNs) to reduce the weight resolution of CNN dense layers. Furthermore, it presents a method to reduce the input resolution of the dense layer down to one bit by analyzing the distribution of the input values. A small CNN for inference with one-bit quantization of input signals and weights can be realized with only 0.68% accuracy degradation on the MNIST dataset.
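The one-bit quantization the abstract describes can be illustrated with a minimal sketch. The sign-based weight binarization follows the common BNN-style approach; the mean-absolute-value scaling factor and the median threshold for the input distribution are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def binarize_weights(w):
    """One-bit weight quantization: sign of each weight, scaled by the
    mean absolute value (an assumed BNN-style scaling for illustration)."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def binarize_inputs(x, threshold=None):
    """One-bit input quantization: a threshold derived from the input
    distribution (here the median, an assumed choice) maps values to {0, 1}."""
    if threshold is None:
        threshold = np.median(x)
    return (x >= threshold).astype(np.float32)

# Example: a tiny weight vector and input vector for a dense layer.
w = np.array([0.5, -0.25, 1.0, -1.25])
x = np.array([0.1, 0.4, 0.6, 0.9])
wb = binarize_weights(w)   # [ 0.75, -0.75,  0.75, -0.75]
xb = binarize_inputs(x)    # [0., 0., 1., 1.]
```

Restricting weights to a single scaled sign bit and inputs to a threshold comparison removes the need for high-resolution DACs at the crossbar inputs, which is the motivation stated above.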
