Abstract

This paper proposes that approximation, by reducing bit precision and using inexact multipliers, can reduce the power consumption of a digital multilayer perceptron accelerator during MNIST classification (inference) with negligible accuracy degradation. Based on error sensitivities precomputed during training, synaptic weights with lower sensitivity are approximated more aggressively. Under a given set of bit-precision modes, the proposed algorithm determines the bit precision of every synapse to minimize power consumption for a given target accuracy. Across the network, earlier layers can be approximated more strongly because they have lower error sensitivity. The proposed algorithm saves 57.4 percent of power while accuracy degrades by about 1.7 percent. After approximation, retraining for a few iterations can recover accuracy while maintaining the reduced power consumption. The impact of different training conditions on the approximation is also studied: training with small quantization error (less bit precision) allows more power saving in inference, a sufficient number of training iterations is important for approximation at inference time, and networks with more layers are more sensitive to approximation.
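The abstract describes a per-synapse bit-precision assignment guided by precomputed error sensitivity under a target-accuracy constraint. The following Python sketch is only a rough illustration of what such a sensitivity-guided assignment could look like; the mode set, power costs, loss model, greedy ordering, and all names are assumptions for illustration, not the authors' actual algorithm.

```python
# Hypothetical sketch: assign a bit width to each synaptic weight so that
# less sensitive weights get lower precision, subject to an accuracy budget.
# All constants and the greedy heuristic below are illustrative assumptions.
import numpy as np

# Allowed bit-precision modes with a notional relative power cost per mode
# (lower precision -> cheaper multiply).
MODES = {16: 1.00, 8: 0.55, 4: 0.30}

def assign_precision(sensitivity, accuracy_budget, est_loss):
    """Pick a bit width per weight to reduce power within an accuracy budget.

    sensitivity     : array of precomputed error sensitivities, one per weight
    accuracy_budget : total tolerated accuracy degradation (e.g. 0.017 ~ 1.7%)
    est_loss        : callable (sensitivity, bits) -> estimated accuracy loss
    """
    n = len(sensitivity)
    bits = np.full(n, max(MODES), dtype=int)   # start at full precision
    spent = 0.0
    # Approximate the least sensitive weights first.
    for i in np.argsort(sensitivity):
        for b in sorted(MODES):                # try the cheapest mode first
            loss = est_loss(sensitivity[i], b)
            if spent + loss <= accuracy_budget:
                spent += loss
                bits[i] = b
                break
    return bits

# Toy usage with a crude linear loss model (an assumption, not from the paper).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sens = rng.random(1000)
    model = lambda s, b: s * (max(MODES) - b) * 1e-5
    widths = assign_precision(sens, accuracy_budget=0.017, est_loss=model)
    print({b: int((widths == b).sum()) for b in MODES})
```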
