Abstract

Approximate arithmetic circuits are designed to improve energy efficiency and performance in error-tolerant applications. These circuits behave differently across applications. Moreover, the approximation may violate basic algebraic properties such as commutativity, associativity, and identity. Violating these properties makes the output of an approximate circuit dependent on the order of its inputs. Existing work [1] has focused on approximate adders and their application to image addition. This paper investigates the impact of approximation and commutativity in 12 compressor-based multipliers for image processing applications and CNNs. The first observation is that the accuracy of a neural network drops drastically if multiplication by zero is approximated. Second, the outputs of neural networks and error-resilient image processing applications tend to differ depending on the order of the inputs. For instance, when the Yang2 [2] design is implemented on ResNet18, we observe an accuracy of 58.07% for \(a \times b\) and 69.4% for \(b \times a\). Based on these findings, we propose that prior knowledge can be used to order input data so as to increase output quality for these applications.

Keywords: Error-resilience · Approximate computing · Compressors · Image processing
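To make the commutativity issue concrete, the following is a minimal Python sketch of a hypothetical truncation-based approximate multiplier. It is not one of the 12 compressor-based designs studied in the paper; it simply drops the two least-significant bits of the first operand only, which is enough to break commutativity while still returning an exact result whenever either operand is zero.

```python
def approx_mul(a, b):
    """Hypothetical approximate multiplier (illustration only).

    Truncates the two least-significant bits of the FIRST operand
    before multiplying, so the error depends on operand order:
    approx_mul(a, b) may differ from approx_mul(b, a).
    Multiplication by zero remains exact, which matters for CNN
    accuracy according to the paper's first observation.
    """
    return (a >> 2 << 2) * b


# Exact product of 7 and 5 is 35, but the approximate results
# depend on which operand is truncated:
print(approx_mul(7, 5))  # -> 20  (7 truncated to 4, then 4 * 5)
print(approx_mul(5, 7))  # -> 28  (5 truncated to 4, then 4 * 7)

# Zero is preserved in both operand positions:
print(approx_mul(0, 9), approx_mul(9, 0))  # -> 0 0
```

Because the two operand orders yield different errors, prior knowledge of the operand distribution (as the paper proposes) can be used to place inputs in the order that minimizes output degradation.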

