Abstract

Artificial neural networks and image processing are error-tolerant applications that require massive workloads to be executed within tight power budgets. Energy-efficient multipliers on an embedded processor are crucial for such applications. In this paper, a compact yet accurate approximate truncated multiplier design is presented, together with a small error compensation circuit that mitigates the truncation error. To reduce energy consumption further, voltage overscaling is applied. The performance of the circuit is optimized by adjusting two approximation knobs: circuit pruning and voltage scaling. Simulation results for a truncation factor of 8 with an MRED of 0.01 demonstrate an 11.4% reduction in energy consumption compared with a 16-bit radix-4 exact multiplier. Application-level evaluation using the approximate multipliers in an image multiplication application yields a high PSNR of 43.17 dB. When evaluated on a handwritten digit recognition application based on convolutional neural networks (CNNs), an accuracy of 97.85% is obtained.
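
To make the abstract's terminology concrete, the sketch below is a minimal behavioral model (in Python) of a fixed-width truncated multiplier with truncation factor 8 and a constant compensation term, together with an estimate of the MRED (mean relative error distance) over random operands. The compensation constant and the simplified truncation model are illustrative assumptions only; they do not reproduce the paper's compensation circuit, pruning scheme, voltage overscaling, or its reported MRED of 0.01.

```python
# Behavioral sketch (not the authors' RTL): an unsigned 16-bit multiplier with
# truncation factor T = 8. As a simplification, the T low-order bits of the
# exact product are zeroed, which approximates dropping the T least-significant
# partial-product columns (a real truncated multiplier also loses the carries
# those columns would generate). A fixed compensation term of half the maximum
# discarded value is added; this constant is an assumption, not the paper's.

import random

N = 16   # operand width in bits
T = 8    # truncation factor: number of low-order columns dropped

def truncated_mult(a: int, b: int) -> int:
    """Approximate product with low columns dropped and constant compensation."""
    exact = a * b
    truncated = (exact >> T) << T      # zero the T low-order result bits
    compensation = 1 << (T - 1)        # assumed average-error correction
    return truncated + compensation

def mred(samples: int = 100_000) -> float:
    """Mean relative error distance over random nonzero operand pairs."""
    total = 0.0
    for _ in range(samples):
        a = random.randint(1, (1 << N) - 1)
        b = random.randint(1, (1 << N) - 1)
        exact = a * b
        total += abs(truncated_mult(a, b) - exact) / exact
    return total / samples

if __name__ == "__main__":
    print(f"Estimated MRED for this simplified model: {mred():.4f}")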
