Abstract

As the end of Moore's Law approaches, electronic system designers must find ways to keep up with the ever-increasing computational demands of the modern era. Some computationally intensive applications, such as multimedia processing, computer vision, and artificial intelligence, share a feature that makes them especially suitable for hardware-level optimization: their inherent robustness to noise and errors. This robustness allows circuit designers to relax the constraint that arithmetic operations, such as multiplications and additions, must be completely accurate. Instead, approximations can be introduced into the arithmetic units, enabling system-level reductions in hardware area and power consumption, as well as improvements in performance, while hardly affecting the output of the final application. In this work, we explore two approximate arithmetic techniques. First, we consider approximations at the circuit design level by implementing several approximate multiplier units and evaluating their accuracy when used to execute YOLOv3, a state-of-the-art camera-based object detection deep neural network. Second, we apply overscaling to induce approximations in adder circuits by aggressively undervolting and overclocking them, and we compare the behavior of exact and approximate adders under these conditions. We find that, on the one hand, some approximate multipliers can execute the YOLO network with almost no effect on the results, and, on the other, approximate adder circuits are much more resilient to overscaling than exact adders.
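
To make the idea of an approximate arithmetic unit concrete, the sketch below models one common scheme, a truncation-based approximate multiplier that discards the low-order bits of its operands before multiplying. This is an illustrative example only: the constant K, the 8-bit operand width, and the function names are assumptions for the sketch, not the specific multiplier designs evaluated in this work.

```c
/*
 * Illustrative software model of a truncation-based approximate multiplier.
 * Zeroing the K least-significant bits of each operand shrinks the
 * partial-product array in a hardware implementation, trading a bounded
 * error for area and power savings. K, the 8-bit width, and all names are
 * assumed for this sketch.
 */
#include <stdint.h>
#include <stdio.h>

#define K 3  /* number of low-order operand bits discarded (assumed) */

static uint16_t approx_mul8(uint8_t a, uint8_t b)
{
    uint8_t mask = (uint8_t)(0xFFu << K);   /* keep only the upper bits */
    return (uint16_t)(a & mask) * (uint16_t)(b & mask);
}

int main(void)
{
    uint8_t a = 183, b = 92;
    unsigned exact  = (unsigned)a * b;
    unsigned approx = approx_mul8(a, b);

    printf("exact=%u approx=%u relative error=%.2f%%\n",
           exact, approx, 100.0 * (exact - approx) / exact);
    return 0;
}
```

For the operands above, the approximate product differs from the exact one by roughly 8%, the kind of bounded error that error-tolerant workloads such as neural-network inference can often absorb with little effect on the final output.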
