Abstract
Approximate Computing is frequently cited as a new computing paradigm that improves energy efficiency at the expense of quality. But what is Approximate Computing? How can it be used in a truly innovative way, one that goes beyond reducing precision or approximating complex operations and algorithms at the expense of accuracy, as is already done regularly in the VLSI signal processing community when implementing complex video, audio, or communication systems? In this talk, we focus on Approximate Computing as a new paradigm for dealing with one of the most pressing problems of the semiconductor industry today: the reliability issues and uncertainties of modern process technologies, which appear especially at low voltages. We show how Approximate Computing can and should be interpreted as a systematic way of dealing with these reliability issues, which are statistical in nature and appear only at run-time. In this sense, the interpretation of the term differs significantly from the static, design-time interpretation used in the VLSI signal processing community. Approximations and corresponding circuits serve as a means to ensure graceful performance degradation at run-time in the presence of uncertainties or errors, rather than simply reducing complexity once at design time. This ability allows not only for circuits with reduced area and better energy efficiency; it also enables better overall performance metrics, since each chip delivers at every moment of its life the best possible (adjustable) energy-quality trade-off, with energy-proportional behavior adjusted to its operating conditions and user demands.
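To make the idea of a run-time adjustable quality-energy trade-off concrete, the following minimal C sketch models a lower-part-OR approximate adder, a commonly cited approximate-arithmetic circuit that is not taken from the talk itself. The parameter k is a hypothetical run-time quality knob: the lowest k bits are combined with a cheap OR instead of a full carry chain, so k = 0 gives exact addition while larger k trades accuracy for what would be, in hardware, a shorter carry chain and lower energy.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Illustrative software model (not the authors' circuit): a lower-part-OR
 * approximate adder whose degree of approximation k can be changed at
 * run-time, e.g., in response to supply voltage or error-rate monitoring. */
uint32_t approx_add(uint32_t a, uint32_t b, unsigned k)
{
    /* mask selects the k least significant bits that are approximated */
    uint32_t mask = (k >= 32) ? 0xFFFFFFFFu : ((1u << k) - 1u);
    uint32_t low  = (a | b) & mask;                        /* cheap, carry-free low part */
    uint32_t high = ((a & ~mask) + (b & ~mask)) & ~mask;   /* exact high part, no carry-in */
    return high | low;
}

int main(void)
{
    uint32_t a = 12345u, b = 67890u;
    /* Sweep the quality knob: larger k means lower accuracy, lower cost. */
    for (unsigned k = 0; k <= 8; k += 4)
        printf("k=%u: approx=%" PRIu32 "  exact=%" PRIu32 "\n",
               k, approx_add(a, b, k), a + b);
    return 0;
}
```

In a real system, such a knob would be driven at run-time by operating conditions (voltage, temperature, observed error rates) and application quality demands, which is the essence of the graceful, energy-proportional degradation described above.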