Abstract

Well over a decade ago, many believed that an engine of growth driving the semiconductor and computing industries, captured nicely by Gordon Moore's remarkable prophecy (Moore's law), was speeding towards a dangerous cliff-edge. The warnings ranged from expressions of concern to doomsday scenarios, and the predicted time when serious hurdles would beset us varied quite a bit, with some of the more optimistic warnings still putting an end date on Moore's law. Needless to say, a lot of people have spent time and effort, with great success, finding ways to substantially extend the time before we would encounter the dreaded cliff-edge, if not to avoid it altogether. Faced with this issue, we approached the problem in a decidedly different manner, one which suggested falling off the metaphorical cliff as a design choice, but in a controlled way. This resulted in devices that could switch and produce bits that are correct, that is, bits having the intended value, only with a probabilistic guarantee. Consequently, the computed results could in fact be incorrect. Such devices and the associated circuits and computing structures are now broadly referred to as inexact designs, circuits, and architectures. In this article, we will crystallize the essence of inexactness, dating back to 2002, through two key principles that we developed: (i) admitting error in a design in return for resource savings, and subsequently (ii) making resource investments in the elements of a hardware platform proportional to the value of the information they compute. We will also give a broad overview of the range of inexact designs and hardware concepts that our group and other groups around the world have been developing since, based on these two principles. Despite not being deterministically precise, inexact designs can be significantly more efficient in the energy they consume, their speed of execution, and their area needs, which makes them attractive in application contexts that are resilient to error. Significantly, our development of inexactness will be contrasted against the rich backdrop of traditional approaches aimed at realizing reliable computing from unreliable elements, starting with von Neumann's influential lectures and further developed by Shannon-Weaver and others.
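
To make principle (ii) concrete, the following is a minimal, purely illustrative sketch in Python, not taken from the article: the function name, bit widths, and error probability are assumptions of ours. It models a toy 16-bit adder whose high-order result bits are treated as reliable, while each low-order bit may flip with some probability, mimicking a design that invests fewer resources in bits carrying less valuable information.

    import random

    def inexact_add(a, b, width=16, exact_msbs=8, flip_prob=0.05):
        # Toy model only: the high-order bits are computed reliably, while each
        # low-order bit may flip with probability flip_prob, standing in for a
        # circuit that spends less energy on the bits of lesser value.
        exact_sum = (a + b) & ((1 << width) - 1)
        result = exact_sum
        for bit in range(width - exact_msbs):      # low-order bits only
            if random.random() < flip_prob:
                result ^= (1 << bit)               # probabilistic bit error
        return result

    if __name__ == "__main__":
        random.seed(0)
        a, b = 12345, 23456
        print("exact:  ", (a + b) & 0xFFFF)
        print("inexact:", inexact_add(a, b))

In an actual inexact circuit the errors would arise physically, for instance from aggressive voltage scaling, rather than from an explicit random flip; the sketch is meant only to show errors being confined to the least significant bits, whose occasional incorrectness an error-resilient application can tolerate.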
