Abstract

The slowdown of CMOS technology scaling has brought architectures and algorithms into focus for future performance improvements in nanoscale computing systems. Two promising approaches at the algorithmic level are approximate computing (AC) and probabilistic data structures (PDSs), both of which exploit an application's tolerance to small deviations in its results to reduce the complexity of the hardware implementation. AC focuses on applications that process numerical data and relies mostly on approximate (or inexact) low-level arithmetic operations. PDSs, in contrast, target categorical data and rely on shared data structures and other higher-level simplifications that introduce probabilistic deviations even when all operations are exact. Both AC and PDSs have dramatically reduced cost in some applications, but so far they remain completely disconnected in their application domains, abstraction levels, and research communities. In this article, we introduce probabilistic approximate computing (PAC), a new paradigm that exploits application tolerance to small deviations to reduce the implementation complexity of data structures and of the hardware that realizes them with nanoscale memory technologies. Its goal is to have data structures on which both AC and probabilistic techniques are used in a synergistic way to improve efficiency, while keeping deviations within acceptable margins.
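To make the contrast between the two sources of deviation concrete, the C sketch below is a purely illustrative example of ours, not taken from the article: a lower-part-OR approximate adder, where individual arithmetic operations are inexact, next to a tiny Bloom filter, where every bit operation is exact yet shared storage causes probabilistic false positives. The constants K and BITS and the hash functions are arbitrary choices for the demonstration.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Approximate computing (AC): a lower-part-OR adder.
 * The low K bits are ORed instead of added, removing their carry
 * chain; each addition may deviate slightly from the exact result. */
#define K 4
static uint32_t approx_add(uint32_t a, uint32_t b) {
    uint32_t mask = (1u << K) - 1;
    uint32_t hi = (a & ~mask) + (b & ~mask); /* exact upper part */
    uint32_t lo = (a | b) & mask;            /* inexact lower part */
    return hi | lo;
}

/* Probabilistic data structure (PDS): a tiny Bloom filter.
 * Every operation is exact, but entries share bits of one array,
 * so membership queries can return false positives. */
#define BITS 64
static uint64_t bloom = 0;
static unsigned h1(uint32_t x) { return (x * 2654435761u) % BITS; }
static unsigned h2(uint32_t x) { return (x * 40503u + 7u) % BITS; }
static void bloom_add(uint32_t x) {
    bloom |= (1ull << h1(x)) | (1ull << h2(x));
}
static bool bloom_query(uint32_t x) {
    return ((bloom >> h1(x)) & 1) && ((bloom >> h2(x)) & 1);
}

int main(void) {
    printf("approx 13+7 = %u (exact: 20)\n", approx_add(13, 7));
    bloom_add(42);
    printf("42 in set? %d; 99 in set? %d (1 may be a false positive)\n",
           (int)bloom_query(42), (int)bloom_query(99));
    return 0;
}
```

In this toy setting, PAC would use both mechanisms on the same data structure, trading a bounded increase in deviation for a simpler nanoscale-memory implementation.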
