Abstract

The complexity of computational problems is rising faster than the capabilities of computing platforms, which are also becoming increasingly costly to operate due to their growing energy demands. This forces researchers to seek alternative paradigms and methods for efficient computing. One promising paradigm is accelerating compute-intensive kernels with in-memory computing accelerators, which significantly reduce data movement. Another increasingly popular method for improving energy efficiency is approximate computing. In this paper, we propose a methodology for efficient approximate in-memory computing. To maximize energy savings under given approximation constraints, a hybrid approach combining voltage scaling and precision scaling is presented. It can be applied to an associative memory-based architecture that can be implemented today using CMOS memories (SRAM) and later scaled seamlessly to emerging ReRAM-based memory technologies with minimal effort. The proposed methodology is evaluated across a diverse set of domains, including image processing, machine learning, machine vision, and digital signal processing. Compared to full-precision, unscaled implementations, average energy savings of $5.17{\times}$ and $59.11{\times}$, and speedups of $2.1{\times}$ and $3.24{\times}$, are reported for the SRAM-based and ReRAM-based architectures, respectively.
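The precision-scaling half of the hybrid approach can be illustrated with a minimal sketch: dropping low-order bits of fixed-point operands trades a bounded accuracy loss for energy. The function name, word width, and truncation scheme below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of precision scaling via bit truncation.
# Zeroing the least-significant bits of a fixed-point operand is one
# common way to reduce the number of bit-lines a memory must sense.

def scale_precision(value: int, kept_bits: int, word_bits: int = 8) -> int:
    """Zero out the (word_bits - kept_bits) least-significant bits."""
    dropped = word_bits - kept_bits
    return (value >> dropped) << dropped

# Example: an 8-bit pixel value reduced to 4 significant bits.
pixel = 0b10110101          # 181
approx = scale_precision(pixel, kept_bits=4)
print(approx)               # 0b10110000 = 176
```

The maximum error of such truncation is bounded by $2^{\text{word\_bits}-\text{kept\_bits}}-1$, which is what makes it usable under an explicit approximation constraint.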
