Abstract

This article provides an overview of recently proposed deep in-memory architectures (DIMAs) in SRAM for energy- and latency-efficient hardware realization of machine learning (ML) algorithms. DIMA tackles the data movement problem in von Neumann architectures head-on by deeply embedding mixed-signal computations into a conventional memory array. In doing so, it trades off its computational signal-to-noise ratio (compute SNR) with energy and latency, and therefore, it represents an analog form of approximate computing. DIMA exploits the inherent error immunity of ML algorithms and SNR budgeting methods to operate its analog circuitry in a low-swing/low-compute SNR regime, thereby achieving >100× reduction in the energy-delay product (EDP) over an equivalent von Neumann architecture with no loss in inference accuracy. This article describes DIMA's computational pipeline, provides a Shannon-inspired rationale for its robustness to process, temperature, and voltage variations, and offers design guidelines to manage its analog nonidealities. DIMA's versatility, effectiveness, and practicality, as demonstrated via multiple silicon IC prototypes in a 65-nm CMOS process, are described. A DIMA-based instruction set architecture (ISA) to realize an end-to-end application-to-architecture mapping for accelerating diverse ML algorithms is also presented. Finally, DIMA's fundamental tradeoff between energy and accuracy in the low-compute SNR regime is analyzed to determine energy-optimum design parameters.
