Abstract

In the era of big data, there is a strong demand for novel methodologies to compute large amounts of unstructured data with short latency and low power. Toward this goal, in-memory computing has emerged as a paradigm shift that enables processing data directly within or close to the memory, thus overcoming the memory wall typical of the von Neumann architecture [1]. Computation within resistive memory devices, such as resistive switching memory (RRAM) and phase change memory (PCM), has the additional advantage of physical computing, where data are processed via fundamental physical laws, such as Ohm's law and Kirchhoff's law, thus enabling massive parallelism and the consequent acceleration of computational tasks, such as matrix-vector multiplication (MVM). In this work, we demonstrate an extreme speedup for solving matrix algebra problems, such as the solution of linear systems or the calculation of eigenvectors, via MVM in crosspoint arrays of resistive memory devices with a feedback configuration [2].
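The physical computing principle described above can be illustrated with a minimal numerical sketch (not the authors' implementation; device names, shapes, and the ideal-circuit model are assumptions). An ideal crosspoint array stores a matrix as device conductances G; applying a voltage vector to the rows yields column currents that, by Ohm's law and Kirchhoff's current law, equal the matrix-vector product in a single step. In the feedback configuration, operational amplifiers force the column currents to equal an input vector b, so the steady-state row voltages x satisfy the linear system, here emulated with a standard solver:

```python
import numpy as np

def crosspoint_mvm(G, V):
    """Ideal crosspoint MVM: each cell contributes I = G_ij * V_i
    (Ohm's law); currents sum along each column (Kirchhoff's law),
    giving I_j = sum_i G_ij * V_i in one parallel step."""
    return G.T @ V

def feedback_solve(G, b):
    """Ideal feedback configuration: op-amps drive the row voltages x
    until the column currents match b, i.e. G^T x = b at steady state.
    Emulated here with a conventional linear solver."""
    return np.linalg.solve(G.T, b)

# Toy 2x2 conductance matrix (siemens) and input voltages (volts).
G = np.array([[3.0, 1.0],
              [1.0, 2.0]])
V = np.array([0.5, 1.0])
I = crosspoint_mvm(G, V)          # one-step analog MVM
x = feedback_solve(G, I)          # recovers V: the circuit "inverts" G^T
```

This is only a mathematical idealization: real arrays add wire resistance, device nonlinearity, and conductance variability, which the cited work addresses at the circuit level.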
