Abstract

Edge computing has been shown to be a promising solution to relieve the burden imposed on the network infrastructure by the increasing amount of data produced by smart devices. However, reconfigurable ultra-low-power computing architectures are needed. RRAM devices together with material implication (IMPLY) logic are a promising solution for the development of low-power reconfigurable logic-in-memory (LiM) hardware. Nevertheless, traditional approaches suffer from several issues introduced by the circuit topology and device non-idealities. Recently, SIMPLY, a smart LiM architecture based on IMPLY, has been proposed and shown to solve the common issues of traditional architectures. Here, we use a physics-based RRAM compact model calibrated on three RRAM technologies to further analyze the performance of SIMPLY in typical operating conditions, when the repeated execution of logic operations on the same group of devices is considered. The results show that, compared to the conventional IMPLY architecture, SIMPLY spares more than 40% of the high-voltage pulses on average, even when complex operations are considered (e.g., the 1-bit half adder). We also show how SIMPLY can implement the set of operations required for Binarized Neural Networks (BNNs) and benchmark its performance against other memristor-based BNN in-memory accelerators from the literature. The results suggest that our approach is more than two orders of magnitude more efficient than the state-of-the-art reconfigurable in-memory computing approach and could potentially reach the performance of specialized BNN analog hardware accelerators with appropriate device-circuit co-design strategies.
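
To illustrate the kind of computation the abstract refers to, the following is a minimal logic-level sketch (not the paper's SIMPLY circuit or its optimized pulse sequences) of how a 1-bit half adder can be decomposed into IMPLY and FALSE pulses, the two primitives that form a functionally complete basis in IMPLY-based LiM. The `ImplyArray` class, device indices, and the NAND-based decomposition are illustrative assumptions; the pulse count printed here is that of this naive decomposition, not the figure reported in the paper.

```python
# Toy logic-level model of IMPLY-based in-memory computing (illustrative only).
# Memristor states are modeled as booleans; the only primitives are
# FALSE (reset a device to 0) and IMPLY (q <- (NOT p) OR q).

class ImplyArray:
    """Array of 1-bit 'memristors' driven only by FALSE and IMPLY pulses."""

    def __init__(self, n):
        self.state = [0] * n
        self.pulses = 0          # counts every applied pulse

    def false(self, q):
        """Reset device q to logic 0 (one pulse)."""
        self.state[q] = 0
        self.pulses += 1

    def imply(self, p, q):
        """q <- (NOT p) OR q, computed in place on device q (one pulse)."""
        self.state[q] = int((not self.state[p]) or self.state[q])
        self.pulses += 1

    def nand(self, a, b, w):
        """NAND(a, b) written into work device w: w <- 0; w <- b IMP w; w <- a IMP w."""
        self.false(w)
        self.imply(b, w)         # w = NOT b
        self.imply(a, w)         # w = NOT a OR NOT b = NAND(a, b)


def half_adder(a_val, b_val):
    """1-bit half adder built only from IMPLY/FALSE pulses (naive decomposition)."""
    m = ImplyArray(7)
    A, B, N, T1, T2, SUM, CARRY = range(7)
    m.state[A], m.state[B] = a_val, b_val   # load inputs (not counted as pulses)

    m.nand(A, B, N)          # N     = NAND(a, b)
    m.nand(N, N, CARRY)      # CARRY = NOT N = a AND b
    m.nand(A, N, T1)         # T1    = NAND(a, N)
    m.nand(B, N, T2)         # T2    = NAND(b, N)
    m.nand(T1, T2, SUM)      # SUM   = a XOR b
    return m.state[SUM], m.state[CARRY], m.pulses


if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c, pulses = half_adder(a, b)
            print(f"a={a} b={b} -> sum={s} carry={c} ({pulses} pulses)")
```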
