Abstract
Resistive Random Access Memory (RRAM) devices combined with material implication (IMPLY) logic form a promising computing scheme for realizing energy-efficient, reconfigurable hardware for edge computing applications. This approach was recently shown to enable the in-memory implementation of Binarized Neural Networks (BNNs). However, an accurate analysis of the performance achieved on a real classification task is still missing. In this work, we train an IMPLY-based implementation of a multilayer perceptron (MLP) BNN, estimate its performance, and highlight its main reliability challenges, using a physics-based RRAM compact model calibrated on three RRAM technologies from the literature. We then show how the smart IMPLY (SIMPLY) architecture solves the reliability issues of conventional IMPLY architectures and compare its performance with conventional solutions for different degrees of parallelization. Worst-case energy estimates for an inference task performed on the trained network show that the SIMPLY implementation yields a >46× energy-delay-product (EDP) improvement over a conventional low-power embedded system implementation.
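For readers unfamiliar with BNN inference, the sketch below illustrates how a binarized MLP layer reduces the +/-1 dot product to an XNOR/popcount-style accumulation, which is the operation that maps naturally onto in-memory IMPLY logic. The layer sizes, threshold, and random weights are illustrative assumptions only and do not correspond to the network trained in the paper.

```python
import numpy as np

def binarize(x):
    # Map real-valued activations/weights to {-1, +1}, as in standard BNNs.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dense(x_bin, w_bin, threshold=0):
    # Binarized dense layer: with +/-1 operands the dot product is equivalent
    # to an XNOR + popcount, the primitive realized with IMPLY gates in-memory.
    pre_act = x_bin.astype(np.int32) @ w_bin.astype(np.int32).T
    return binarize(pre_act - threshold)

# Toy 2-layer MLP with hypothetical sizes (784-128-10, MNIST-like input).
rng = np.random.default_rng(0)
x  = binarize(rng.standard_normal(784))
w1 = binarize(rng.standard_normal((128, 784)))
w2 = binarize(rng.standard_normal((10, 128)))

hidden = bnn_dense(x, w1)
logits = hidden.astype(np.int32) @ w2.astype(np.int32).T
print("predicted class:", int(np.argmax(logits)))
```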