Abstract

Resistive Random Access Memory (RRAM) devices combined with material implication (IMPLY) logic are a promising computing scheme for realizing energy-efficient reconfigurable hardware for edge computing applications. This approach has recently been shown to enable the in-memory implementation of Binarized Neural Networks (BNNs). However, an accurate analysis of the performance achieved on a real classification task is still missing. In this work, we train an IMPLY-based implementation of a multilayer perceptron (MLP) BNN, estimate its performance, and highlight its main reliability challenges by using a physics-based RRAM compact model calibrated on three RRAM technologies from the literature. We then show how the smart IMPLY (SIMPLY) architecture solves the reliability issues of conventional IMPLY architectures and compare its performance with that of conventional solutions for different degrees of parallelization. Worst-case energy estimates for an inference task performed on the trained network show that the SIMPLY implementation yields a >46× energy-delay-product (EDP) improvement over a conventional low-power embedded system implementation.
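For readers unfamiliar with what a binarized MLP actually computes, the sketch below illustrates the generic XNOR-popcount style of BNN inference that in-memory IMPLY logic typically targets. It is a minimal NumPy illustration, not the paper's network: the layer sizes, the batch-norm-folded threshold, and the mapping of XNOR operations to IMPLY steps mentioned in the comments are assumptions for illustration only.

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1}; the sign function is the standard
    # binarization used in BNNs (ties broken toward +1 here).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dense_layer(x_bin, W_bin, threshold):
    # With inputs and weights in {-1, +1}, the dot product equals
    # (#matches - #mismatches), which hardware realizes as XNOR + popcount.
    # In an IMPLY-based array, each XNOR would be decomposed into a short
    # sequence of material-implication steps on the RRAM cells (assumption).
    pre_activation = W_bin.astype(np.int32) @ x_bin.astype(np.int32)
    return binarize(pre_activation - threshold)

# Toy usage with hypothetical layer sizes (784 inputs -> 128 hidden units).
rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(784))
W = binarize(rng.standard_normal((128, 784)))
theta = np.zeros(128)            # threshold with batch norm folded in (assumed)
h = bnn_dense_layer(x, W, theta)
print(h.shape)                   # (128,)
```

Because every multiply-accumulate reduces to bitwise operations and an integer count, the energy per inference is dominated by the logic steps and memory accesses, which is why the abstract compares implementations via the energy-delay product (EDP = energy × delay) rather than energy alone.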
