Abstract

Compute-in-memory (CIM), where information is processed and stored at the same locations, is emerging as a promising paradigm to address the memory-wall bottleneck of traditional von Neumann architectures. Static random-access memory (SRAM) has been demonstrated as a mature candidate for CIM accelerators for deep neural networks (DNNs) owing to its availability in advanced technology nodes. However, because SRAM is volatile and cannot retain weights after power-down, models must be downloaded from the cloud to the inference engine, which exposes them to threats such as model leakage. Moreover, keeping the raw weights of the DNN model stationary in the memory cells increases the attack surface. This work develops a secure inference engine with a lightweight yet effective countermeasure to protect DNN models in an SRAM-based CIM architecture. We propose a secure XOR-CIM engine with a modified reverse secure-sketch protocol to enable on-chip authentication and key processing for models encrypted with an XOR-based stream cipher. In the XOR-CIM core, we modify the six-transistor SRAM bit cell with dual wordlines to implement XOR decryption without sacrificing the efficiency of parallel computation. Evaluations at 28 nm show that XOR-CIM enhances security while achieving comparable energy efficiency, no throughput loss, and negligible area overhead compared with a normal-CIM design without encryption.
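The XOR-based stream cipher referenced above can be sketched in software. This is a minimal illustration, not the paper's implementation: the keystream generator (SHA-256 in counter mode) and the key/nonce names are assumptions for the example; in the proposed engine the key is derived on-chip via the reverse secure-sketch protocol and the XOR decryption happens inside the modified bit cells.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand a key into a pseudorandom keystream (illustrative: SHA-256 in counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR stream cipher: the same operation both encrypts and decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

weights = bytes(range(16))               # stand-in for quantized DNN weights
key, nonce = b"device-key", b"n0"        # hypothetical key material
encrypted = xor_cipher(weights, key, nonce)   # ciphertext stored in SRAM
decrypted = xor_cipher(encrypted, key, nonce) # per-bit XOR at read/compute time
assert decrypted == weights
```

Because XOR is its own inverse, decryption reuses the encryption routine, which is what makes a per-bit in-cell XOR sufficient to recover the plaintext weights during computation.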
