Abstract

Deep neural networks (DNNs) form critical infrastructure in systems ranging from the iPhone neural engine to imaging satellites and drones. The design of these neural cores is often proprietary or a military secret. Nevertheless, they remain vulnerable to model replication attacks that seek to reverse-engineer the network's synaptic weights. In this article, we propose SCANet (Superparamagnetic-MTJ Crossbar Array Networks), a novel defense against such model stealing attacks that exploits the innate stochasticity of superparamagnets. When used as synapses in DNNs, superparamagnetic magnetic tunnel junctions (s-MTJs) are shown to be significantly more secure than prior memristor-based solutions. The thermally induced telegraphic switching in s-MTJs is robust and uncontrollable, preventing attackers from extracting sensitive weight data from the network. By mixing superparamagnetic and conventional MTJs in the neural network (NN), the designer can balance the interval between weight updates against the power consumed by the system. Furthermore, we propose a modified NN architecture that prevents replication attacks while minimizing power consumption. We investigate how the number of layers in the deep network and the number of neurons in each layer affect the sharpness of accuracy degradation when the network is under attack. We also explore the efficacy of SCANet in real-time scenarios through a case study on object detection.
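
The core intuition can be illustrated with a minimal simulation, not taken from the paper: if a fraction of the synapses are backed by s-MTJs whose binary state flips at random due to thermal telegraph switching, then any snapshot of the weights an attacker reads out rapidly goes stale. All parameters below (array size, s-MTJ fraction, per-step flip probability) are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen for illustration only:
n_weights = 1000        # synapses in one crossbar layer
smtj_fraction = 0.5     # fraction of synapses backed by s-MTJs
p_flip_per_step = 0.05  # per-step telegraph flip probability of an s-MTJ
n_steps = 101           # time steps after the attacker's snapshot

# Binary conductance states (+1 / -1) stand in for the two
# resistance levels of an MTJ synapse.
weights = rng.choice([-1.0, 1.0], size=n_weights)
is_smtj = rng.random(n_weights) < smtj_fraction

snapshot = weights.copy()  # what a model-stealing attacker reads out

for t in range(n_steps):
    # Thermally induced telegraph switching: each s-MTJ flips its state
    # independently with a fixed per-step probability; stable MTJs hold.
    flips = is_smtj & (rng.random(n_weights) < p_flip_per_step)
    weights[flips] *= -1.0

    # Fraction of the stolen snapshot that still matches the live network.
    if t % 20 == 0:
        match = np.mean(snapshot == weights)
        print(f"step {t:3d}: snapshot matches {match:.2%} of live weights")
```

Under these assumptions, agreement between the stolen snapshot and the live network decays toward roughly 75%: the conventional half of the synapses always matches, while the telegraphing s-MTJ half decays to 50% chance agreement. Raising the s-MTJ fraction pushes this floor lower (more security) at the cost of more frequent weight refreshes, which is the security-versus-power trade-off the abstract describes.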
