Abstract

With the rapid development of Deep Learning methodologies, Deep Neural Networks (DNNs) are now commonly deployed in smart systems (e.g. autonomous vehicles) and high-end security applications (e.g. face recognition, biometric authentication, etc.). Training such DNN models often requires exclusive, valuable training datasets, enormous computational resources, and expert fine-tuning skills. Hence, a trained DNN model can be regarded as valuable proprietary Intellectual Property (IP). Piracy of such DNN IPs has emerged as a major concern, with increasing instances of illegal copying and redistribution. A number of mitigation approaches targeting DNN IP protection have been proposed in recent years. In this work, we target two recently proposed classes of DNN IP protection schemes: (a) Chaotic Map theory based encryption of the weight parameters, and (b) traditional block cipher based encryption of the weights. We demonstrate attacks on two recent DNN IP protection techniques, one from each of the above-mentioned classes, under a pragmatic attack model. We also propose a novel DNN IP protection technique based on selective encryption of the weight parameters, termed LEWIP, to mitigate the exposed weaknesses while incurring low implementation and performance overheads. Finally, we demonstrate the effectiveness of the LEWIP technique on state-of-the-art DNN implementations.
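
To make the idea of selective weight encryption concrete, the following is a minimal, illustrative sketch only: the abstract does not disclose LEWIP's selection criterion, cipher, or key management, so this sketch assumes that the largest-magnitude weights are selected and protected with AES-CTR (via the Python `cryptography` package); it is not the authors' actual scheme.

    # Illustrative sketch of selective weight-parameter encryption.
    # Assumptions (not from the paper): top-magnitude selection, AES-CTR cipher,
    # and placeholder key material.
    import numpy as np
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


    def selective_encrypt(weights, key, nonce, fraction=0.1):
        """Encrypt only a `fraction` of the weights (largest magnitude, by assumption)."""
        flat = weights.astype(np.float32).ravel().copy()
        k = max(1, int(fraction * flat.size))
        idx = np.argsort(np.abs(flat))[-k:]              # selected weight indices
        view = flat.view(np.uint8).reshape(-1, 4)        # raw bytes, one row per weight
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        ct = enc.update(view[idx].tobytes()) + enc.finalize()
        view[idx] = np.frombuffer(ct, dtype=np.uint8).reshape(k, 4)
        return flat.reshape(weights.shape), idx


    def selective_decrypt(enc_weights, idx, key, nonce):
        """Restore the selected weights given the key, nonce, and selection indices."""
        flat = enc_weights.astype(np.float32).ravel().copy()
        view = flat.view(np.uint8).reshape(-1, 4)
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        pt = dec.update(view[idx].tobytes()) + dec.finalize()
        view[idx] = np.frombuffer(pt, dtype=np.uint8).reshape(len(idx), 4)
        return flat.reshape(enc_weights.shape)


    # Demo on a hypothetical layer (key/nonce are placeholders, not secure values).
    w = np.random.randn(256, 128).astype(np.float32)
    key, nonce = b"\x00" * 16, b"\x00" * 16
    w_enc, idx = selective_encrypt(w, key, nonce)
    w_dec = selective_decrypt(w_enc, idx, key, nonce)
    assert np.array_equal(w, w_dec)

The intent illustrated here is that a model released with only a small, carefully chosen subset of weights encrypted is rendered unusable without the key, while the cost of encryption and decryption stays low; the actual selection strategy and cipher used by LEWIP are described in the full paper.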
