Abstract

Various deep neural network (DNN) accelerators have been proposed for artificial intelligence (AI) inference on edge devices. However, the hardware security of DNN accelerators has received comparatively little attention. Trained DNN models are valuable intellectual property and an attractive target for adversaries. In particular, when a DNN model is deployed on an edge device, an adversary can physically access the device and attempt to recover the implemented model. The DNN execution environment on an edge device therefore requires countermeasures, such as data encryption on off-chip memory, against various reverse-engineering attacks. In this paper, we recover DNN model parameters by applying correlation power analysis (CPA) to a systolic array circuit, a building block widely used in DNN accelerator hardware. Our experimental results show that an adversary can extract trained model parameters from a DNN accelerator even when the parameters are protected by data encryption. These results suggest that countermeasures against side-channel leakage are essential when implementing a DNN accelerator on an FPGA or ASIC.
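To give a rough sense of the attack principle, the sketch below shows the core step of a generic CPA, not the paper's exact procedure: for each candidate weight, a hypothetical leakage value (here, the Hamming weight of the low byte of the multiply-accumulate product, an assumed power model) is correlated with measured power traces, and the candidate with the highest absolute Pearson correlation is taken as the recovered weight. All function names, the 8-bit weight range, and the leakage model are illustrative assumptions.

```python
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    """Bit count of each byte; a common proxy for dynamic power (assumption)."""
    return np.unpackbits(x.astype(np.uint8)[..., None], axis=-1).sum(axis=-1)

def cpa_recover_weight(traces: np.ndarray, inputs: np.ndarray) -> int:
    """
    traces: (n_traces, n_samples) measured power traces (hypothetical data).
    inputs: (n_traces,) known activations fed to the targeted MAC unit.
    Returns the 8-bit candidate weight maximizing |Pearson correlation|.
    """
    # Center the traces once; each column is one sample point in time.
    t = traces - traces.mean(axis=0)
    t_norm = np.linalg.norm(t, axis=0) + 1e-12

    best_corr, best_w = 0.0, 0
    for w in range(256):
        # Hypothetical leakage of the product's low byte for this candidate.
        hyp = hamming_weight((inputs * w) & 0xFF).astype(np.float64)
        hyp -= hyp.mean()
        # Correlation of the hypothesis against every sample point.
        corr = (hyp @ t) / (np.linalg.norm(hyp) * t_norm)
        peak = np.max(np.abs(corr))
        if peak > best_corr:
            best_corr, best_w = peak, w
    return best_w
```

In practice the attack would repeat this per processing element of the systolic array, using knowledge of the dataflow to predict which intermediate value each element computes at each cycle.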
