Abstract

To lower cost and increase the utilization of Cloud Field-Programmable Gate Arrays (FPGAs), researchers have recently been exploring the concept of multi-tenant FPGAs, where multiple independent users simultaneously share the same remote FPGA. Despite its benefits, multi-tenancy opens up the possibility of malicious users co-locating on the same FPGA as a victim user and extracting sensitive information. This issue becomes especially serious when the victim is running a machine learning algorithm that processes sensitive or private information. To demonstrate the dangers, this paper presents a remote, power-based side-channel attack on a deep neural network accelerator running in a variety of Xilinx FPGAs and also on Cloud FPGAs using Amazon Web Services (AWS) F1 instances. In particular, this work shows how to remotely obtain voltage estimates as a deep neural network inference circuit executes, and how this information can be used to recover the inputs to the neural network. The attack is demonstrated with a binarized convolutional neural network used to recognize handwritten digit images from the MNIST database. With the use of precise time-to-digital converters for remote voltage estimation, the MNIST inputs can be successfully recovered with a maximum normalized cross-correlation of 79% between the input image and the recovered image on local FPGA boards and 72% on AWS F1 instances. The attack requires neither physical access to the FPGA hardware nor any modifications to it.
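
As a point of reference for the similarity metric quoted above, the following is a minimal sketch (not the authors' evaluation code) of how a zero-mean normalized cross-correlation between an input image and a recovered image could be computed with NumPy; the function name and the placeholder loaders in the comments are illustrative assumptions.

```python
import numpy as np

def normalized_cross_correlation(original: np.ndarray, recovered: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equally sized images.

    Returns a value in [-1, 1]; 1.0 means the recovered image matches the
    original exactly, up to a constant brightness offset and contrast scaling.
    """
    a = original.astype(np.float64).ravel()
    b = recovered.astype(np.float64).ravel()
    a -= a.mean()                      # remove mean so constant offsets are ignored
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:                   # one of the images is constant
        return 0.0
    return float(np.dot(a, b) / denom)

# Hypothetical usage with a 28x28 MNIST digit and a recovered image:
# original = load_mnist_digit(index)              # placeholder, not a real API
# recovered = recover_image_from_traces(traces)   # placeholder
# print(f"NCC: {normalized_cross_correlation(original, recovered):.0%}")
```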
