Abstract

Machine learning as a service (MLaaS) offers users the benefit of training state-of-the-art neural network models on fast hardware at low cost. However, it also raises security concerns, since the user does not fully trust the cloud. To prove to the user that the ML training results are legitimate, existing approaches mainly adopt cryptographic techniques such as secure multi-party computation, which incur large overheads. In this paper, we model the problem of verifying ML training effort as an anomaly detection problem. We design a verification system, dubbed VeriTrain, which combines unsupervised anomaly detection and hypothesis testing techniques to verify the legitimacy of training efforts on the MLaaS cloud. VeriTrain runs inside trusted execution environments (TEEs) on the same cloud machine to ensure the integrity of its execution. We consider a threat model in which the cloud model trainer is a lazy attacker who tries to fool VeriTrain with minimal training effort. Extensive evaluations on multiple neural network models and datasets show that VeriTrain performs well in detecting parameter updates crafted by the attacker. We also implement VeriTrain with Intel SGX and show that it incurs only moderate overhead.
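To make the abstract's core idea concrete, the sketch below illustrates one plausible instantiation: score per-step parameter updates with an unsupervised anomaly detector, then apply a hypothesis test over the flagged fraction. This is a minimal sketch under assumed design choices, not VeriTrain's actual algorithm; the feature set in `update_features`, the contamination rate, and the significance level are all illustrative assumptions.

```python
# Illustrative sketch only: unsupervised anomaly detection over parameter
# updates, followed by a hypothesis test. Not the paper's actual method;
# all features, thresholds, and rates below are assumptions.
import numpy as np
from scipy import stats
from sklearn.ensemble import IsolationForest

def update_features(prev_params, new_params):
    """Summarize one parameter update as a small feature vector (assumed)."""
    delta = new_params - prev_params
    return np.array([
        np.linalg.norm(delta),   # overall update magnitude
        np.abs(delta).mean(),    # mean absolute per-parameter change
        delta.std(),             # spread of the change
    ])

def verify_training(param_snapshots, expected_anomaly_rate=0.05, alpha=0.01):
    """Return True if the sequence of updates looks like honest training."""
    feats = np.stack([
        update_features(a, b)
        for a, b in zip(param_snapshots, param_snapshots[1:])
    ])
    # Unsupervised anomaly detection over the update features.
    detector = IsolationForest(contamination=expected_anomaly_rate,
                               random_state=0).fit(feats)
    flagged = int((detector.predict(feats) == -1).sum())
    # One-sided binomial test: are anomalous updates significantly more
    # frequent than expected under honest training?
    p_value = stats.binomtest(flagged, len(feats), expected_anomaly_rate,
                              alternative="greater").pvalue
    return p_value >= alpha  # accept the training effort as legitimate

# Usage example: 100 snapshots of a flattened 1000-parameter model,
# simulated here as a random walk standing in for honest SGD updates.
rng = np.random.default_rng(0)
snapshots = list(np.cumsum(rng.normal(0.0, 0.01, (100, 1000)), axis=0))
print("legitimate:", verify_training(snapshots))
```

In a TEE deployment such as the paper's SGX setting, a check like `verify_training` would run inside the enclave over attested parameter snapshots, so the lazy trainer cannot tamper with the verdict itself.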
