Abstract
Data privacy is a fundamental challenge for Deep Learning (DL) in many applications. In this work, we propose SecureTrain, which carries out privacy-preserved DL model training efficiently and without accuracy loss. SecureTrain enables joint linear and non-linear computation based on the Homomorphic Secret Sharing (HSS) technique, performing approximation-free non-polynomial operations to achieve training stability and prevent accuracy loss. Meanwhile, by carefully devising the share set and exploiting the dataflow of the whole training process, it eliminates the time-consuming homomorphic permutation operation (Perm) and features an efficient piggyback design, which significantly reduces the overall system training time. We analyze the computation and communication complexity of SecureTrain and prove its security. We implement SecureTrain and benchmark its performance on well-known datasets for both inference and training. For inference, SecureTrain not only ensures privacy-preserved inference but also achieves a speedup of up to 48× over state-of-the-art inference frameworks. For training, SecureTrain maintains model accuracy and stability comparable to plaintext training, in sharp contrast to other schemes. To the best of our knowledge, this is the first work that addresses two fundamental challenges in privacy-preserved deep neural network training: accuracy loss/training instability, and computation efficiency.
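For readers unfamiliar with the secret-sharing primitive the abstract refers to, the following is a minimal illustrative sketch of additive (2-of-2) secret sharing, the basic building block underlying HSS-style protocols. It is not the paper's protocol; the modulus choice and function names here are illustrative assumptions.

```python
import secrets

P = 2**61 - 1  # share modulus; an arbitrary prime chosen for illustration

def share(x):
    """Split secret x into two additive shares: x = s0 + s1 (mod P).
    Each share alone is uniformly random and reveals nothing about x."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

# Linear operations can be evaluated locally on the shares:
a0, a1 = share(20)
b0, b1 = share(22)
# each party adds its own shares; reconstructing the results yields the sum
assert reconstruct((a0 + b0) % P, (a1 + b1) % P) == 42
```

Non-linear (non-polynomial) operations are the hard part of schemes like this, which is where SecureTrain's approximation-free joint computation comes in.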