Abstract

Privacy-preserving neural inference protects both the user's input data and the model weights from being leaked to other parties during the inference of a deep learning model. To achieve this protection, the inference is typically performed within a secure domain, and only the final result is revealed in plaintext. However, performing the computations in the secure domain incurs roughly a thousandfold overhead compared with the insecure version, especially when the operations of the entire model are mapped to the secure domain, which is the computation scheme adopted by existing works. This work is inspired by transfer learning, in which the weights of some model layers are transferred from a publicly available, pre-built deep learning model. This opens the door to further boosting execution efficiency, since the secure computations can be performed selectively on parts of the transferred model. We have built a compiler framework, SecureTVM, that automatically translates a trained model into its secure version, where the model layers to be protected can be selectively configured by the model provider. As a result, SecureTVM outperforms the state of the art, CrypTFlow2, by a factor of 55 for the transfer learning model. We believe this work takes a step toward the practical use of privacy-preserving neural inference in real-world applications.
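To illustrate the idea of selective protection, the sketch below shows a minimal, hypothetical Python example (not SecureTVM's actual API): publicly transferred layers run in plaintext, while only the layers marked as private are routed through a secure backend. The names LayerSpec, secure_matmul, and run_inference are assumptions for illustration, and secure_matmul is a stand-in for a cryptographic protocol such as a secret-shared matrix multiplication.

import numpy as np

class LayerSpec:
    def __init__(self, name, weight, secure):
        self.name = name        # layer identifier
        self.weight = weight    # weight matrix (public if transferred)
        self.secure = secure    # True -> evaluate in the secure domain; False -> plaintext

def secure_matmul(x, w):
    # Placeholder for a secure-domain protocol (e.g., secret-shared GEMM).
    # It computes the same result here so the sketch runs end to end,
    # but in practice this step carries the ~1000x overhead.
    return x @ w

def run_inference(x, layers):
    for layer in layers:
        if layer.secure:
            x = secure_matmul(x, layer.weight)  # protected layer
        else:
            x = x @ layer.weight                # transferred (public) layer
        x = np.maximum(x, 0)                    # ReLU activation
    return x

# The transferred backbone stays public; only the newly trained head is protected.
layers = [
    LayerSpec("backbone_fc1", np.random.randn(8, 16), secure=False),
    LayerSpec("backbone_fc2", np.random.randn(16, 16), secure=False),
    LayerSpec("private_head", np.random.randn(16, 4), secure=True),
]
print(run_inference(np.random.randn(1, 8), layers))

In this configuration only one of the three layers incurs the secure-domain cost, which is the source of the speedup reported for the transfer learning model.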
