Abstract

Privacy-preserving machine learning as a service (PP-MLaaS) enables secure model computation over a client's private input through a series of privacy-preserving operations. However, existing PP-MLaaS schemes suffer from low computational efficiency, which hinders their adoption in real-world scenarios. To mitigate this issue, this article seeks to identify the main algorithmic problems that limit computational efficiency and to present possible enablers for accelerating secure model computation. The investigation consists of four hierarchical parts. The first part reviews existing PP-MLaaS frameworks. The second part presents the problem statement, illustrating the issues that lead to inefficient computation from the perspectives of computational logic and the computational model, respectively. The third part investigates two potential strategies for improving the efficiency of privacy-preserving neural computation: optimizing the cost hierarchy of the computation process and applying crypto-friendly pruning to the neural computation model. The last part discusses research directions and open topics for efficient PP-MLaaS, including rotation-free, neural architecture search-based, hardware-aware, and nonlinearity-efficient PP-MLaaS.
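
For illustration only (this sketch is not taken from the article): one common privacy-preserving operation underlying secure model computation is additive secret sharing, in which the client's input is split into random shares so a linear layer can be evaluated without exposing the input in the clear. The toy Python below assumes public model weights, a two-party setting, and a power-of-two ring, purely for brevity; real PP-MLaaS frameworks combine such share arithmetic with further cryptographic machinery.

```python
# Toy additive secret sharing for a single linear layer w.x (illustrative sketch only).
import random

MODULUS = 2**32  # ring size for the shares (an assumption for this toy example)

def share(value):
    """Split an integer into two additive shares modulo MODULUS."""
    r = random.randrange(MODULUS)
    return r, (value - r) % MODULUS

def reconstruct(s0, s1):
    """Recombine two additive shares into the original value."""
    return (s0 + s1) % MODULUS

# Client's private input vector and (public, for brevity) model weights.
x = [3, 1, 4]
w = [2, 7, 1]

# The client splits each input element; the server only ever sees its own shares.
client_shares, server_shares = zip(*(share(xi) for xi in x))

# Each party evaluates the linear layer on its shares locally.
client_partial = sum(wi * si for wi, si in zip(w, client_shares)) % MODULUS
server_partial = sum(wi * si for wi, si in zip(w, server_shares)) % MODULUS

# Reconstructing the two partial results yields the plaintext dot product.
assert reconstruct(client_partial, server_partial) == sum(wi * xi for wi, xi in zip(w, x))
```

Linear layers compose cheaply with shares in this way, whereas nonlinear layers such as ReLU require much more expensive protocols, which is one reason nonlinearity-efficient designs appear among the open topics above.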
