Abstract

Deep learning has shown great potential in real-world applications. However, users (clients) who want to use deep learning applications must send their data to the deep learning service provider (server), which exposes the client's data to the server and raises serious privacy concerns. To address this issue, we propose a protocol named EPIDL that performs efficient and secure inference on neural networks. The protocol enables the client and server to complete inference tasks via secure multi-party computation (MPC) while the client's private data is kept secret from the server. The contributions of EPIDL can be summarized as follows: First, we optimize the convolution operation and matrix multiplication so that the total communication is reduced; Second, we propose a new method, based on oblivious transfer and garbled circuits, for the truncation that follows secure multiplication, which does not fail and can be executed together with the ReLU activation function; Finally, we replace complex activation functions with MPC-friendly approximations. We implement our work in C++ and accelerate local matrix computation with CUDA support. We evaluate the efficiency of EPIDL on privacy-preserving deep learning inference tasks; for example, a secure inference on the MNIST dataset with the LeNet model takes about 0.14 s. Compared with state-of-the-art work, ours is 1.8×–98× faster over LAN and WAN, respectively. The experimental results show that EPIDL is efficient and privacy-preserving.
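To illustrate why a truncation step must follow secure multiplication, the sketch below shows two-party additive secret sharing with fixed-point encoding over the ring Z_{2^64}. It is a minimal illustration, not the EPIDL protocol: the ring size, scale, and function names are assumptions, and the truncation here is done in the clear on the reconstructed value, whereas EPIDL performs it securely (via oblivious transfer and garbled circuits) on the shares.

```python
import random

RING = 1 << 64    # arithmetic ring Z_{2^64} (illustrative choice)
SCALE = 1 << 13   # fixed-point fractional bits (illustrative choice)

def share(x):
    """Additively secret-share a ring element between two parties."""
    r = random.randrange(RING)
    return r, (x - r) % RING

def reveal(a, b):
    """Reconstruct the secret from both shares."""
    return (a + b) % RING

def encode(f):
    """Encode a real number as a fixed-point ring element."""
    return int(round(f * SCALE)) % RING

def decode(v):
    """Decode, interpreting the top half of the ring as negatives."""
    if v >= RING // 2:
        v -= RING
    return v / SCALE

def truncate(v):
    """Divide a (signed) fixed-point product by SCALE.

    Done here in the clear; doing this obliviously on shares without
    a failure probability is the problem EPIDL's OT/GC method targets.
    """
    if v >= RING // 2:
        v -= RING
    return (v // SCALE) % RING

# Share two fixed-point values between the two parties.
x0, x1 = share(encode(1.5))
y0, y1 = share(encode(-2.25))

# A secure multiplication protocol would yield shares of x*y; here we
# multiply the reconstructed values just to expose the scale issue:
prod = (reveal(x0, x1) * reveal(y0, y1)) % RING

# The product carries SCALE^2, so it must be truncated by SCALE
# before the next layer of the network can consume it.
result = decode(truncate(prod))  # ≈ 1.5 * -2.25 = -3.375
```

The key point is that naively shifting each party's share independently can produce a large error for some random shares, which is why a dedicated, failure-free truncation protocol matters for inference accuracy.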
