Abstract

Pre-trained models such as BERT have achieved remarkable results on natural language processing tasks in recent years. In this paper, we investigate privacy-preserving inference for pre-training-based neural networks in a two-server framework built on the additive secret sharing technique. Our protocol allows a resource-constrained client to request two powerful servers to cooperatively carry out natural language processing tasks without revealing any useful information about its data. We first design a series of secure sub-protocols for the non-linear functions used in the BERT model; these sub-protocols have broad applications and are of independent interest. Building on these sub-protocols, we propose SecBERT, a privacy-preserving inference protocol for pre-training-based neural networks. SecBERT is the first cryptographically secure privacy-preserving inference protocol for pre-training-based neural networks. We demonstrate the security, efficiency, and accuracy of SecBERT through comprehensive theoretical analysis and experiments.
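To make the underlying primitive concrete, the sketch below illustrates two-party additive secret sharing, the technique the abstract refers to: the client splits its input into two random-looking shares, one per server, and neither server alone learns anything about the value. This is a minimal illustration assuming arithmetic over the ring Z_{2^64}; the function names are hypothetical and do not correspond to SecBERT's actual interface.

```python
# Minimal sketch of two-party additive secret sharing over Z_{2^64}.
# Assumption: ring size 2^64; split_share/reconstruct are illustrative names,
# not part of the SecBERT protocol specification.
import secrets

RING = 1 << 64  # all arithmetic is modulo 2^64


def split_share(x: int) -> tuple[int, int]:
    """Split secret x into two additive shares, one for each server."""
    r = secrets.randbelow(RING)      # uniformly random mask
    return r, (x - r) % RING         # share0 + share1 = x (mod 2^64)


def reconstruct(share0: int, share1: int) -> int:
    """Recombine both servers' shares to recover the secret."""
    return (share0 + share1) % RING


# Example: the client shares its input; each share alone is uniformly random.
s0, s1 = split_share(42)
assert reconstruct(s0, s1) == 42
```

Linear operations (additions, and multiplications by public constants) can be evaluated locally on such shares; the paper's secure sub-protocols handle the non-linear functions in BERT, which cannot be computed share-wise.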
