The deployment of deep learning applications must address growing privacy concerns when private and sensitive data are used for training. A conventional deep learning model is prone to privacy attacks that can recover sensitive information either from the model parameters or through queries to the inference model. Recently, differential privacy (DP) has been proposed to offer provable privacy guarantees by randomizing the training process of neural networks. However, many approaches tend to provide worst-case privacy protection for model publishing, inevitably impairing the accuracy of the trained models. We therefore present a novel private knowledge transfer strategy in which the private teacher trained on sensitive data is not publicly accessible, while the student models can be released with privacy guarantees. In this paper, a three-player (teacher-student-discriminator) learning framework, Private Knowledge Distillation with Generative Adversarial Networks (PKDGAN), is proposed, where the student acquires distilled knowledge from the teacher and is trained adversarially against the discriminator to produce outputs similar to the teacher's. Moreover, a cooperative learning strategy is suggested to support the collective training of multiple students against the discriminator when each student has insufficient unlabelled training data. To enforce rigorous privacy guarantees, PKDGAN applies a Rényi differential privacy mechanism throughout the training process and uses it together with the moments accountant technique to track the privacy cost. PKDGAN allows students to be trained on unlabelled public data in very few epochs, which avoids exposing the training data while maintaining model performance. In the experiments, PKDGAN performs consistently well on various datasets (MNIST, SVHN, CIFAR-10, and Market-1501). Compared to prior works [1], [2], PKDGAN reduces the accuracy loss by 5-82% without compromising any privacy guarantee.
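To make the three-player training concrete, the following is a minimal PyTorch-style sketch of one distillation round, assuming a Gaussian (Rényi-DP) mechanism applied to the private teacher's logits and standard KD/GAN losses; all function names, module interfaces, and hyperparameters (`temperature`, `lam`, `sigma`) are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of the three-player (teacher-student-discriminator)
# training step described in the abstract. Losses and names are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def dp_teacher_logits(teacher, x, sigma=1.0):
    """Query the private teacher and perturb its logits with Gaussian noise,
    a standard Gaussian mechanism analyzable under Renyi DP; sigma sets
    the noise scale (assumed placement of the privacy mechanism)."""
    with torch.no_grad():
        logits = teacher(x)
    return logits + sigma * torch.randn_like(logits)

def student_step(student, teacher, discriminator, x_public,
                 opt_s, temperature=4.0, lam=0.5, sigma=1.0):
    """One student update on unlabelled public data: a distillation (KL)
    loss toward the noised teacher logits, plus an adversarial loss that
    pushes the discriminator to label student outputs as 'teacher'."""
    t_logits = dp_teacher_logits(teacher, x_public, sigma)
    s_logits = student(x_public)

    # Soft-label knowledge distillation loss (Hinton-style temperature).
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Adversarial loss: discriminator scores softmax outputs, 1 = "teacher".
    adv = F.binary_cross_entropy_with_logits(
        discriminator(F.softmax(s_logits, dim=1)),
        torch.ones(x_public.size(0), 1),
    )

    loss = kd + lam * adv
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()

def discriminator_step(student, teacher, discriminator, x_public,
                       opt_d, sigma=1.0):
    """One discriminator update: distinguish (noised) teacher output
    distributions from student output distributions."""
    with torch.no_grad():
        t_prob = F.softmax(dp_teacher_logits(teacher, x_public, sigma), dim=1)
        s_prob = F.softmax(student(x_public), dim=1)
    real = F.binary_cross_entropy_with_logits(
        discriminator(t_prob), torch.ones(x_public.size(0), 1))
    fake = F.binary_cross_entropy_with_logits(
        discriminator(s_prob), torch.zeros(x_public.size(0), 1))
    loss = real + fake
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()
    return loss.item()
```

In this sketch the student never sees the sensitive training data, only noised teacher responses on public inputs, which matches the abstract's release model; the cumulative privacy cost of repeated teacher queries would be tracked separately with a moments-accountant-style analysis, which is omitted here.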