Abstract

Protecting the privacy of training data in deep learning has attracted increasing research attention in recent years. Private Aggregation of Teacher Ensembles (PATE) uses transfer learning and differential privacy to provide a broadly applicable data-privacy framework for deep learning. PATE combines the Laplace mechanism with teacher voting to achieve privacy-preserving classification. However, the Laplace mechanism may greatly distort the histogram of vote counts for each class. This paper proposes a novel exponential mechanism within PATE to ensure privacy protection. The proposed method improves both the protection effect and the accuracy through a screening algorithm, and uses differential privacy composition theorems to reduce the total privacy budget. A data-dependent analysis demonstrates that the exponential mechanism outperforms the original Laplace mechanism. Experimental results show that the proposed method trains models with improved accuracy while requiring a smaller privacy budget than the original PATE framework.
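For intuition, the following is a minimal sketch (not the authors' implementation) contrasting PATE's Laplace-noised vote aggregation with an exponential-mechanism alternative for a single query's teacher vote histogram. The noise scale, utility function, and sensitivity value are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_aggregate(vote_counts, epsilon):
    """Laplace-style PATE aggregation (sketch): add Laplace noise to each
    class count and return the noisy argmax. The scale 2/epsilon is one
    common parameterization, since one teacher change affects two counts."""
    noisy = vote_counts + rng.laplace(0.0, 2.0 / epsilon, size=len(vote_counts))
    return int(np.argmax(noisy))

def exponential_aggregate(vote_counts, epsilon, sensitivity=1.0):
    """Exponential-mechanism aggregation (sketch): treat each class's vote
    count as its utility and sample a class with probability proportional
    to exp(epsilon * count / (2 * sensitivity))."""
    scores = epsilon * np.asarray(vote_counts, dtype=float) / (2.0 * sensitivity)
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return int(rng.choice(len(vote_counts), p=probs))

# Example: histogram of 100 teachers' votes over 10 classes for one query.
votes = np.array([3, 5, 2, 60, 4, 6, 5, 7, 4, 4])
print("Laplace aggregation:    ", laplace_aggregate(votes, epsilon=0.5))
print("Exponential aggregation:", exponential_aggregate(votes, epsilon=0.5))
```

Because the exponential mechanism samples a label rather than perturbing every count additively, a class with a strong consensus is returned with high probability without the per-class distortion that heavy Laplace noise can introduce.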
