Abstract

Deep neural network based models have shown excellent ability to solve complex learning tasks in computer vision, speech recognition, and natural language processing. A deep neural network learns data representations from the input data by solving a specific learning task. Optimization algorithms such as SGD, Momentum, Nesterov, RMSProp, and Adam are commonly used to minimize the loss function of a deep neural network model. A trained model, however, may leak information about its training data. To mitigate this leakage, a differentially private optimization algorithm can be used to train the neural network model. In this paper, differentially private Momentum, Nesterov, RMSProp, and Adam algorithms were developed and used to train deep neural network models such as DNNs and CNNs. It is shown that these differentially private optimization algorithms can outperform differentially private SGD, yielding higher model accuracy and faster convergence.
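The sketch below illustrates the general pattern such differentially private optimizers follow: per-example gradients are clipped to bound their sensitivity, Gaussian noise calibrated to the clipping norm is added, and the noisy gradient is fed into the usual optimizer update (Adam, in this example). This is a minimal illustrative sketch, not the paper's implementation; the function name `dp_adam_step` and the hyperparameter values (`clip_norm`, `noise_multiplier`) are assumptions chosen for demonstration.

```python
import numpy as np

def dp_adam_step(params, per_example_grads, m, v, t,
                 lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                 clip_norm=1.0, noise_multiplier=1.1):
    """One differentially private Adam update (illustrative sketch only).

    Per-example gradients are clipped to `clip_norm`, summed, perturbed
    with Gaussian noise scaled by `noise_multiplier * clip_norm`, averaged,
    and then passed through the standard Adam moment updates.
    """
    # Clip each example's gradient to bound its L2 sensitivity.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)

    # Sum clipped gradients, add calibrated Gaussian noise, then average.
    noisy_grad = (np.sum(clipped, axis=0)
                  + np.random.normal(0.0, noise_multiplier * clip_norm,
                                     size=params.shape)) / batch_size

    # Standard Adam moment estimates applied to the noisy gradient.
    m = beta1 * m + (1 - beta1) * noisy_grad
    v = beta2 * v + (1 - beta2) * noisy_grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

The same clip-then-noise step can be placed in front of any of the other optimizers mentioned above (Momentum, Nesterov, RMSProp) by swapping out the moment-update portion.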
