Abstract

Domain adaptation is a viable solution for deep learning with small data. However, domain adaptation models trained on data containing sensitive information may violate personal privacy. In this article, we propose a solution for privacy-preserving unsupervised domain adaptation, called DP-CUDA, which is based on differentially private gradient projection and the contradistinguisher. Unlike the traditional domain adaptation pipeline, which first searches for domain-invariant features between the source and target domains and then transfers knowledge, DP-CUDA trains the model in the source domain by supervised learning on labeled data; during training of the target model, feature learning solves the classification task in an end-to-end manner directly on unlabeled data, with differentially private noise injected into the gradients. We conduct extensive experiments on a variety of benchmark datasets, including MNIST, USPS, SVHN, VisDA-2017, Office-31, and Amazon Review, to demonstrate the utility and privacy-preserving properties of the proposed method.
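
To make the gradient-noising step concrete: the abstract does not detail DP-CUDA's gradient-projection mechanism, so the sketch below uses the standard DP-SGD clip-and-noise recipe as a stand-in. It is a minimal PyTorch illustration, not the paper's implementation; the function name dp_sgd_step and the hyperparameter values (clip_norm, noise_multiplier) are illustrative assumptions.

    import torch

    def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                    clip_norm=1.0, noise_multiplier=1.1):
        # One differentially private update: clip each per-example
        # gradient to L2 norm clip_norm, sum, add Gaussian noise
        # calibrated to that norm, then average and step.
        # NOTE: hypothetical sketch; hyperparameters are not from the paper.
        params = [p for p in model.parameters() if p.requires_grad]
        accum = [torch.zeros_like(p) for p in params]

        # Per-example gradients via microbatches of size 1
        # (simple but slow; vectorized per-sample gradients
        # would be used in practice).
        for x, y in zip(batch_x, batch_y):
            model.zero_grad()
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            # Rescale the whole per-example gradient so its
            # global L2 norm is at most clip_norm.
            total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
            scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
            for g, p in zip(accum, params):
                g.add_(p.grad * scale)

        batch_size = len(batch_x)
        for p, g in zip(params, accum):
            # Gaussian noise proportional to the clipping norm; the
            # (epsilon, delta) guarantee follows from privacy accounting.
            noise = torch.randn_like(g) * noise_multiplier * clip_norm
            p.grad = (g + noise) / batch_size
        optimizer.step()

In the DP-CUDA setting described above, such a noisy update would be applied while the target model is trained end-to-end on unlabeled target data; the privacy budget consumed per step depends on the noise multiplier and the sampling rate.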
