Abstract

The training of deep neural networks relies on massive amounts of high-quality labeled data, which are expensive to obtain in practice. To tackle this problem, domain adaptation transfers knowledge from a label-rich source domain to an unlabeled target domain in order to learn a classifier that classifies target data well. However, existing domain adaptation methods do not consider privacy issues. In this paper, we introduce a novel method that builds an effective model without sharing sensitive data between the source and target domains: the target-domain party can benefit from the label-rich source domain without revealing its private data. We recast traditional domain adaptation in a federated setting, where a global server maintains a shared global model. Additionally, a homomorphic encryption (HE) scheme is used to guarantee the security of the computation. Experiments show that our method performs effectively without reducing accuracy, achieving secure knowledge transfer and privacy-preserving domain adaptation.
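As a rough illustration of the kind of protocol the abstract describes, the sketch below shows how an additively homomorphic scheme lets a server average model updates it cannot read. This is a minimal sketch, not the paper's protocol: it assumes the Paillier cryptosystem via the third-party `phe` (python-paillier) package, and the helper names (`encrypt_update`, `server_aggregate`) and the toy 4-dimensional updates are illustrative, not from the paper.

```python
# Minimal sketch (not the paper's exact protocol): a server aggregates
# model updates from the source and target parties under additively
# homomorphic encryption, so it never sees any update in plaintext.
# Assumes the third-party `phe` library: pip install phe
import numpy as np
from phe import paillier

# The parties share a public key; the private key stays with the parties
# (real deployments need more careful key management than shown here).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_update(update):
    """Encrypt each coordinate of a local model update (gradient) vector."""
    return [public_key.encrypt(float(x)) for x in update]

def server_aggregate(encrypted_updates):
    """Server side: average encrypted updates coordinate-wise.
    Paillier ciphertexts support addition and multiplication by a
    plaintext scalar, which is all that averaging requires."""
    n = len(encrypted_updates)
    dim = len(encrypted_updates[0])
    return [sum(upd[i] for upd in encrypted_updates) * (1.0 / n)
            for i in range(dim)]

# Toy local updates computed by the source and target parties.
source_update = np.array([0.10, -0.20, 0.05, 0.30])
target_update = np.array([0.12, -0.18, 0.07, 0.28])

# Each party encrypts before uploading; the server only sees ciphertexts.
aggregated = server_aggregate([encrypt_update(source_update),
                               encrypt_update(target_update)])

# The parties decrypt the aggregate and apply it to the shared global model.
new_update = np.array([private_key.decrypt(c) for c in aggregated])
print(new_update)  # approximately the element-wise mean of the two updates
```

An additively homomorphic scheme suffices for this step because federated averaging only needs ciphertext addition and multiplication by a plaintext scalar; the server never holds the private key.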

Highlights

  • Due to improvements in computing power and rich data resources, deep learning has achieved outstanding performance in various applications

  • We present the first method for privacy-preserving unsupervised domain adaptation in which the target domain is fully unlabeled

  • We focus on unsupervised domain adaptation, which generally assumes that the source and target domains share the same label space but differ in their data distributions



Introduction

Due to improvements in computing power and rich data resources, deep learning has achieved outstanding performance in various applications. This excellent performance relies on massive amounts of labeled data for supervised training, while collecting or manually labeling data at large scale is expensive and sometimes difficult in practice. Transfer learning addresses this problem: it aims to build models that perform well on a target domain by transferring the knowledge contained in a related source domain whose distribution differs [1]. Domain adaptation is the particular case of transfer learning where the sample and label spaces remain unchanged and only the probability distributions change [2]. Existing domain adaptation methods put source-domain data and target-domain data together for training, so the two parties reveal their own sensitive data to each other.
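To make this definition concrete, the standard unsupervised setting can be written as follows (the notation is ours, chosen to match the definition above, not copied from the paper):

```latex
% Unsupervised domain adaptation: same feature space X and label space Y,
% different joint distributions over them.
\begin{align*}
  &\mathcal{X}_s = \mathcal{X}_t, \qquad \mathcal{Y}_s = \mathcal{Y}_t,
   \qquad P_s(X, Y) \neq P_t(X, Y), \\
  &\text{given labeled } \mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s} \sim P_s
   \text{ and unlabeled } \mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t} \sim P_t(X), \\
  &\text{learn } f : \mathcal{X} \to \mathcal{Y} \text{ minimizing }
   \mathbb{E}_{(x, y) \sim P_t}\big[\ell\big(f(x), y\big)\big].
\end{align*}
```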
