Abstract

High-quality data and the machine learning (ML) models derived from it are gradually becoming commercial commodities, deployed effectively in a growing number of areas. ML model providers possess a set of trained models together with large amounts of source data stored on their servers. To obtain a model tailored to their needs, consumers are typically required to upload their domain-specific data so that domain adaptation can be conducted on the server side. However, to protect the private information contained in consumers' training data, and to preserve the commercial competitiveness of the ML service, it is preferable that no data is exchanged between servers and consumers. Moreover, consumers' data usually lacks supervision, i.e., classification labels. In this work, we therefore study how to conduct unsupervised domain adaptation (UDA) with no data exchange between domains. We are the first to propose a novel memory-cache-based adversarial training (AT) strategy for UDA that operates on the target side without the source data (access to source data is an essential requirement for regular AT). Our method also includes a multiple pseudo-labelling operation that is more accurate and robust than single pseudo labelling. The AT and multiple labelling work collaboratively to extract features shared across domains and to adapt the model more closely to the target domain. We carry out extensive evaluation experiments on a number of datasets against a number of baselines, and the results show that our proposed method performs very well and exceeds state-of-the-art performance on all tasks. Finally, we discuss how to extend our method to partial and open-set domain adaptation.
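As a concrete illustration only (the abstract does not spell out the procedure), a minimal sketch of one possible multiple pseudo-labelling step is given below, assuming target-domain feature embeddings and classifier logits are available; the function name, the centroid-based second labeller, and the agreement rule are all illustrative assumptions, not the paper's exact method.

import numpy as np

def multiple_pseudo_labels(features, logits):
    """Hypothetical sketch: cross-check two labellings of target data.

    features: (N, D) target-domain embeddings.
    logits:   (N, C) outputs of the source-trained classifier.
    Returns pseudo labels and a mask marking which ones to trust.
    """
    # First labelling: argmax of the classifier's softmax predictions.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    labels_cls = probs.argmax(axis=1)

    # Second labelling: nearest class centroid in feature space, where
    # centroids are prediction-weighted means of the target features.
    centroids = (probs.T @ features) / (probs.sum(axis=0)[:, None] + 1e-8)
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels_centroid = dists.argmin(axis=1)

    # Keep only samples where the two labellings agree; disagreeing
    # samples are treated as unreliable and excluded from adaptation.
    mask = labels_cls == labels_centroid
    return labels_cls, mask

Requiring two independent labellings to agree before a pseudo label is used for training is one simple way to make pseudo labels more robust than a single argmax labelling, which matches the motivation stated in the abstract.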
