Abstract

<p>Most machine learning methods assume that training and test data are drawn independently from an identical distribution. This assumption often fails in practice, and training directly on data with a distribution shift typically yields poor test performance. To address this issue, a three-part model comprising a feature extractor, a classifier, and several domain discriminators is adopted herein. This unsupervised domain adaptation model is based on multiple adversarial learning with samples of differing importance. A deep neural network performs supervised classification on the source domain, while multiple adversarial networks serve as domain discriminators that align each category across the source and target domains, effectively transferring knowledge from the source domain to the target domain. Three loss functions (classification loss, label credibility loss, and discrimination loss) are presented to further optimize the model parameters. First, a label similarity metric is designed between the target- and source-domain data. Second, a credibility loss is proposed to progressively obtain accurate labels for the unlabeled target-domain data over training iterations. Finally, a discrimination loss is designed for the multiple adversarial domain discriminators to fully exploit the unlabeled target-domain data during training; it uses the predicted label probabilities as dynamic weights for the training data. The proposed method is compared with mainstream domain adaptation approaches on four public datasets: Office-31, MNIST, USPS, and SVHN. Experimental results show that the proposed method performs well on the target domain and improves the generalization performance of the model.</p>
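As a rough illustration of the probability-weighted discrimination loss described above, the sketch below (plain NumPy; all function and variable names are our own assumptions, not taken from the paper) weights each sample's contribution to the k-th class-wise domain discriminator by the classifier's predicted probability of class k:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def weighted_discrimination_loss(class_logits, domain_logits, domain_labels):
    """Multi-adversarial discrimination loss with dynamic sample weights.

    class_logits  : (n, K) classifier outputs; their softmax supplies the
                    per-sample, per-class dynamic weights.
    domain_logits : (n, K) raw outputs of the K per-class domain discriminators.
    domain_labels : (n,)   1.0 = source-domain sample, 0.0 = target-domain sample.
    Returns the mean over samples of the probability-weighted binary
    cross-entropy across the K discriminators.
    """
    p = softmax(class_logits)                 # dynamic weights p_k(x)
    d = 1.0 / (1.0 + np.exp(-domain_logits))  # sigmoid of each discriminator
    eps = 1e-12                               # avoid log(0)
    y = domain_labels[:, None]
    bce = -(y * np.log(d + eps) + (1.0 - y) * np.log(1.0 - d + eps))  # (n, K)
    return float((p * bce).sum(axis=1).mean())
```

In a full adversarial setup this scalar would be minimized by the discriminators and maximized (e.g. via a gradient-reversal layer) by the feature extractor; the sketch only shows how the predicted class probabilities act as weights.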
