Abstract

Semi-supervised domain adaptation (SSDA) is a promising technique for many applications: it transfers knowledge learned from a source domain with abundant labeled samples to a target domain with only limited labeled samples. Several previous works have attempted to reduce the distribution discrepancy between the source and target domains using adversarial-based or entropy-based methods, and these have improved SSDA performance. However, they still fall short of producing class-wise domain-invariant features, which limits classification accuracy in the target domain. We propose a novel mapping function using explicit class-wise matching that yields a better decision boundary in the embedding space and thus superior classification accuracy in the target domain. In general, it is harder to form a well-organized distribution for classification in a target domain with few labeled samples than in a source domain where rich label information is available. Our mapping function derives a representative vector for each class in the embedding spaces of the source and target domains and aligns them through class-wise matching. We observe that the distribution in the source-domain embedding space can thereby be effectively reproduced in the target domain. Our method achieves outstanding target-domain classification accuracy compared with previous works on the Office-31, Office-Home, VisDA-2017, and DomainNet datasets.
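The abstract does not specify the exact form of the mapping function, but the core idea of class-wise matching can be sketched as aligning per-class representative vectors (class means, often called prototypes) between the two domains. The following is a minimal illustrative sketch, assuming Euclidean alignment of class means; the function names and the squared-distance loss are our assumptions, not the paper's definitions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Representative vector per class: the mean embedding of that
    class's samples. `features` is (N, D), `labels` is (N,)."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_alignment_loss(src_feats, src_labels,
                             tgt_feats, tgt_labels, num_classes):
    """Class-wise matching term (illustrative): mean squared distance
    between corresponding source and target class prototypes."""
    p_src = class_prototypes(src_feats, src_labels, num_classes)
    p_tgt = class_prototypes(tgt_feats, tgt_labels, num_classes)
    return float(np.mean(np.sum((p_src - p_tgt) ** 2, axis=1)))
```

Minimizing such a term pulls each target class toward its source counterpart, which is one plausible way to reproduce the source-domain embedding structure in the target domain; in practice the target prototypes for unlabeled data would have to come from pseudo-labels or the few labeled target samples.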

Highlights

  • Traditional supervised learning approaches are quite effective, but they require sufficient labeled samples to successfully train a model

  • Depending on whether the label information of the target domain can be used for training, the domain adaptation method is categorized into two subgroups: unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA)

  • In experiments with the AlexNet backbone, our method achieved average target-domain classification accuracy up to 7.2% and 7.6% higher than Bidirectional Adversarial Training (BiAT) in the 1-shot and 3-shot settings, respectively

Introduction

Traditional supervised learning approaches are quite effective, but they require sufficient labeled samples to successfully train a model. Domain adaptation has emerged as a new machine learning strategy in which the model is built using a large amount of labeled data from a source domain and a small amount of labeled data (or even none) from the target domain. The key issue of domain adaptation is how to approximate the joint distribution of the source domain and target domain, i.e., to predict the labels of unlabeled target data with the minimum prediction error. Depending on whether the label information of the target domain can be used for training, domain adaptation methods are categorized into two subgroups: unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA).
