For cross-domain pattern classification, the supervised information (i.e., labeled patterns) in the source domain is often employed to help classify the unlabeled patterns in the target domain. In practice, multiple target domains are usually available, and the unlabeled patterns in these target domains that receive high-confidence predictions can also provide pseudo-supervised information for the downstream classification task. The performance in each target domain can be further improved if this pseudo-supervised information is used effectively. To this end, we propose an evidential multi-target domain adaptation (EMDA) method that takes full advantage of the useful information in the single source domain and the multiple target domains. In EMDA, we first align the distributions of the source and target domains by reducing the maximum mean discrepancy (MMD) and the covariance difference across domains. We then use a classifier learned from the labeled source-domain data to classify the query patterns in the target domains. The query patterns with high-confidence predictions are selected to train a new classifier, which yields a second piece of soft classification results for the query patterns. The two pieces of soft classification results are then combined by evidence theory. Because their reliabilities/weights are usually diverse, treating them equally often yields an unreliable combination result. We therefore use the distribution discrepancy across domains to estimate their weighting factors and discount the two pieces of soft classification results before fusing them. The evidential combination of the two discounted soft classification results is employed to make the final class decision. The effectiveness of EMDA was verified by comparing it with many advanced domain adaptation methods on several cross-domain pattern classification benchmark datasets.
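The alignment step described above reduces two discrepancy measures between domains. As a rough illustration (not the paper's implementation; the function names and the choice of a linear-kernel MMD are assumptions for this sketch), the two quantities could be computed as:

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: squared distance between
    the feature means of the source (Xs) and target (Xt) samples."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def covariance_difference(Xs, Xt):
    """Frobenius norm of the difference between the two domains'
    feature covariance matrices (as used in correlation alignment)."""
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.linalg.norm(Cs - Ct, ord="fro"))
```

Minimizing a weighted sum of these two terms over a learned feature transformation is one common way such distribution alignment is realized; the paper's exact objective may differ.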
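The fusion step relies on two standard tools from evidence (Dempster-Shafer) theory: discounting each mass function by a reliability weight, and combining the discounted masses by Dempster's rule. A minimal sketch of both operations is shown below; the dictionary-based mass representation, the class names, and the restriction to singleton focal elements plus total ignorance ("Theta") are simplifying assumptions of this example, not details from the paper.

```python
def discount(m, alpha):
    """Shafer discounting: scale each mass by reliability alpha and
    transfer the remaining belief to total ignorance (Theta)."""
    out = {fs: alpha * v for fs, v in m.items() if fs != "Theta"}
    out["Theta"] = 1.0 - alpha * (1.0 - m.get("Theta", 0.0))
    return out

def dempster_combine(m1, m2):
    """Dempster's rule for masses over singleton classes plus Theta:
    multiply pairs of focal masses, accumulate agreements, and
    normalize away the conflicting (disjoint) combinations."""
    fused, conflict = {}, 0.0
    for a, v1 in m1.items():
        for b, v2 in m2.items():
            if a == b:
                inter = a
            elif a == "Theta":
                inter = b
            elif b == "Theta":
                inter = a
            else:
                inter = None  # two different singletons: conflict
            if inter is None:
                conflict += v1 * v2
            else:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
    return {k: v / (1.0 - conflict) for k, v in fused.items()}
```

In the method summarized above, the two soft classification results would each be discounted by a weight derived from the cross-domain distribution discrepancy before being combined, and the class with the largest fused mass would be taken as the final decision.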