Abstract

Multi-source domain adaptation (MSDA) aims to transfer knowledge from multiple source domains to a single target domain. Inspired by single-source domain adaptation, existing methods solve MSDA by aligning the data distributions between the target domain and each source domain. However, aligning the target domain with dissimilar source domains can harm representation learning. To address this issue, an intuitive strategy for MSDA is to use an attention mechanism that enhances the positive effects of similar domains and suppresses the negative effects of dissimilar ones. We therefore propose Attention-Based Multi-Source Domain Adaptation (ABMSDA), which exploits domain correlations to alleviate the effects caused by dissimilar domains. To obtain the domain correlations between the source and target domains, ABMSDA first trains a domain recognition model to estimate the probability that each target image belongs to each source domain. Based on these domain correlations, a Weighted Moment Distance (WMD) is proposed to pay more attention to the source domains with higher similarity to the target. Furthermore, an Attentive Classification Loss (ACL) is developed to encourage the feature extractor to generate aligned and discriminative visual representations. Evaluations on two benchmarks demonstrate the effectiveness of the proposed model, e.g., an average improvement of 6.1% on the challenging DomainNet dataset.
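The weighting idea behind WMD can be sketched as follows. This is only a minimal illustration, assuming moment matching up to a fixed order between source and target features, with per-source weights taken from a domain recognition model's softmax output; the function names, moment order, and toy weights below are hypothetical and not taken from the paper.

```python
# Minimal sketch of a weighted moment-distance objective (assumptions noted above).
import torch

def moment_distance(fs: torch.Tensor, ft: torch.Tensor, order: int = 2) -> torch.Tensor:
    """Sum of L2 distances between the first `order` feature moments of two batches."""
    dist = fs.new_zeros(())
    for k in range(1, order + 1):
        dist = dist + torch.norm(fs.pow(k).mean(dim=0) - ft.pow(k).mean(dim=0), p=2)
    return dist

def weighted_moment_distance(source_feats, target_feat, domain_weights, order=2):
    """domain_weights[i]: estimated similarity of source i to the target (sums to 1)."""
    total = target_feat.new_zeros(())
    for w, fs in zip(domain_weights, source_feats):
        total = total + w * moment_distance(fs, target_feat, order)
    return total

# Toy usage: 3 source domains, 64-d features; weights come from a domain
# classifier's (hypothetical) logits averaged over target images.
sources = [torch.randn(32, 64) for _ in range(3)]
target = torch.randn(32, 64)
weights = torch.softmax(torch.tensor([1.2, 0.3, -0.5]), dim=0)
loss = weighted_moment_distance(sources, target, weights)
```

Dissimilar source domains receive small weights, so they contribute little to the alignment loss, which is the attention effect the abstract describes.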
