Abstract

Most feature-based Unsupervised Domain Adaptation (UDA) methods align the distributions of the source and target domains by minimizing the Maximum Mean Discrepancy (MMD) between them. However, because MMD relies on mean values, outliers can cause it to misalign the two distributions. In addition, to enhance the discriminability of the learned features, some feature-based UDA methods adopt sample-based distances to measure intra-class compactness and inter-class separability. Yet the number of sample-based distances grows quadratically with the sample size, so these methods compute compactness and separability inefficiently. To overcome these two problems, we propose Discriminative Transfer Feature Learning based on Robust Centers (DTFLRC) for UDA. First, we design robust class centers and robust domain centers to decrease the influence of outliers, and establish an MMD with robust centers to align the distributions of the two domains. Second, noticing that the number of centers is far smaller than the sample size, we construct three robust-centers-based distances that greatly reduce the number of distances required to measure intra-class compactness and inter-class separability: the Sample-Class-center distance, which measures intra-class compactness, and the Class-center-Domain-center and Class-center-Nearest-neighbor-class-center distances, which jointly reflect inter-class separability. The optimization objective of DTFLRC is then established to minimize the MMD with robust centers and the Sample-Class-center distance, and to maximize the Class-center-Domain-center and Class-center-Nearest-neighbor-class-center distances.
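The three center-based distances above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify how the robust centers are computed, so the coordinate-wise median is used here as a stand-in "robust" estimator (it is less sensitive to outliers than the mean), and the alignment term is simplified to the squared distance between the two robust domain centers. All function names are hypothetical. Note the computational point from the abstract: with n samples and C classes, sample-based distances require O(n²) pairs, whereas these terms need only O(n + C²) distances.

```python
import numpy as np

def robust_class_centers(X, y):
    """One stand-in robust center per class (coordinate-wise median;
    the paper's exact robust-center definition is not given in the abstract)."""
    return {c: np.median(X[y == c], axis=0) for c in np.unique(y)}

def robust_domain_center(centers):
    """Stand-in robust domain center: median over the class centers."""
    return np.median(np.stack(list(centers.values())), axis=0)

def center_mmd(src_domain_center, tgt_domain_center):
    """Simplified alignment term: squared distance between the two robust
    domain centers (a stand-in for the paper's MMD with robust centers)."""
    return float(np.linalg.norm(src_domain_center - tgt_domain_center) ** 2)

def center_based_terms(X, y, centers, domain_center):
    """The three robust-centers-based distances named in the abstract."""
    # Sample-Class-center: intra-class compactness (to be minimized).
    scc = float(np.mean([np.linalg.norm(x - centers[c]) ** 2
                         for x, c in zip(X, y)]))
    # Class-center-Domain-center: inter-class spread (to be maximized).
    ccd = float(np.mean([np.linalg.norm(mu - domain_center) ** 2
                         for mu in centers.values()]))
    # Class-center-Nearest-neighbor-class-center: distance from each class
    # center to its nearest other class center (to be maximized).
    mus = list(centers.values())
    ccn = float(np.mean([
        min(np.linalg.norm(mus[i] - mus[j]) ** 2
            for j in range(len(mus)) if j != i)
        for i in range(len(mus))
    ]))
    return scc, ccd, ccn
```

In a full method these terms would be combined into one objective (minimize alignment + compactness, maximize separability) and optimized over a learned feature transformation; the weighting between terms is not specified in the abstract.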
Finally, experimental results demonstrate that DTFLRC outperforms state-of-the-art methods, achieving accuracies of 81.8%, 94.6%, 90.2%, and 80.5% on the CMU-PIE, Office-Caltech, ImageCLEF-DA, and VisDA-2017 datasets, respectively.
