Abstract

This paper addresses the problem of visual domain adaptation, in which labeled data is available in the source domain for training, while only unlabeled data is available in the target domain for testing. Many recent domain adaptation methods merely concentrate on extracting domain-invariant features by simultaneously minimizing the distributional and geometrical divergence between domains, while ignoring within-class and between-class structure, especially in the target domain, where labeled data is unavailable. We propose Linear Discriminant Analysis via Pseudo Labels (LDAPL), a unified framework for visual domain adaptation that tackles both issues together. LDAPL learns domain-invariant features across both domains while preserving several important properties: it minimizes the shift between domains both statistically and geometrically, retains the original similarity of data samples, maximizes the target domain variance, and minimizes the within-class scatter while maximizing the between-class scatter of both domains. Specifically, LDAPL preserves the target domain's discriminative information (its within-class and between-class structure) using pseudo labels, which are refined until convergence. In extensive experiments on several visual cross-domain benchmarks, including Office+Caltech10 with three feature types (Speeded Up Robust Features (SURF), Deep Convolutional Activation Features (DeCAF6), and Visual Geometry Group fully connected layer (VGG-FC6) features), COIL20 (Columbia Object Image Library), digit, and PIE (Pose, Illumination, and Expression), LDAPL achieved average accuracies of 79.11%, 99.72%, 79.0%, and 84.50%, respectively. Comparative results on several visual cross-domain classification tasks verify that LDAPL significantly outperforms state-of-the-art primitive and domain adaptation methods. In particular, LDAPL gains 6.6%, 5.3%, 6.3%, and 44.93% in average accuracy over the baseline Joint Geometrical and Statistical Alignment (JGSA) method on Office+Caltech10 (SURF, DeCAF6, and VGG-FC6), COIL20, digit, and PIE, respectively.
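
The pseudo-label refinement loop described in the abstract can be illustrated in a few lines of Python. The snippet below is a minimal sketch, not the authors' LDAPL solver: the learned projection is stood in for by scikit-learn's LinearDiscriminantAnalysis, the 1-nearest-neighbour classifier, the iteration cap T, and all variable names are assumptions made for illustration, and the statistical and geometrical alignment terms of the full objective are omitted.

# Minimal sketch of pseudo-label refinement (illustrative, not the
# authors' LDAPL implementation; see the caveats in the text above).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def refine_pseudo_labels(Xs, ys, Xt, T=10):
    """Iteratively refine target pseudo labels in a shared subspace."""
    # Initial pseudo labels: classify raw target features from the source.
    yt = KNeighborsClassifier(n_neighbors=1).fit(Xs, ys).predict(Xt)
    for _ in range(T):
        # Fit a discriminative projection on both domains; target samples
        # contribute their current pseudo labels, so target within-class
        # and between-class structure shapes the learned subspace.
        lda = LinearDiscriminantAnalysis().fit(
            np.vstack([Xs, Xt]), np.concatenate([ys, yt]))
        Zs, Zt = lda.transform(Xs), lda.transform(Xt)
        # Re-label the target in the adapted subspace.
        yt_new = KNeighborsClassifier(n_neighbors=1).fit(Zs, ys).predict(Zt)
        if np.array_equal(yt_new, yt):  # converged: labels stopped changing
            return yt_new
        yt = yt_new
    return yt

The loop terminates once the target pseudo labels stop changing, matching the "refined until convergence" behaviour described above.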

Highlights

  • Machine learning (ML) is one of the most compelling recent technologies, attempting to imitate how the human brain learns

  • The Joint Geometrical and Statistical Alignment (JGSA) method considers only four objectives (IV-A1, IV-A3, IV-A5, and IV-A6); lacking objectives IV-A2 and IV-A4, its average performance is 8.9% lower than Linear Discriminant Analysis via Pseudo Labels (LDAPL) and 2.5% lower than the Domain-irrelevant class clustering (DICE) method

  • We observe that LDAPL improves by 25.96% over the best primitive method, Principal Component Analysis (PCA)

Introduction

Machine learning (ML) is one of the most compelling recent technologies, attempting to imitate how the human brain learns. An ML algorithm intends to discover and exploit the hidden patterns present in the training data (or source domain data); those patterns can then be used to identify new or unknown patterns in the test data (or target domain data). The primary constraint of primitive ML algorithms is that training and test data must follow the same distribution, which is rarely the case in real-world applications: such algorithms cannot withstand any shift from the training data to the test data.
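
As a toy illustration of this constraint (not an experiment from the paper), the snippet below fits a linear classifier on synthetic "source" data and evaluates it on a shifted "target" copy; the Gaussian class centres, the +2 feature shift, and the choice of logistic regression are all assumptions made for the demonstration.

# Toy illustration of the same-distribution assumption: a classifier fit
# on "source" data degrades once the "target" features are shifted.
# All data here is synthetic; numbers in comments are approximate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Source domain: two Gaussian classes centred at -1 and +1.
Xs = np.vstack([rng.normal(-1.0, 1.0, (500, 2)),
                rng.normal(+1.0, 1.0, (500, 2))])
ys = np.repeat([0, 1], 500)
# Target domain: identical labels, but every feature shifted by +2,
# a simple covariate shift between training and test data.
Xt, yt = Xs + 2.0, ys

clf = LogisticRegression().fit(Xs, ys)
print("source (in-distribution) accuracy:", clf.score(Xs, ys))  # ~0.92
print("shifted target accuracy:", clf.score(Xt, yt))            # ~0.54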
