In the field of Machine Learning, it is widely acknowledged that training and test data should ideally come from the same source and distribution. In practice, however, this is not always feasible. Domain Adaptation (DA) techniques address this issue by adapting a classifier trained on annotated source-domain data to unannotated target-domain data while minimizing the impact of domain shift. Many recent DA approaches concentrate on learning a latent feature space that remains invariant to domain shift by mitigating various statistical and geometrical divergences. Although these methods have demonstrated effectiveness, they frequently overlook a crucial aspect: learning a latent feature space that is both domain invariant and class discriminative across diverse domains. To address this issue, we propose a novel framework called Unified Framework for Visual Domain Adaptation with Covariance Matching (UDACM), which learns a domain-invariant and class-discriminative latent feature space by simultaneously pursuing multiple objectives: maximizing target-domain variance, minimizing distribution and subspace divergence, performing manifold learning, preserving discriminative information of both the source and target domains, and measuring and matching covariance. In our framework, covariance measuring and matching plays a crucial role in learning a discriminative latent feature space by simultaneously aligning the within-class and between-class covariance matrices of the source and target domains. Detailed experiments on benchmark datasets such as CMU-PIE, Office+Caltech10, USPS+MNIST, VLCS, and Office-Home demonstrate that UDACM outperforms various established primitive, shallow, and deep domain adaptation methods on several image classification tasks.
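The abstract does not spell out the exact covariance-matching objective, so the following is a minimal sketch assuming a standard formulation: within-class and between-class scatter (covariance) matrices are computed for source data and pseudo-labeled target data, and their discrepancy is measured by a squared Frobenius norm. The function names (`class_scatter_matrices`, `covariance_matching_loss`) and the use of target pseudo-labels are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def class_scatter_matrices(X, y):
    """Within-class and between-class scatter matrices of labeled features.

    X : (n_samples, d) feature matrix
    y : (n_samples,) integer class labels
    """
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        # Within-class scatter: spread of samples around their class mean.
        S_w += (Xc - mean_c).T @ (Xc - mean_c)
        # Between-class scatter: spread of class means around the global mean,
        # weighted by class size.
        diff = (mean_c - mean_all).reshape(-1, 1)
        S_b += Xc.shape[0] * (diff @ diff.T)
    return S_w, S_b

def covariance_matching_loss(Xs, ys, Xt, yt_pseudo):
    """Squared Frobenius discrepancy between source and target scatter matrices."""
    Sw_s, Sb_s = class_scatter_matrices(Xs, ys)
    Sw_t, Sb_t = class_scatter_matrices(Xt, yt_pseudo)
    return (np.linalg.norm(Sw_s - Sw_t, "fro") ** 2
            + np.linalg.norm(Sb_s - Sb_t, "fro") ** 2)
```

Under this assumed formulation, the loss would be added to the framework's other objectives (variance maximization, distribution and subspace alignment, manifold regularization) and driven toward zero so that source and target classes exhibit matched within-class and between-class structure in the shared latent space.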