Abstract

In the field of Machine Learning, it is widely acknowledged that training and test data should ideally be drawn from the same source and distribution. In the real world, however, this is not always feasible. Domain Adaptation (DA) techniques address this issue by adapting a classifier trained on annotated source-domain data to unannotated target-domain data while minimizing the impact of domain shift. Many recent DA approaches concentrate on learning a latent feature space that remains invariant to domain shift by mitigating various statistical and geometrical divergences. Although these methods have demonstrated effectiveness, they frequently overlook a crucial aspect: the latent feature space should be not only domain invariant but also class discriminative across diverse domains. To address this issue, we propose a novel framework called Unified Framework for Visual Domain Adaptation with Covariance Matching (UDACM) that learns a domain-invariant and class-discriminative latent feature space by jointly optimizing multiple objectives: maximizing target-domain variance, minimizing distribution and subspace divergence, performing manifold learning, preserving discriminative information of both the source and target domains, and measuring and matching covariance. In our proposed framework, covariance matching plays a crucial role in ensuring a discriminative latent feature space by simultaneously aligning the within-class and between-class covariance matrices of the source and target domains. Detailed experiments on benchmark datasets such as CMU-PIE, Office+Caltech10, USPS+MNIST, VLCS, and Office–Home demonstrate that UDACM outperforms various established primitive, shallow, and deep domain adaptation methods on several image classification tasks.
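To make the covariance-matching idea concrete, below is a minimal NumPy sketch of one plausible reading of the objective: compute within-class and between-class covariance matrices per domain and penalize their squared Frobenius distance. The function names, the use of pseudo-labels for the unannotated target domain, and the Frobenius criterion are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def class_covariances(X, y):
    """Within-class (Sw) and between-class (Sb) covariance matrices
    of features X (n x d) with labels y (n,), normalized by n."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        # Scatter of samples around their class mean.
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        # Scatter of the class mean around the global mean, weighted by class size.
        diff = (mean_c - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw / len(X), Sb / len(X)

def covariance_matching_loss(Xs, ys, Xt, yt_pseudo):
    """Squared Frobenius distance between source and target within-class
    and between-class covariances; target labels are pseudo-labels."""
    Sw_s, Sb_s = class_covariances(Xs, ys)
    Sw_t, Sb_t = class_covariances(Xt, yt_pseudo)
    return (np.linalg.norm(Sw_s - Sw_t, 'fro') ** 2
            + np.linalg.norm(Sb_s - Sb_t, 'fro') ** 2)
```

In a full pipeline such a term would be minimized alongside the framework's other objectives (variance maximization, divergence minimization, manifold regularization) over a shared feature projection, with target pseudo-labels refreshed iteratively.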
