Abstract

The performance of supervised learning algorithms such as k-nearest neighbor (k-NN) depends on labeled data. For some applications (the target domain), obtaining such labeled data is very expensive and labor-intensive. In real-world scenarios, a related application (the source domain) with sufficient labeled data is often available. However, there is a distribution discrepancy between the source and target domain data, because the data of the two domains are collected under different conditions. Therefore, the source domain data, despite being sufficiently labeled, cannot be used directly to train the target domain classifier. Domain Adaptation (DA), or Transfer Learning (TL), provides a way to transfer knowledge from the source domain application to the target domain application. Existing DA methods may not perform well when there is a large discrepancy between the source and target domain data and the data are not linearly separable. Therefore, in this paper, we propose a Kernelized Unified Framework for Domain Adaptation (KUFDA) that minimizes the discrepancy between the two domains on linear or non-linear data sets and aligns them both geometrically and statistically. Extensive experiments verify that the proposed framework outperforms state-of-the-art Domain Adaptation methods and primitive (non-Domain Adaptation) methods on the real-world Office-Caltech and PIE Face data sets. Our proposed approach (KUFDA) achieved mean accuracies of 86.83% and 74.42% over all possible tasks of Office-Caltech with VGG-Net features and of the PIE Face data set, respectively.
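The abstract mentions that KUFDA is kernelized so that data which are not linearly separable can still be handled. As a rough illustration only, and not the authors' code, the sketch below computes an RBF kernel matrix over stacked source and target features; the feature matrices and the bandwidth value are hypothetical placeholders.

```python
# Minimal sketch (an assumption, not the paper's implementation): mapping
# source and target features through an RBF kernel, the usual first step of
# kernelized DA methods, so non-linearly separable samples become tractable.
import numpy as np

def rbf_kernel(X, Z, gamma=0.1):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

# Hypothetical features (rows = samples, columns = dimensions).
Xs = np.random.randn(50, 20)   # source domain samples
Xt = np.random.randn(40, 20)   # target domain samples
X = np.vstack([Xs, Xt])        # stack both domains before kernelization

K = rbf_kernel(X, X, gamma=0.1)
print(K.shape)                 # (90, 90) kernel matrix over all samples
```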

Highlights

  • In the theory of traditional machine learning [1], it is assumed that the training and testing data follow the same distribution

  • While finding the projection vector, our proposed method pursues the following objectives or terms: 1) the target domain data variance is maximized to reduce the chance of losing data properties (see the sketch after this list), 2) the marginal and conditional distribution divergences across the domains are minimized to reduce the domain shift statistically, and 3) additional geometrical constraints are imposed on the solution by modelling the non-linear manifold geometry of the source and target domains with two graph kernels

  • In this paper, we introduce a Kernelized Unified Framework for Domain Adaptation (KUFDA) that eliminates the shortcomings of existing non-Domain Adaptation and Domain Adaptation approaches by incorporating all the terms or objectives necessary to reduce the discrepancy between the source and target domains
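To make the first objective in the highlights concrete, the following is a minimal sketch, not the authors' code, of the target-variance term commonly used in projection-based DA: after projecting the target data with a matrix A, the variance tr(A^T X_t H X_t^T A) should remain large, where H is the centering matrix. The matrices X_t and A below are hypothetical placeholders.

```python
# Minimal sketch (assumed form) of objective 1: variance of the projected
# target data, tr(A^T Xt H Xt^T A), which the method seeks to keep large.
import numpy as np

def target_variance(Xt, A):
    """Variance of the projected target data, tr(A^T Xt H Xt^T A)."""
    n_t = Xt.shape[1]                               # number of target samples
    H = np.eye(n_t) - np.ones((n_t, n_t)) / n_t     # centering matrix
    S = Xt @ H @ Xt.T                               # target scatter matrix
    return np.trace(A.T @ S @ A)

d, n_t, k = 20, 40, 5                        # feature dim, target samples, subspace dim
Xt = np.random.randn(d, n_t)                 # target data, one sample per column
A = np.linalg.qr(np.random.randn(d, k))[0]   # random orthonormal projection
print(target_variance(Xt, A))
```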


Summary

INTRODUCTION

In the theory of traditional machine learning [1], it is assumed that the training and testing data follow the same distribution. Existing DA methods may lack one or more of the objectives needed to reduce the deviation between the two domains, such as 1) maximizing the target domain variance, 2) matching the marginal and conditional distributions with the Maximum Mean Discrepancy (MMD) criterion, 3) preserving the original similarity of data samples, 4) subspace alignment, 5) preserving source domain discriminative information, and 6) kernelization of the framework to handle samples that are not linearly separable. While finding the projection vector, our proposed method pursues the following objectives or terms: 1) the target domain data variance is maximized to reduce the chance of losing data properties, 2) the marginal and conditional distribution divergences across the domains are minimized to reduce the domain shift statistically (see the MMD sketch below), and 3) additional geometrical constraints are imposed on the solution by modelling the non-linear manifold geometry of the source and target domains with two graph kernels.
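As a rough illustration of the statistical term mentioned above, and not the paper's implementation, the sketch below computes the empirical squared MMD between projected source and target samples; the RBF kernel choice, its bandwidth, and the sample matrices are assumptions made for illustration.

```python
# Minimal sketch (assumed, not from the paper) of the empirical squared MMD
# between source and target samples: mean(K_ss) + mean(K_tt) - 2 * mean(K_st).
import numpy as np

def rbf(X, Z, gamma=0.1):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=0.1):
    """Squared MMD between the source and target sample sets."""
    return (rbf(Xs, Xs, gamma).mean()
            + rbf(Xt, Xt, gamma).mean()
            - 2.0 * rbf(Xs, Xt, gamma).mean())

Xs = np.random.randn(50, 5)            # projected source samples (hypothetical)
Xt = np.random.randn(40, 5) + 0.5      # projected target samples with a shift
print(mmd2(Xs, Xt))                    # larger value => larger marginal gap
```

The conditional counterpart of this term would evaluate the same quantity per class, using source labels and target pseudo-labels, which is the usual strategy in DA methods of this family.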

OBJECTIVE FUNCTION
OPTIMIZATION OF OBJECTIVE FUNCTION
EXPERIMENTS
Findings
CONCLUSION
