Abstract

Domain adaptation learning (DAL) methods have shown promising results by utilizing labeled samples from the source (or auxiliary) domain(s) to learn a robust classifier for a target domain that has few or even no labeled samples. However, several key issues remain to be addressed in state-of-the-art DAL methods, such as sufficient and effective distribution-discrepancy metric learning, effective kernel-space learning, and transfer learning from multiple source domains. To address these issues, in this paper we propose a unified kernel learning framework for domain adaptation learning, together with an effective extension based on the multiple kernel learning (MKL) scheme. Both are regularized by the proposed new minimum distribution distance metric criterion, which minimizes both the distribution mean discrepancy and the distribution scatter discrepancy between the source and target domains, and into which many existing kernel methods (such as the support vector machine (SVM), ν-SVM, and least-squares SVM) can be readily incorporated. Our framework, referred to as kernel learning for domain adaptation learning (KLDAL), simultaneously learns an optimal kernel space and a robust classifier by minimizing both the structural risk functional and the distribution discrepancy between different domains. Moreover, we extend the KLDAL framework to a multiple kernel learning framework referred to as MKLDAL. Under the KLDAL or MKLDAL framework, we also propose three effective formulations: KLDAL-SVM or MKLDAL-SVM with respect to SVM, its variant ν-KLDALSVM or ν-MKLDALSVM with respect to ν-SVM, and KLDAL-LSSVM or MKLDAL-LSSVM with respect to the least-squares SVM, respectively. Comprehensive experiments on real-world data sets verify that the proposed frameworks outperform or are comparable to existing methods.
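The minimum distribution distance criterion described above penalizes two quantities between domains: the discrepancy of the distribution means and the discrepancy of the distribution scatters. As a minimal illustrative sketch (computed here in the raw input space rather than the learned kernel space, and not the authors' exact formulation), these two empirical quantities can be estimated as follows:

```python
import numpy as np

def mean_discrepancy(Xs, Xt):
    # Squared Euclidean distance between the empirical means of the
    # source and target samples (rows are samples, columns features).
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def scatter_discrepancy(Xs, Xt):
    # Frobenius distance between the domains' scatter (covariance) matrices.
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.linalg.norm(Cs - Ct, ord="fro"))

# Toy example: a target domain shifted in mean and scale from the source.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 3))   # source samples
Xt = rng.normal(0.5, 1.5, size=(100, 3))   # shifted target samples
print(mean_discrepancy(Xs, Xt), scatter_discrepancy(Xs, Xt))
```

In the full framework, such discrepancy terms would be expressed through the (multiple) kernel mapping and added as a regularizer to the structural risk of the chosen base learner (SVM, ν-SVM, or least-squares SVM), so that kernel and classifier are optimized jointly.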
