Abstract

Various instance weighting methods have been proposed for instance-based transfer learning. Kernel Mean Matching (KMM) is a typical instance weighting approach that estimates instance importance by matching the source and target distributions in a universal reproducing kernel Hilbert space (RKHS). However, KMM is an unsupervised approach that does not utilize the class labels of the source data. In this paper, we extend KMM by leveraging class label knowledge and integrate KMM and SVM into a unified optimization framework called KMM-LM (Large Margin). The objective of KMM-LM is to maximize the geometric soft margin while simultaneously minimizing the empirical classification error and the KMM-based domain discrepancy. KMM-LM uses an iterative minimization algorithm to find the optimal weight vector of the classification decision hyperplane and the importance weight vector of the instances in the source domain. Experiments show that KMM-LM outperforms state-of-the-art baselines.

Keywords: Transfer learning, Instance weighting, Kernel mean matching, Large margin
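As a point of reference for the matching step the abstract describes, standard (unsupervised) KMM can be sketched as a small quadratic program: choose source weights beta so that the weighted source mean matches the target mean in the RKHS. This is a minimal sketch, not the paper's KMM-LM method; the RBF bandwidth `gamma`, the weight cap `B`, the slack `eps`, and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmm_weights(Xs, Xt, gamma=1.0, B=10.0, eps=None):
    """Estimate source importance weights beta by Kernel Mean Matching:

        min_beta  0.5 * beta^T K beta - kappa^T beta
        s.t.      0 <= beta_i <= B,  |sum(beta) - n_s| <= n_s * eps

    where K is the source-source kernel matrix and
    kappa_i = (n_s / n_t) * sum_j k(xs_i, xt_j).
    """
    ns, nt = len(Xs), len(Xt)
    if eps is None:
        # A common heuristic choice for the sum-constraint slack.
        eps = (np.sqrt(ns) - 1) / np.sqrt(ns)
    K = rbf_kernel(Xs, Xs, gamma) + 1e-6 * np.eye(ns)  # ridge for stability
    kappa = (ns / nt) * rbf_kernel(Xs, Xt, gamma).sum(axis=1)

    def obj(b):
        return 0.5 * b @ K @ b - kappa @ b

    def grad(b):
        return K @ b - kappa

    cons = [
        {"type": "ineq", "fun": lambda b: ns * eps - (b.sum() - ns)},
        {"type": "ineq", "fun": lambda b: ns * eps + (b.sum() - ns)},
    ]
    res = minimize(obj, np.ones(ns), jac=grad, method="SLSQP",
                   bounds=[(0.0, B)] * ns, constraints=cons)
    return res.x
```

With source data drawn from a broader distribution than the target, the resulting weights upweight source instances that fall in regions where the target distribution concentrates, so the beta-weighted source sample mean moves toward the target mean. KMM-LM, as described above, additionally couples such a weighting with an SVM's large-margin objective and alternates between the two sets of variables.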
