Abstract

Multi-source online transfer learning uses labeled data from multiple source domains to improve classification performance in the target domain. For imbalanced data sets, a multi-source online transfer learning algorithm is proposed that oversamples in the feature spaces of both the source domains and the target domain. The algorithm consists of two parts: oversampling the multiple source domains and oversampling the online target domain. In the source-domain oversampling phase, oversampling is performed in the feature space induced by the support vector machine (SVM) kernel to generate minority-class samples: new samples are obtained by augmenting the original Gram matrix using neighborhood information in the source-domain feature space. In the online target-domain oversampling phase, each minority-class sample in the current batch searches for its k nearest neighbors in feature space among the batches that have already arrived, and the generated samples are used together with the original samples of the current batch to train the target-domain function. By mapping source- and target-domain samples into the same feature space via the kernel function for oversampling, the corresponding decision functions are trained on source- and target-domain data with a more balanced class distribution, which improves the overall performance of the algorithm. Comprehensive experiments were conducted on four real-world datasets; compared with the baseline algorithms on the Office-Home dataset, accuracy improved by 0.0311 and the G-mean value by 0.0702.
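The core operation the abstract describes — generating minority-class samples directly in the kernel feature space and obtaining them by enlarging the Gram matrix with neighborhood information — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel (RBF), the interpolation rule phi_new = (1 - d) * phi(x_i) + d * phi(x_j), and all function names here are assumptions. The key fact it relies on is that a sample defined as a linear combination of original feature maps has all of its kernel values determined by the original Gram matrix alone.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def oversample_gram(K, minority_idx, k=3, n_new=None, seed=0):
    """Extend a Gram matrix K with synthetic minority samples that exist
    only in feature space (illustrative sketch).

    Each new sample is phi_new = (1 - d) * phi(x_i) + d * phi(x_j), where
    x_j is one of the k nearest minority neighbours of x_i measured in
    feature space via ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij.
    Since every new sample is linear in the original feature maps, the
    enlarged Gram matrix follows from K alone:
        K_ext = [[K,   K C^T],
                 [C K, C K C^T]],
    where row r of C holds the interpolation coefficients of sample r.
    """
    rng = np.random.default_rng(seed)
    idx = np.asarray(minority_idx)
    n = K.shape[0]
    n_new = n_new if n_new is not None else len(idx)
    diag = np.diag(K)
    C = np.zeros((n_new, n))
    for r in range(n_new):
        i = rng.choice(idx)
        # squared feature-space distances from x_i to every minority sample
        d2 = diag[idx] + diag[i] - 2.0 * K[idx, i]
        order = np.argsort(d2)
        nbrs = idx[order[1:k + 1]]      # skip x_i itself (distance 0)
        j = rng.choice(nbrs)
        d = rng.uniform()
        C[r, i] += 1.0 - d
        C[r, j] += d
    top = np.hstack([K, K @ C.T])
    bottom = np.hstack([C @ K, C @ K @ C.T])
    return np.vstack([top, bottom])
```

Because K_ext equals B K B^T with B = [I; C], it stays symmetric positive semi-definite, so it can be fed directly to any kernel method (e.g. an SVM in its dual form) as if the synthetic minority samples were real; the same distance computation can serve the target-domain phase, where neighbours are searched among previously arrived batches.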
