Abstract

In computer vision and machine learning, domain adaptation has been studied extensively; the central challenge is how to transform existing classifier(s) into an effective adaptive classifier that exploits the latent information in a new data source whose distribution typically differs from that of the original source. The Adaptive Support Vector Machine (A-SVM) has been proposed as an effective strategy for the domain adaptation problem. However, the optimization task in A-SVM, which minimizes a convex quadratic function, cannot effectively minimize the distance between the source and target domains and typically has high computational complexity. To address these problems, in this paper we extend A-SVM by determining a pair of nonparallel up- and down-bound functions obtained by solving two smaller-sized quadratic programming problems (QPPs), which yields a faster learning speed. Notably, our method produces two nonparallel separating hyperplanes that exploit latent discriminant information through the SVM classification mechanism, which naturally enhances classification performance. We name this method Adaptive Twin Support Vector Machine learning (A-TSVM). Moreover, we consider the higher-level paradigm of learning using privileged information (LUPI) to learn an induced model that further constrains the solution in the target space; the resulting model is named domain Adaptive Twin Support Vector Machine learning Using Privileged Information (A-TSVM+). Finally, a series of comparative experiments against many other methods is performed on three datasets. The experimental results indicate that the proposed method not only greatly improves classification accuracy but also saves computing time.
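To illustrate the twin-SVM idea underlying the abstract (a pair of nonparallel hyperplanes, each fitted to one class while pushed away from the other), the following is a minimal sketch. It is not the authors' A-TSVM: it uses the least-squares twin-SVM simplification so each of the two subproblems has a closed-form solution instead of a QPP, and all names (`fit_twin_planes`, `predict`, the penalties `c1`, `c2`) are illustrative assumptions.

```python
import numpy as np

def fit_twin_planes(A, B, c1=1.0, c2=1.0):
    """Fit two nonparallel hyperplanes in the spirit of twin SVM.

    Illustrative least-squares variant: each subproblem is solved in
    closed form rather than as a quadratic program.
    A: samples of class +1 (rows), B: samples of class -1 (rows).
    Returns ((w1, b1), (w2, b2)) for the two hyperplanes.
    """
    e1 = np.ones((A.shape[0], 1))
    e2 = np.ones((B.shape[0], 1))
    E = np.hstack([A, e1])   # class +1 samples, augmented with a bias column
    F = np.hstack([B, e2])   # class -1 samples, augmented with a bias column
    # Plane 1: lies close to class +1, at least unit "distance" from class -1.
    z1 = -np.linalg.solve(E.T @ E / c1 + F.T @ F, F.T @ e2).ravel()
    # Plane 2: lies close to class -1, at least unit "distance" from class +1.
    z2 = np.linalg.solve(F.T @ F / c2 + E.T @ E, E.T @ e1).ravel()
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def predict(X, plane1, plane2):
    """Assign each sample to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```

In this sketch, classification assigns a test point to the class of the nearer hyperplane, which is how the two nonparallel separating surfaces jointly act as a classifier; the A-TSVM of the paper instead solves two smaller QPPs with hinge-type constraints, and A-TSVM+ additionally incorporates privileged information into those subproblems.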
