Abstract

Although semi-supervised classification has attracted great attention over the past decades, semi-supervised classification methods may perform worse than their supervised counterparts in some cases, which reduces confidence in them for real applications. It is therefore desirable to develop a safe semi-supervised classification method that never performs worse than its supervised counterpart. However, to the best of our knowledge, little research has been devoted to safe semi-supervised classification. To address this problem, in this paper we propose a safety-control mechanism for safe semi-supervised classification that adaptively trades off between semi-supervised and supervised classification in terms of the unlabeled data. Concretely, building on our recent semi-supervised classification method based on class memberships (SSCCM), we develop a safety-aware SSCCM (SA-SSCCM). On the one hand, SA-SSCCM exploits the unlabeled data to help learning (as SSCCM does) under the assumption that unlabeled data can help learning; on the other hand, it restricts its prediction to approach that of its supervised counterpart, the least-squares support vector machine (LS-SVM), under the assumption that unlabeled data can hurt learning. The prediction of SA-SSCCM is therefore a tradeoff between those of the semi-supervised SSCCM and the supervised LS-SVM in terms of the unlabeled data. As in SSCCM, the optimization problem in SA-SSCCM can be solved efficiently by an alternating iterative strategy, and the convergence of the iterations can be theoretically guaranteed. Experiments on several real datasets show the promising performance of SA-SSCCM compared with LS-SVM, SSCCM, and off-the-shelf safe semi-supervised classification methods.
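To make the safety-control idea concrete, the following is a minimal sketch (not the paper's actual SA-SSCCM objective, whose details are not given in the abstract): on each unlabeled point, minimizing a weighted sum of squared deviations from the semi-supervised prediction and the supervised prediction, lam * (f - f_ssl)^2 + (1 - lam) * (f - f_sup)^2, yields a convex combination of the two, so the "safe" prediction can never drift arbitrarily far from the supervised baseline. The names `safe_predictions`, `f_ssl`, `f_sup`, and `lam` are illustrative, not from the paper.

```python
# Hedged sketch of a safety-control tradeoff on unlabeled data.
# lam = 1 trusts the semi-supervised model fully; lam = 0 falls back
# to the supervised baseline (an LS-SVM in the paper's setting).

def safe_predictions(f_ssl, f_sup, lam=0.5):
    """Blend per-point real-valued predictions.

    Minimizing lam*(f - f_ssl)**2 + (1 - lam)*(f - f_sup)**2 per point
    gives f = lam*f_ssl + (1 - lam)*f_sup, a convex combination.
    """
    assert 0.0 <= lam <= 1.0, "tradeoff weight must lie in [0, 1]"
    return [lam * s + (1.0 - lam) * p for s, p in zip(f_ssl, f_sup)]

# Two unlabeled points where the two models disagree: the blended
# prediction lands between the semi-supervised and supervised outputs.
ssl = [0.9, -0.8]   # semi-supervised (SSCCM-like) outputs
sup = [0.1, -0.2]   # supervised (LS-SVM-like) outputs
print(safe_predictions(ssl, sup, lam=0.5))
```

In the actual method the tradeoff is built into a joint optimization solved by alternating iteration, rather than applied as a fixed post-hoc blend; this sketch only illustrates why anchoring to the supervised prediction bounds the damage when unlabeled data hurt learning.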
