Abstract

Semi-supervised classification methods try to improve a classifier trained on labeled data with the help of unlabeled data. In many cases one assumes a certain structure on the data, for example the manifold assumption, the smoothness assumption, or the cluster assumption. Self-training is a method that does not need any assumptions on the data itself. The idea is to use the supervised classifier to label the unlabeled points and thereby enlarge the training data. This paper aims to show that a self-training approach with soft labeling is preferable in many cases in terms of expected loss (risk) minimization. The main idea is to use soft labels to minimize the risk on labeled and unlabeled data jointly, of which hard-labeled self-training is an extreme case.
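
As a rough illustration only (not the paper's own algorithm), the sketch below shows one way soft-label self-training can be set up: a probabilistic classifier assigns class probabilities (soft labels) to the unlabeled points, and retraining minimizes a weighted risk over labeled and unlabeled data together. The classifier choice, the unlabeled_weight parameter, and the log-loss objective are assumptions made for this sketch; replacing the soft labels with their argmax recovers the hard-labeled extreme case mentioned in the abstract.

    # Minimal sketch of soft-label self-training (illustrative assumptions:
    # scikit-learn logistic regression, a fixed unlabeled-risk weight).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def soft_label_self_training(X_lab, y_lab, X_unl, n_rounds=5, unlabeled_weight=1.0):
        # Initial supervised classifier trained on labeled data only.
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        for _ in range(n_rounds):
            # Soft labels: predicted class probabilities on the unlabeled points.
            probs = clf.predict_proba(X_unl)
            n_classes = probs.shape[1]
            # Joint training set: each unlabeled point appears once per class,
            # weighted by its soft label, approximating the combined risk on
            # labeled and unlabeled data. Hard labeling (argmax) is the extreme
            # case where all weight goes to the most probable class.
            X_joint = np.vstack([X_lab] + [X_unl] * n_classes)
            y_joint = np.concatenate(
                [y_lab] + [np.full(len(X_unl), c) for c in clf.classes_])
            w_joint = np.concatenate(
                [np.ones(len(X_lab))] +
                [unlabeled_weight * probs[:, k] for k in range(n_classes)])
            clf = LogisticRegression(max_iter=1000).fit(
                X_joint, y_joint, sample_weight=w_joint)
        return clf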
