Abstract

Generally, collecting a large quantity of unlabeled examples is feasible, but labeling them all is not. Active learning can reduce the number of labeled examples needed to train a good classifier. Existing active learning algorithms can be roughly divided into three categories: single-view single-learner (SVSL) active learning, multiple-view single-learner (MVSL) active learning, and single-view multiple-learner (SVML) active learning. In this paper, a new approach that incorporates multiple views and multiple learners (MVML) into active learning is proposed. Multiple artificial neural networks are used as the learners in each view, each configured with a different number of hidden neurons and different initial weights so that every learner has a different bias. Selective sampling in the proposed method is implemented in three different ways. For comparison, the traditional MVSL and SVML active learning methods, as well as bagging active learning and AdaBoost active learning, are implemented alongside MVML active learning in our experiments. The empirical results indicate that MVML active learning outperforms these traditional methods.
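
To make the setting concrete, the sketch below illustrates one way an MVML-style query step could look. It is a minimal sketch, not the paper's algorithm: the split of features into views, the hidden-layer sizes used to give each network a different bias, and the vote-entropy disagreement criterion for picking the next example are all illustrative assumptions, since the abstract does not specify which of the three selective-sampling strategies is applied.

    # Minimal MVML-style query sketch (assumptions noted in comments; not the
    # authors' exact procedure).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def build_view_committee(hidden_sizes=(5, 10, 20), seed=0):
        # One network per hidden-layer size so each learner has a different bias.
        return [MLPClassifier(hidden_layer_sizes=(h,), max_iter=500,
                              random_state=seed + i)
                for i, h in enumerate(hidden_sizes)]

    def mvml_query(X_labeled, y_labeled, X_pool, view_slices):
        # Train one committee of networks per view on that view's features.
        committees = []
        for sl in view_slices:
            committee = build_view_committee()
            for clf in committee:
                clf.fit(X_labeled[:, sl], y_labeled)
            committees.append((sl, committee))

        # Collect every learner's predictions on the unlabeled pool.
        votes = []
        for sl, committee in committees:
            for clf in committee:
                votes.append(clf.predict(X_pool[:, sl]))
        votes = np.array(votes)  # shape: (total_learners, n_pool)

        # Vote entropy across all learners in all views measures disagreement
        # (an assumed criterion; the paper describes three sampling variants).
        def vote_entropy(col):
            _, counts = np.unique(col, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log(p)).sum())

        disagreement = np.apply_along_axis(vote_entropy, 0, votes)
        return int(disagreement.argmax())  # pool index to send to the oracle

    # Example usage with a 10-feature dataset split into two 5-feature views:
    # idx = mvml_query(X_lab, y_lab, X_pool, [slice(0, 5), slice(5, 10)])

In such a loop, the selected example would be labeled by the oracle, moved from the pool to the labeled set, and the committees retrained, repeating until the labeling budget is exhausted.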
