Abstract

Active learning methods seek to reduce the number of labeled instances needed to train an effective classifier. Most current methods are myopic: they select a single unlabeled sample to label at a time. Batch-mode active learning methods, on the other hand, typically select the top N unlabeled samples with the highest scores, but such a selection offers no guarantee on the learner's performance. In this paper, we present a non-myopic active learning algorithm based on mutual information. The algorithm selects a set of samples at each iteration, and its objective function is proved to be submodular, which guarantees that greedy selection finds a near-optimal solution. Experimental results on UCI data sets show that the proposed algorithm outperforms myopic active learning.
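The near-optimality guarantee mentioned above comes from a classical result: for a monotone submodular objective, greedy selection achieves at least a (1 - 1/e) fraction of the optimal value. The sketch below illustrates this greedy batch-selection loop. It is not the paper's algorithm: the toy weighted-coverage objective stands in for the paper's mutual-information criterion, and the names `greedy_batch_select` and `coverage` are our own for illustration.

```python
def greedy_batch_select(objective, candidates, batch_size):
    """Greedily build a batch maximizing a set objective.

    The (1 - 1/e) near-optimality guarantee holds when `objective`
    is monotone submodular, as the paper proves for its
    mutual-information criterion.
    """
    selected = []
    remaining = list(candidates)
    for _ in range(min(batch_size, len(remaining))):
        base = objective(selected)
        # Pick the candidate with the largest marginal gain.
        best = max(remaining, key=lambda c: objective(selected + [c]) - base)
        remaining.remove(best)
        selected.append(best)
    return selected

# Toy monotone submodular objective: weighted coverage of feature
# indices (a hypothetical stand-in for mutual information).
universe_weights = {0: 3.0, 1: 2.0, 2: 1.0, 3: 0.5}
items = {"a": {0, 1}, "b": {1, 2}, "c": {2, 3}, "d": {0}}

def coverage(selected):
    covered = set().union(*(items[i] for i in selected)) if selected else set()
    return sum(universe_weights[f] for f in covered)

batch = greedy_batch_select(coverage, list(items), 2)
```

Here the greedy loop first picks "a" (gain 5.0), then "c" (marginal gain 1.5, since it newly covers features 2 and 3), illustrating how marginal gains, not raw individual scores, drive a batch with a performance guarantee — unlike simply taking the top-N highest-scoring samples.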
