Abstract

We propose a method for selecting the initial training examples for active learning so that it reaches high performance faster, with fewer subsequent queries. Our method divides the unlabeled examples into clusters of similar ones and then selects from each cluster the most representative example, namely the one closest to the cluster's centroid. These representative examples are labeled by the user and become the members of the initial training set. We also promote the inclusion of what we call model examples in the initial training set. Although the model examples, which are in fact the centroids of the clusters, are not real examples, they contribute significantly to classification accuracy because each represents a group of similar examples so well. Experiments on various text data sets show that an active learner starting from the initial training set selected by our method reaches high accuracy faster than one starting from a randomly generated initial training set.
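The selection procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a plain k-means clustering over feature vectors (the paper does not specify the clustering algorithm), and the function names `kmeans` and `initial_training_set` are hypothetical. The representatives are the real examples nearest each centroid; the centroids themselves play the role of the synthetic "model examples".

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Basic k-means: returns (centroids, cluster labels) for rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each example to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def initial_training_set(X, k):
    """Select the initial training set: one representative per cluster
    (the real example closest to the centroid), plus the centroids,
    which serve as the synthetic "model examples"."""
    centroids, labels = kmeans(X, k)
    reps = []
    for j in range(k):
        members = np.where(labels == j)[0]
        d = np.linalg.norm(X[members] - centroids[j], axis=1)
        reps.append(members[d.argmin()])  # index of the representative
    return np.array(reps), centroids
```

The representative indices would then be sent to the user for labeling, while the model examples (centroids) can be labeled by association with their clusters' representatives.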
