Abstract

Active learning selectively labels the most informative instances, aiming to reduce the cost of data annotation. While much effort has been devoted to active sampling functions, relatively little attention has been paid to when the learning process should stop. In this article, we focus on the stopping criterion of active learning and propose a model-stability-based criterion: stop when the model no longer changes with the inclusion of additional training instances. The challenge lies in how to measure the model change without labeling additional instances and training new models. Inspired by the stochastic gradient update rule, we use the gradient of the loss function at each candidate example to measure its effect on model change. We propose to stop active learning when the model change brought by any of the remaining unlabeled examples is lower than a given threshold. We apply the proposed stopping criterion to two popular classifiers: logistic regression (LR) and support vector machines (SVMs). In addition, we theoretically analyze the stability and generalization ability of the model obtained by our stopping criterion. Extensive experiments on various UCI benchmark datasets and the ImageNet dataset demonstrate that the proposed approach is highly effective.
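To make the gradient-based idea above concrete, here is a minimal sketch in Python for the logistic-regression case. It is an illustration under stated assumptions, not the paper's exact estimator: the function names (`expected_model_change`, `should_stop`), the learning rate `eta`, the threshold value, and the choice to handle the unknown label by taking an expectation under the current model's predictive distribution are all assumptions of this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_model_change(w, x, eta=1.0):
    """Approximate the model change an unlabeled example x would cause.

    Under the stochastic gradient update rule, adding (x, y) shifts the
    weights by roughly eta times the gradient of the logistic loss at
    (x, y). Since the true label y is unknown, this sketch takes the
    expectation of the gradient norm over the current model's label
    distribution (an assumption, not necessarily the paper's estimator).
    """
    p = sigmoid(w @ x)  # P(y = 1 | x) under the current model
    # Gradient of the logistic loss is (sigmoid(w.x) - y) * x;
    # evaluate its norm for both possible labels y in {0, 1}.
    change_if_y1 = np.linalg.norm(eta * (p - 1.0) * x)
    change_if_y0 = np.linalg.norm(eta * p * x)
    return (1.0 - p) * change_if_y0 + p * change_if_y1

def should_stop(w, unlabeled_pool, threshold=1e-3):
    """Stop active learning when no remaining unlabeled example would
    change the model by more than the given threshold."""
    return all(expected_model_change(w, x) <= threshold
               for x in unlabeled_pool)
```

In an active learning loop, `should_stop` would be checked after each query round: once every candidate's estimated model change falls below the threshold, further labeling is deemed unlikely to alter the classifier, so annotation halts.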
