Abstract

In many practical machine learning problems, the acquisition of labeled data is expensive and time-consuming. To reduce this labeling cost, active learning has been introduced in many scientific fields. This study considers the problem of active learning of a regression model in the context of optimal experimental design. Classical optimal experimental design approaches are based on the least-squares errors of labeled samples. Recently, several active learning approaches that exploit both labeled and unlabeled data have been developed based on Laplacian-regularized regression models, each relying on a single selection criterion. However, these approaches are susceptible to selecting undesirable samples when the number of initially labeled samples is small. To address this susceptibility, this study proposes an active learning method that considers multiple complementary criteria: sample representativeness, diversity information, and variance reduction of the Laplacian regularization model. Specifically, we develop novel density and diversity criteria based on a clustering algorithm to identify samples that are representative of their distributions while minimizing their redundancy. Experiments were conducted on synthetic and benchmark data to compare the performance of the proposed method with that of existing methods. The experimental results demonstrate that the proposed active learning algorithm outperforms its existing counterparts.
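
To illustrate the kind of clustering-based density and diversity scoring the abstract describes, below is a minimal sketch, not the authors' implementation. The use of k-means, the weighting parameter alpha, and the function and variable names (select_samples, n_select, n_clusters) are illustrative assumptions; the paper's actual criteria additionally involve variance reduction of the Laplacian-regularized model, which is omitted here.

```python
# Hypothetical sketch of cluster-based density/diversity sample selection.
# Assumptions: k-means clustering, Euclidean distances, linear score mixing.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances


def select_samples(X_unlabeled, X_labeled, n_select=5, n_clusters=10, alpha=0.5):
    """Greedily pick unlabeled samples that are representative (density)
    and far from already labeled/selected samples (diversity)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_unlabeled)

    # Density: a sample near the centre of a populous cluster is representative.
    centre_dist = np.linalg.norm(X_unlabeled - km.cluster_centers_[km.labels_], axis=1)
    cluster_size = np.bincount(km.labels_, minlength=n_clusters)[km.labels_]
    density = cluster_size / (1.0 + centre_dist)

    selected = []
    anchors = list(X_labeled)  # labeled samples plus those already selected
    for _ in range(n_select):
        if anchors:
            # Diversity: distance to the nearest labeled/selected sample.
            diversity = pairwise_distances(X_unlabeled, np.asarray(anchors)).min(axis=1)
        else:
            diversity = np.ones(len(X_unlabeled))

        score = alpha * density / density.max() + (1 - alpha) * diversity / diversity.max()
        score[np.array(selected, dtype=int)] = -np.inf  # never pick a sample twice
        idx = int(np.argmax(score))
        selected.append(idx)
        anchors.append(X_unlabeled[idx])
    return selected
```

In this sketch, the density term favors samples that sit near the centers of large clusters, while the diversity term penalizes redundancy with respect to samples that are already labeled or selected; the two are combined with a simple convex weight alpha.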
