Abstract

This paper addresses the problem of active learning across multiple tasks, where labeled data are expensive to obtain for each individual task but the learning problems share commonalities across related tasks. To jointly exploit the benefits of learning from multiple related tasks and of making active queries, we propose a novel active multitask learning approach based on trace norm regularized least squares. The basic idea is to induce an optimal classifier that has the lowest risk and, at the same time, is closest to the true hypothesis. Toward this aim, we devise a new active selection criterion that accounts not only for the risk but also for the excess risk, which measures the distance to the true hypothesis. Guided by this criterion, the proposed algorithm selects the instance whose label to query according to a combination of the two risks. Experiments on both synthetic and real-world datasets show that the proposed algorithm outperforms other state-of-the-art active learning methods.
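To make the setup concrete, the following is a minimal sketch, not the paper's algorithm: it fits a multitask weight matrix with trace norm regularized least squares via proximal gradient (singular value soft-thresholding), and scores candidate queries with a hypothetical criterion that combines an uncertainty-based risk term with a crude excess-risk proxy. The function names (`svt`, `fit_trace_norm_ls`, `query_score`), the margin/norm-based score, and all weights are assumptions for illustration only.

```python
# Minimal illustrative sketch (assumptions only, not the authors' method).
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: prox operator of tau * trace norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fit_trace_norm_ls(Xs, ys, lam=0.1, step=0.01, iters=500):
    """Fit W (d x T) minimizing sum_t ||X_t w_t - y_t||^2 / n_t + lam * ||W||_*."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = 2.0 * X.T @ (X @ W[:, t] - y) / len(y)
        W = svt(W - step * G, step * lam)   # gradient step + trace norm prox
    return W

def query_score(x, w, risk_weight=1.0, excess_weight=1.0):
    """Hypothetical selection score: an uncertainty (risk) term plus a
    norm-based stand-in for the excess-risk term; both weights are assumptions."""
    margin = abs(x @ w)                 # small margin -> uncertain prediction
    influence = np.linalg.norm(x)       # crude proxy for excess-risk reduction
    return risk_weight / (margin + 1e-8) + excess_weight * influence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T, n = 20, 3, 50
    truth = rng.normal(size=(d, 1)) @ rng.normal(size=(1, T))   # low-rank ground truth
    Xs = [rng.normal(size=(n, d)) for _ in range(T)]
    ys = [X @ truth[:, t] + 0.1 * rng.normal(size=n) for t, X in enumerate(Xs)]
    W = fit_trace_norm_ls(Xs, ys)
    pool = rng.normal(size=(100, d))                            # unlabeled pool
    scores = [query_score(x, W[:, 0]) for x in pool]
    print("next query index for task 0:", int(np.argmax(scores)))
```

The low-rank structure of the true weight matrix is what the trace norm penalty is meant to capture; the query score here merely illustrates the idea of combining a risk term with an excess-risk-style term, as the abstract describes.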
