Abstract

Partial label learning aims to learn from training examples, each associated with a set of candidate labels among which only one is the ground-truth label. The basic strategy for learning from partial label examples is disambiguation, i.e., trying to recover the ground-truth labeling information from the candidate label set. As one of the popular machine learning paradigms, maximum margin techniques have been employed to solve the partial label learning problem. Existing attempts perform disambiguation by optimizing the margin between the maximum modeling output from candidate labels and that from non-candidate labels. Nonetheless, this formulation ignores the margin between the ground-truth label and the other candidate labels. In this paper, a new maximum margin formulation for partial label learning is proposed that directly optimizes the margin between the ground-truth label and all other labels. Specifically, the predictive model is learned via an alternating optimization procedure that iteratively coordinates ground-truth label identification and margin maximization. Extensive experiments on artificial as well as real-world datasets show that the proposed approach is highly competitive with other well-established partial label learning approaches.
