Positive-Unlabeled (PU) data arise frequently in a wide range of fields such as medical diagnosis, anomaly analysis, and personalized advertising. The absence of any known negative labels makes it very challenging to learn binary classifiers from such data. Many state-of-the-art methods reformulate the original classification risk as a combination of individual risks over the positive and unlabeled data, and explicitly minimize the risk of classifying unlabeled data as negative. This, however, usually yields classifiers biased toward negative predictions, i.e., they tend to recognize most unlabeled data as negative. In this paper, we propose a label distribution alignment formulation for PU learning to alleviate this issue. Specifically, we align the distribution of predicted labels with the ground-truth label distribution, which is constant for a given class prior. In this way, the proportion of samples predicted as negative is explicitly controlled from a global perspective, so the bias toward negative predictions can be intrinsically eliminated. On top of this, we further introduce the idea of functional margins to enhance the model's discriminability, and derive a margin-based learning framework named Positive-Unlabeled learning with Label Distribution Alignment (PULDA). For practical scenarios, the framework is further combined with class prior estimation, and it is theoretically supported by a generalization analysis. Moreover, a stochastic mini-batch optimization algorithm based on an exponential moving average strategy is tailored to this problem, with a convergence guarantee. Finally, comprehensive empirical results demonstrate the effectiveness of the proposed method.
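To make the alignment idea concrete, below is a minimal PyTorch sketch of one plausible instantiation: a penalty that drives the fraction of unlabeled samples predicted negative toward its ground-truth value 1 - pi (for class prior pi), with that fraction tracked across mini-batches by an exponential moving average. This is an illustrative assumption, not the authors' implementation: the function name `label_alignment_penalty`, the sigmoid relaxation, and the squared penalty are all hypothetical, and the actual PULDA objective additionally incorporates functional margins as described above.

```python
import torch

def label_alignment_penalty(scores_u, class_prior, ema_neg_frac, momentum=0.9):
    """Hypothetical alignment penalty: push the predicted-negative
    fraction on unlabeled data toward its ground truth, 1 - class_prior.

    scores_u:      raw classifier scores on an unlabeled mini-batch
                   (positive score => positive prediction)
    ema_neg_frac:  running EMA of the predicted-negative fraction
    """
    # Soft (differentiable) fraction of this batch predicted negative.
    batch_neg_frac = torch.sigmoid(-scores_u).mean()
    # The EMA lets a mini-batch statistic stand in for the global
    # proportion over which the alignment constraint is defined.
    ema = momentum * ema_neg_frac + (1.0 - momentum) * batch_neg_frac
    # Align the predicted label distribution with the ground truth:
    # for class prior pi, a fraction (1 - pi) of the data is negative.
    penalty = (ema - (1.0 - class_prior)) ** 2
    return penalty, ema.detach()  # detached state feeds the next step

# Usage sketch: combine with a positive-risk term in the training loop.
# ema = torch.tensor(0.5)  # neutral initial estimate
# pen, ema = label_alignment_penalty(model(x_u), class_prior=0.4, ema_neg_frac=ema)
# loss = positive_risk + lam * pen  # lam: assumed trade-off weight
```

The EMA is the key design choice here: a single mini-batch gives a noisy estimate of the predicted-negative proportion, while the moving average approximates the full-data statistic that the alignment constraint actually targets.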