Abstract

Learning classifiers with false positive rate control has drawn intensive attention in applications in recent years. While various supervised algorithms have been developed to achieve low false positive rates, they commonly require both positive and negative samples to be present in the data. However, the scenario studied in positive-unlabeled (PU) learning is more pervasive in practice: at the outset, most of the data may be unlabeled, and the labeled data may represent only one class. To tackle this challenge, in this paper we propose a new positive-unlabeled learning classifier with false positive rate control. In particular, we first prove that, in this setting, commonly adopted convex surrogate loss functions, such as the hinge loss, introduce a redundant penalty on the false positive rate. We then show that the non-convex ramp loss surrogate overcomes this barrier, and that the associated non-convex optimization problem can be solved with a concave-convex procedure. Finally, we demonstrate the effectiveness of the proposed method through extensive experiments on multiple datasets.
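As a brief illustration (not the paper's implementation), the ramp loss mentioned above can be written as a difference of two convex hinge-like terms, which is exactly the difference-of-convex structure that a concave-convex procedure exploits. A minimal sketch, assuming the standard margin-based definitions of both losses:

```python
import numpy as np

def hinge(z):
    # Convex hinge loss: max(0, 1 - z).
    # Unbounded as the margin z decreases, so a single badly
    # misclassified sample can dominate a false-positive penalty.
    return np.maximum(0.0, 1.0 - z)

def ramp(z):
    # Non-convex ramp loss, expressed as a difference of two convex
    # pieces: max(0, 1 - z) - max(0, -z).  It saturates at 1 for
    # z <= 0 and vanishes for z >= 1.
    return np.maximum(0.0, 1.0 - z) - np.maximum(0.0, -z)

z = np.linspace(-3.0, 3.0, 7)
print(ramp(z))   # values stay within [0, 1]
print(hinge(z))  # values grow without bound as z decreases
```

Because the ramp loss is the convex hinge term minus another convex term, a concave-convex procedure can iterate by linearizing the subtracted term and solving the remaining convex subproblem.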
