Abstract

Deep neural networks have achieved excellent performance in a wide variety of applications. However, this performance comes at the considerable expense of collecting a large-scale, correctly annotated training set, which is impractical in many applications. In this work, we study a natural form of weak supervision, complementary-label learning, to address this problem. Complementary-label learning trains deep neural networks using only complementary labels, where a complementary label indicates a class that the sample does not belong to. This paper first presents a general risk formulation for complementary-label learning that adopts arbitrary losses designed for ordinary-label learning. We then provide a theoretical analysis showing that, within the risk-minimization framework, our method can train deep neural networks from complementary labels with any loss function. Experimental results on several benchmark datasets demonstrate that our approach outperforms current state-of-the-art methods.
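
To make the idea of a risk formulation from complementary labels concrete, the following is a minimal PyTorch sketch, not necessarily the authors' exact formulation. It uses the common uniform complementary-label assumption, under which the ordinary risk can be rewritten as E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), ybar) ], and instantiates the ordinary-label loss with cross-entropy; the function name `complementary_risk` and these modeling choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """Sketch of an unbiased risk estimate from complementary labels.

    Assumes complementary labels are drawn uniformly from the classes a sample
    does not belong to, so the ordinary risk can be rewritten as
        E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), ybar) ],
    instantiated here with the cross-entropy loss (illustrative choice).

    logits:      (batch, K) raw model outputs
    comp_labels: (batch,) complementary labels (a class each sample is NOT)
    """
    K = num_classes
    log_probs = F.log_softmax(logits, dim=1)      # per-class log-probabilities
    loss_all = -log_probs                         # loss(f(x), k) for every class k
    sum_over_classes = loss_all.sum(dim=1)        # sum_k loss(f(x), k)
    loss_comp = loss_all.gather(1, comp_labels.unsqueeze(1)).squeeze(1)  # loss(f(x), ybar)
    return (sum_over_classes - (K - 1) * loss_comp).mean()

# Toy usage with random inputs (10 classes, batch of 8).
logits = torch.randn(8, 10, requires_grad=True)
comp_labels = torch.randint(0, 10, (8,))
risk = complementary_risk(logits, comp_labels, num_classes=10)
risk.backward()
```

Note that estimators of this form can take negative values on finite samples; correction schemes discussed in the literature are omitted from this sketch.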
