Abstract

Deep supervised learning has achieved great success in image classification. However, most existing methods require high-quality labeled images, which are not easy to obtain. In this paper, we focus on learning with complementary labels, which specify classes an image does not belong to and are easier to obtain. Previous methods achieve much lower performance than learning with true labels due to the inherent ambiguity of complementary labels. To deal with this ambiguity, we propose a new complementary learning method called Dual-regularization Complementary Learning (DRCL). Specifically, we train two deep neural networks simultaneously and enforce them to regularize each other's outputs. In this way, the two networks learn from each other and tend to generate consistent outputs even when supervised by ambiguous complementary labels. Experiments on four datasets demonstrate the superiority of our approach over state-of-the-art methods.
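
The abstract does not give the concrete loss formulation, so the following is only a minimal PyTorch sketch of the dual-network co-regularization idea it describes: each network is trained on complementary labels and the two are encouraged to agree. The negative-learning loss, the symmetric KL consistency term, the toy MLP architectures, and names such as `complementary_loss`, `mutual_regularization`, and the weight `lam` are illustrative assumptions, not the paper's actual DRCL objective.

```python
# Sketch of dual-network training with complementary labels (assumed losses, not DRCL's exact ones).
import torch
import torch.nn as nn
import torch.nn.functional as F

def complementary_loss(logits, comp_labels):
    # Push probability mass away from the class the image does NOT belong to: -log(1 - p_comp).
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()

def mutual_regularization(logits_a, logits_b):
    # Symmetric KL divergence between the two networks' predictions,
    # encouraging consistent outputs despite ambiguous supervision.
    log_pa, log_pb = F.log_softmax(logits_a, dim=1), F.log_softmax(logits_b, dim=1)
    pa, pb = log_pa.exp(), log_pb.exp()
    return 0.5 * (F.kl_div(log_pa, pb, reduction="batchmean")
                  + F.kl_div(log_pb, pa, reduction="batchmean"))

# Two independently initialized classifiers (toy MLPs for illustration only).
num_classes, lam = 10, 1.0  # lam: assumed weight of the co-regularization term
net_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, num_classes))
net_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, num_classes))
opt = torch.optim.SGD(list(net_a.parameters()) + list(net_b.parameters()), lr=0.01, momentum=0.9)

# One training step on a dummy batch of CIFAR-sized images with complementary labels.
images = torch.randn(8, 3, 32, 32)
comp_labels = torch.randint(0, num_classes, (8,))
logits_a, logits_b = net_a(images), net_b(images)
loss = (complementary_loss(logits_a, comp_labels)
        + complementary_loss(logits_b, comp_labels)
        + lam * mutual_regularization(logits_a, logits_b))
opt.zero_grad()
loss.backward()
opt.step()
```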
