Abstract
Complementary Label Learning (CLL) is a typical weakly supervised learning protocol in which each instance is associated with one complementary label specifying a class the instance does not belong to. Current CLL approaches assume that complementary labels are uniformly sampled from all non-ground-truth labels, and thus implicitly and locally share complementary labels by solely reducing the logit of the complementary label in one way or another. In this paper, we point out that, when the uniformity assumption does not hold, existing CLL methods lose much of their ability to share complementary labels and fail to create classifiers with a large logit margin (LM), resulting in a significant performance drop. To address these issues, we instead present the complementary logit margin (CLM) and empirically show that increasing the CLM promotes the sharing of complementary labels under the biased CLL setting. Accordingly, we propose a surrogate complementary one-versus-rest loss (COVR) and demonstrate, with both theoretical and empirical evidence, that optimizing COVR effectively increases the CLM. Extensive experiments verify that the proposed COVR yields substantial improvements both for biased CLL and for an even more practical setting: instance-dependent complementary label learning.
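For concreteness, the sketch below illustrates one way a complementary one-versus-rest style loss can be written in PyTorch. The abstract does not reproduce the paper's exact COVR formulation, so the function name complementary_ovr_loss, the sigmoid-based binary loss, and the uniform averaging over classes are illustrative assumptions rather than the authors' definition; the sketch only conveys the general idea of pushing the complementary logit down while pushing all other logits up, which enlarges the complementary logit margin.

# Minimal sketch of a complementary one-versus-rest (OVR) style loss.
# Assumption: each class is treated as a binary problem with a sigmoid loss;
# the complementary class gets target 0 and every other class gets target 1.
# This is NOT necessarily the paper's exact COVR loss, only the generic form.
import torch
import torch.nn.functional as F

def complementary_ovr_loss(logits: torch.Tensor, comp_labels: torch.Tensor) -> torch.Tensor:
    """logits: (batch, num_classes); comp_labels: (batch,) complementary label indices."""
    # Binary targets: 0 for the complementary class, 1 for all remaining classes.
    targets = torch.ones_like(logits)
    targets.scatter_(1, comp_labels.unsqueeze(1), 0.0)
    # Per-class sigmoid losses: reduce the complementary logit, raise the others,
    # which directly increases the margin between them.
    per_class = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return per_class.mean()

# Usage (hypothetical): logits = model(x); loss = complementary_ovr_loss(logits, comp_labels)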