Abstract

Complementary label learning (CLL) is an important problem that aims to reduce the cost of obtaining large-scale accurate datasets by equipping each training sample only with labels to which the sample does not belong. Despite its promise, CLL remains a challenging task. Previous methods have proposed new loss functions or introduced deep learning-based models to CLL, but they mostly overlook the semantic information that may be implicit in the complementary labels. In this work, we propose a novel method, ComCo, which leverages a contrastive learning framework to assist CLL. Our method includes two key strategies: a positive selection strategy that identifies reliable positive samples and a negative selection strategy that integrates and exploits the information in the complementary labels to construct a negative set. These strategies bring ComCo closer to supervised contrastive learning. Empirically, ComCo achieves significantly better representation learning and outperforms the baseline models and the current state-of-the-art by up to 14.61% in CLL.
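To make the two selection strategies concrete, the following is a minimal PyTorch sketch of the general idea, not the paper's actual ComCo algorithm: it assumes a batch of L2-normalized embeddings, one complementary label per sample, and the model's current class predictions as a crude stand-in for the reliability filter. The function name and all internals are illustrative assumptions.

```python
import torch

def comco_style_contrastive_loss(z, comp_labels, pred_classes, temperature=0.1):
    """Illustrative contrastive loss guided by complementary labels.

    z:            (N, D) L2-normalized embeddings.
    comp_labels:  (N,)   complementary label per sample (a class the
                         sample is known NOT to belong to).
    pred_classes: (N,)   model's current class prediction per sample
                         (assumed proxy for a positive-reliability filter).
    """
    N = z.size(0)
    sim = z @ z.t() / temperature                      # pairwise similarities
    eye = torch.eye(N, dtype=torch.bool, device=z.device)

    # Negative selection: j is a *guaranteed* negative for anchor i when
    # j's complementary label equals i's predicted class, since j provably
    # does not belong to the class we believe i is in.
    neg_mask = pred_classes.unsqueeze(1) == comp_labels.unsqueeze(0)
    neg_mask &= ~eye

    # Positive selection: treat samples sharing the anchor's predicted
    # class as tentative positives (a simplification of a reliability check).
    pos_mask = pred_classes.unsqueeze(1) == pred_classes.unsqueeze(0)
    pos_mask &= ~eye & ~neg_mask

    # Supervised-contrastive-style objective over the selected pairs only.
    exp_sim = torch.exp(sim)
    denom = (exp_sim * (pos_mask | neg_mask)).sum(dim=1) + 1e-8
    log_prob = sim - torch.log(denom).unsqueeze(1)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()
```

The key point of the sketch is the negative mask: a sample whose complementary label equals the anchor's (predicted) class is a provably safe negative, which supplies the contrastive loss with the same kind of certainty that ordinary labels provide in fully supervised contrastive learning.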
