Abstract
In this paper, we tackle the problem of learning from label proportions (LLP), where the training data are arranged into bags and only the proportions of the different categories in each bag are available. Existing efforts mainly focus on training a model with only this limited proportion information in a weakly supervised manner, which results in an apparent performance gap to supervised learning as well as computational inefficiency. In this work, we propose a multi-task pipeline called SELF-LLP to make full use of the information contained in the data and the model itself. Specifically, to learn richer representations from the data, we leverage self-supervised learning as a plug-in auxiliary task that yields more transferable visual representations. The main insight is to benefit from self-supervised representation learning with deep models, which improves classification performance by a large margin. Meanwhile, to better exploit the implicit knowledge in the model itself, we incorporate a self-ensemble strategy that guides the training process with auxiliary supervision constructed by aggregating multiple previous network predictions. A ramp-up mechanism is further employed to stabilize the training process. In extensive experiments, our method demonstrates compelling advantages in both accuracy and efficiency over several state-of-the-art LLP approaches.
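As a concrete illustration of the self-ensemble and ramp-up components described above, the sketch below shows one plausible way to aggregate previous network predictions into an exponential moving average target and to ramp up the weight of the resulting consistency term. The function names, the EMA coefficient `alpha`, and the sigmoid-shaped ramp-up schedule are assumptions made here for illustration only; the paper's exact formulation may differ.

```python
import numpy as np

def update_ensemble_targets(ensemble, preds, epoch, alpha=0.6):
    """Fold the current epoch's predictions into a running ensemble
    (EMA over epochs), which serves as auxiliary supervision."""
    ensemble = alpha * ensemble + (1.0 - alpha) * preds
    # Bias correction so early epochs are not dominated by the zero init.
    targets = ensemble / (1.0 - alpha ** (epoch + 1))
    return ensemble, targets

def rampup_weight(epoch, rampup_epochs=80, max_weight=1.0):
    """Sigmoid-shaped ramp-up: the self-ensemble term starts near zero
    and grows smoothly, stabilizing early training."""
    if epoch >= rampup_epochs:
        return max_weight
    phase = 1.0 - epoch / rampup_epochs
    return max_weight * float(np.exp(-5.0 * phase * phase))
```

In such a scheme the total loss at each step would combine the bag-level proportion loss, the self-supervised auxiliary loss, and this ramped-up consistency term against the ensemble targets.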