Abstract
In a semi-supervised setting, directly training a deep discriminative model on partially labeled images often suffers from overfitting and poor performance, because only a small number of labeled images are available and errors in label propagation are, in many cases, inevitable. In this paper, we introduce an auxiliary clustering task to explore the structure of the image data, and judiciously weight unlabeled data to alleviate the influence of ambiguous data on model training. For this purpose, we propose a cross-task network composed of two streams that jointly learns two tasks: classification and clustering. Based on the model predictions, a large number of pairwise constraints can be generated from unlabeled images and fed to the clustering stream. Since pairwise constraints encode weak supervision information, the clustering is tolerant of labeling errors. Unlabeled images are weighted according to their distances to the discovered clusters, and a better discriminative model is trained on the classification stream with a weighted softmax loss. Furthermore, a self-paced learning paradigm is adopted to gradually train our deep model from easy examples to difficult ones. Experimental results on widely used image classification datasets confirm the effectiveness and superiority of the proposed approach.
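The weighted softmax loss mentioned above can be illustrated with a minimal sketch. This is an assumption-laden example, not the paper's implementation: the function name and the idea of deriving each example's weight from its cluster distance are hypothetical placeholders for whatever scheme the paper actually uses.

```python
import numpy as np

def weighted_softmax_loss(logits, labels, weights):
    """Per-example weighted softmax cross-entropy (illustrative sketch only;
    the paper's exact weighting scheme may differ).

    logits  : (N, C) raw class scores
    labels  : (N,)   integer class indices (possibly propagated, hence noisy)
    weights : (N,)   per-example weights, e.g. larger for unlabeled images
                     that lie close to a discovered cluster (assumed scheme)
    """
    # Numerically stable log-softmax
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each example's assigned class
    nll = -log_probs[np.arange(len(labels)), labels]
    # Weighted average: ambiguous (low-weight) examples contribute less
    return (weights * nll).sum() / weights.sum()
```

Down-weighting an example whose propagated label disagrees with the model's prediction reduces its contribution to the loss, which is the mechanism by which the weighting alleviates the influence of ambiguous data.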