Abstract

Handling datasets in which only a few samples are labeled has recently become a popular research topic, and Semi-Supervised Learning (SSL) has shown great capacity and potential for it. However, existing methods tend to focus on the relationship between unlabeled and labeled samples, or on the information carried by unlabeled samples individually, while rarely exploring the hidden information shared among unlabeled samples of the same category. To address this shortcoming, we use an intra-class similarity measure to exploit the information between unlabeled samples of the same class and, on this basis, introduce a new intra-class similarity loss term. In addition, to improve the accuracy of pseudo-labels in deep semi-supervised learning, we propose an adaptive extension of the Label Propagation algorithm. The proposed method outperforms many state-of-the-art methods on CIFAR-10, CIFAR-100 and Mini-ImageNet. The experimental results show that adding the intra-class similarity loss term and the adaptive Label Propagation extension to a deep semi-supervised learning model effectively improves its performance.
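The abstract does not give the paper's exact formulation, but an intra-class similarity loss of the kind described can be sketched as follows: for unlabeled samples that share the same (pseudo-)label, penalize dissimilarity between their embeddings. This is a minimal illustrative sketch, assuming cosine similarity over L2-normalized features; the function name and averaging scheme are hypothetical, not taken from the paper.

```python
import numpy as np

def intra_class_similarity_loss(features, pseudo_labels):
    """Illustrative intra-class similarity loss (not the paper's exact term).

    features: (N, D) array of embeddings.
    pseudo_labels: (N,) integer array of (pseudo-)class assignments.
    Returns the mean of (1 - cosine similarity) over all same-class pairs,
    so the loss is 0 when same-class embeddings are perfectly aligned.
    """
    # L2-normalize so that dot products equal cosine similarities
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    f = features / np.clip(norms, 1e-12, None)

    total, pairs = 0.0, 0
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]
        if len(idx) < 2:
            continue  # a class needs at least two samples to form a pair
        sim = f[idx] @ f[idx].T                 # pairwise cosine similarities
        iu = np.triu_indices(len(idx), k=1)     # unique pairs, excluding self-pairs
        total += np.sum(1.0 - sim[iu])
        pairs += len(iu[0])
    return total / pairs if pairs else 0.0
```

Minimizing such a term pulls unlabeled embeddings with the same pseudo-label together, which is one way to exploit the within-class information the abstract refers to.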
