Abstract

Semi-supervised learning studies how to improve the performance of a visual learning model when annotated data are far from sufficient. Recent work in semi-supervised deep learning has successfully applied consistency regularization, which encourages a model to make consistent predictions for different perturbed versions of an image. However, most such methods ignore the category correlation of image features, especially when exploiting strong augmentation for unlabeled images. To address this problem, we propose PConMatch, a model that leverages a probabilistic contrastive learning framework to separate the features of strongly-augmented versions from different classes. We design a semi-supervised probabilistic contrastive loss that takes both labeled and unlabeled samples into account, and develop an auxiliary module that generates a probability score measuring the model's prediction confidence for each sample. Specifically, PConMatch first generates a pair of weakly-augmented versions for each labeled sample, and produces a weakly-augmented version together with a corresponding pair of strongly-augmented versions for each unlabeled sample. Second, a probability score module assigns pseudo-labeling confidence scores to the strongly-augmented unlabeled images. Finally, the probability score of each sample is passed to the contrastive loss and combined with consistency regularization, enabling the model to learn better feature representations. Extensive experiments on four publicly available image classification benchmarks demonstrate that the proposed approach achieves state-of-the-art performance. Rigorous ablation studies further validate the effectiveness of the method.
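
The abstract does not give the exact formulation, so the following is only a minimal sketch of the two ingredients it describes: a contrastive loss whose per-sample contribution is weighted by a probability (confidence) score, and a FixMatch-style consistency term in which a confident prediction on the weakly-augmented view supervises the strongly-augmented view. All names, the temperature, and the confidence threshold are illustrative assumptions rather than the paper's actual design.

```python
# Hedged sketch: confidence-weighted contrastive loss + consistency regularization.
# Hyperparameters and function names are assumptions for illustration only.
import torch
import torch.nn.functional as F


def probabilistic_contrastive_loss(features, labels, confidence, temperature=0.1):
    """Supervised-contrastive-style loss where each anchor is weighted by a
    per-sample confidence score in [0, 1].

    features:   (N, D) L2-normalized projections of augmented views
    labels:     (N,)   class indices (ground truth or pseudo-labels)
    confidence: (N,)   probability scores from the auxiliary module
    """
    n = features.size(0)
    sim = features @ features.t() / temperature                  # (N, N) scaled cosine similarities
    off_diag = ~torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(~off_diag, float('-inf'))               # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)    # log p(j | i)

    # Positives: other samples sharing the same (pseudo-)label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & off_diag
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count

    # Down-weight anchors whose pseudo-labels the auxiliary module is unsure about.
    loss = -(confidence * mean_log_prob_pos)
    return loss[pos_mask.any(dim=1)].mean()


def consistency_loss(weak_logits, strong_logits, threshold=0.95):
    """Consistency regularization: the weak view's confident prediction serves
    as a pseudo-label for the strong view."""
    probs = weak_logits.softmax(dim=1).detach()
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()
    ce = F.cross_entropy(strong_logits, pseudo, reduction='none')
    return (ce * mask).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = F.normalize(torch.randn(8, 16), dim=1)
    labels = torch.randint(0, 3, (8,))
    conf = torch.rand(8)
    print(probabilistic_contrastive_loss(feats, labels, conf))
    print(consistency_loss(torch.randn(8, 3), torch.randn(8, 3)))
```

Under these assumptions, the total unlabeled objective would simply sum the two terms, with the confidence score keeping low-quality pseudo-labels from dominating the contrastive term.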
