Abstract
Semi-supervised learning (SSL) aims to reduce reliance on labeled data. However, achieving high performance often requires increasingly complex algorithms, and generic SSL algorithms tend to be less effective on image classification tasks. In this study, we propose ComMatch, a simpler and more effective algorithm that combines negative learning, dynamic thresholding, and predictive stability discrimination with the consistency regularization approach. Negative learning facilitates training by selecting negative pseudo-labels during stages when the network has low confidence, and dynamic thresholds allow ComMatch to filter positive and negative pseudo-labels more accurately as training progresses. Since high confidence does not always imply high accuracy due to network calibration issues, we also introduce network predictive stability, which filters out samples by comparing the standard deviation of the network outputs against a set threshold, thus largely reducing the influence of noise during training. ComMatch significantly outperforms existing algorithms on several datasets, especially when little labeled data is available. For example, ComMatch achieves 1.82% and 3.6% error rate reductions over FlexMatch and FixMatch, respectively, on CIFAR-10 with 40 labels; with 4000 labeled samples, it achieves 0.54% and 2.65% lower error rates than FixMatch and MixMatch, respectively.
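The selection ideas sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch assuming softmax outputs are recorded over several recent evaluations; the function name, thresholds, and stability window are illustrative assumptions, not the paper's actual hyperparameters or implementation.

```python
import numpy as np

def select_pseudo_labels(prob_history, tau_pos=0.95, tau_neg=0.05, tau_std=0.02):
    """Sketch of confidence- and stability-based pseudo-label selection.

    prob_history: array of shape (T, N, C) holding softmax outputs for N
    unlabeled samples over the last T evaluations.
    Returns a (N,) mask of confident positive pseudo-labels and an (N, C)
    mask of negative pseudo-labels (classes the network confidently rules out),
    keeping only samples whose predictions are stable over time.
    """
    mean_probs = prob_history.mean(axis=0)        # (N, C) averaged confidence
    std_probs = prob_history.std(axis=0)          # (N, C) prediction variability
    stable = std_probs.max(axis=1) < tau_std      # predictive stability filter

    top_prob = mean_probs.max(axis=1)
    positive_mask = stable & (top_prob >= tau_pos)             # high-confidence positives
    negative_mask = stable[:, None] & (mean_probs <= tau_neg)  # confidently excluded classes
    return positive_mask, negative_mask
```

In this sketch, a sample contributes a positive pseudo-label only when its averaged confidence is high and its outputs are stable, while classes with consistently near-zero probability become negative pseudo-labels; in the actual method the thresholds would be adjusted dynamically as training progresses.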