Semi-supervised learning (SSL) methods aim to address the scarcity of pixel-level annotations in medical image segmentation. Previous approaches typically rely on filtering strategies to obtain pseudo-labels or impose consistency constraints on unlabeled images, but they cannot robustly identify regions of interest (ROIs) with intricate semantic information. As a result, they often produce uncertain predictions in semantically ambiguous areas and yield subpar segmentation outcomes. We observe that this issue stems from the dispersion of the representations of uncertain predictions around the boundaries of the semantically clustered representations of certain predictions. To this end, we propose a novel Uncertainty-Aware Representation Calibration (UA-RC) framework. UA-RC leverages an efficient uncertainty-aware criterion within a teacher–student SSL architecture to identify the representations of uncertain predictions, and then calibrates them via a semantic contrast paradigm that constructs positive prototypes and negative representations from certain predictions. Furthermore, UA-RC incorporates class-wise memory banks that store massive, diverse representations from the training data, facilitating the calibration process and enabling a better disentanglement of ROI and background representations. Extensive experiments on four datasets, including Kvasir-SEG, ISIC-2018, BUL-2020, and ACDC, demonstrate the competitive edge of UA-RC over existing alternatives. Code is available at https://github.com/Wu0409/UARC.
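To make the calibration idea concrete, the following is a minimal NumPy sketch of the pipeline the abstract describes: a confidence criterion flags uncertain pixels, certain pixels build per-class positive prototypes, and an InfoNCE-style semantic contrast pulls uncertain representations toward their pseudo-class prototype. All function names, the thresholding rule, and the loss form are illustrative assumptions, not the paper's exact formulation (which also draws negatives from class-wise memory banks).

```python
import numpy as np

def uncertainty_mask(probs, tau=0.7):
    """Flag pixels whose top teacher confidence falls below tau.
    probs: (N, C) softmax outputs. A simple confidence criterion is
    assumed here; the paper's exact uncertainty criterion may differ."""
    return probs.max(axis=1) < tau

def class_prototypes(feats, pseudo, certain, num_classes):
    """Positive prototypes: mean feature of certain pixels per pseudo-class."""
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        sel = certain & (pseudo == c)
        if sel.any():
            protos[c] = feats[sel].mean(axis=0)
    return protos

def calibration_loss(feats, pseudo, certain, protos, temp=0.1):
    """InfoNCE-style semantic contrast over uncertain pixels:
    pull each L2-normalized uncertain feature toward its pseudo-class
    prototype, push it away from the other class prototypes (negatives)."""
    unc = ~certain
    if not unc.any():
        return 0.0
    f = feats[unc] / (np.linalg.norm(feats[unc], axis=1, keepdims=True) + 1e-8)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    logits = f @ p.T / temp                      # (M, C) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(f.shape[0]), pseudo[unc]].mean()
```

In a full teacher–student setup, this loss would be added to the usual supervised and consistency terms, with the memory banks supplying a larger, more diverse pool of positives and negatives than a single mini-batch.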