Abstract

We propose a semi-supervised segmentation method based on multiscale contrastive learning to address the scarcity of annotations in medical image segmentation tasks. We apply perturbations to the input image and to the encoded features, and use cross-supervision to make the outputs as consistent as possible, thereby improving the generalizability of the model. Contrastive learning at two scales, patch-level and pixel-level, is employed to enhance the intra-class compactness and inter-class separability of the features. We evaluate the proposed model on three public datasets for brain tumor, left atrial, and cellular nuclei segmentation. Experiments show that our model outperforms state-of-the-art methods.

Clinical relevance: The proposed method can be used for medical image segmentation with limited annotated data and achieves performance comparable to the fully annotated setting. Such an approach can be easily extended to other clinical applications.
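To illustrate the pixel-level contrastive objective mentioned above, the following is a minimal sketch of a supervised InfoNCE-style loss over pixel embeddings, where pixels of the same class are treated as positives and all others as negatives. This is an assumed formulation for illustration only; the abstract does not specify the paper's exact loss, sampling strategy, or temperature.

```python
import numpy as np

def pixel_contrastive_loss(features, labels, temperature=0.1):
    """Supervised InfoNCE-style contrastive loss over pixel embeddings.

    features: (N, D) array of pixel embeddings (normalized internally)
    labels:   (N,) class index per pixel
    Pixels sharing a class label are positives; all other pixels are negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

    # Row-wise log-softmax: every other pixel is a candidate match
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))

    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)

    # Average log-probability of the positives per anchor, then negate
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0
    pos_log_prob = np.where(same, log_prob, 0.0).sum(axis=1)
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()
```

A patch-level variant would apply the same loss to pooled embeddings of image patches instead of individual pixels; minimizing this term pulls same-class features together (intra-class compactness) and pushes different-class features apart (inter-class separability).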
