Abstract

An automatic skin lesion segmentation algorithm not only reduces the dermatologist’s workload in skin cancer analysis but also provides a platform for early cancer prediction. Over the years, several deep learning methods have been proposed to address the skin lesion segmentation problem. However, training deep models usually requires a large-scale annotated dataset, which is rarely feasible in the medical domain due to the annotation burden. In addition, the low-data regime greatly increases the risk of overfitting. To address these limitations in an end-to-end manner, we propose to incorporate unlabelled samples during training. Our network follows a semi-supervised training schema: the first stage performs supervised training to learn the semantic segmentation map, while the second stage applies an unsupervised technique to enrich the encoder module. Specifically, unlike prior work on skin lesion segmentation, we design a surrogate task on top of the convolutional and Transformer representations to learn data-driven features from the images themselves, alleviating the need for a large annotated dataset. The effectiveness of the proposed method is demonstrated on three skin lesion segmentation datasets: ISIC 2018 (Dice score 0.905), ISIC 2017 (Dice score 0.898), and PH2 (Dice score 0.940). In particular, we observed that including the unlabelled samples can increase the Dice score by 2%.
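
To make the two-stage schema concrete, the following is a minimal PyTorch sketch of one combined training step: a supervised Dice loss on labelled images plus an unsupervised surrogate objective on unlabelled ones. The tiny encoder, the rotation-prediction surrogate task, and the loss weight lam are illustrative assumptions for this sketch, not the paper's actual hybrid CNN–Transformer architecture or objective.

    # Minimal sketch of the two-stage semi-supervised schema (assumptions:
    # toy encoder, rotation-prediction surrogate task, loss weight lam).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def dice_loss(logits, target, eps=1e-6):
        """Soft Dice loss for binary lesion masks (logits: B x 1 x H x W)."""
        pred = torch.sigmoid(logits)
        inter = (pred * target).sum(dim=(2, 3))
        union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    class SegNet(nn.Module):
        """Placeholder encoder-decoder; stands in for the hybrid model."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
            self.decoder = nn.Conv2d(32, 1, 1)       # segmentation head
            self.surrogate_head = nn.Linear(32, 4)   # 4 rotation classes (assumed task)

        def forward(self, x):
            feats = self.encoder(x)
            return self.decoder(feats), feats

    model = SegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(labelled, unlabelled, lam=0.1):
        img, mask = labelled
        # Stage 1: supervised segmentation on labelled images.
        logits, _ = model(img)
        sup = dice_loss(logits, mask)
        # Stage 2: surrogate task on unlabelled images to enrich the encoder;
        # here the image's own rotation is the free, data-driven label.
        k = torch.randint(0, 4, (unlabelled.size(0),))
        rotated = torch.stack([torch.rot90(u, int(r), (1, 2))
                               for u, r in zip(unlabelled, k)])
        _, feats = model(rotated)
        pooled = feats.mean(dim=(2, 3))               # global average pooling
        unsup = F.cross_entropy(model.surrogate_head(pooled), k)
        loss = sup + lam * unsup
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

Because the surrogate labels come from the images themselves, the unlabelled pool enlarges the effective training set for the encoder without adding any annotation cost, which is the mechanism the abstract credits for the roughly 2% Dice improvement.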
