Abstract

Deep convolutional networks have become ubiquitous for image segmentation, but relatively little is known about how to leverage unlabeled data to further improve their performance. Here we introduce a new approach, Self-supervised deep learning for Segmentation (SeSe-Net), that uses a ‘Worker’ (W) neural network to segment input images and a ‘Supervisor’ (S) neural network to evaluate the quality of the segmentation results. We further propose a two-stage training process. In the first stage, W learns to segment on a standard labeled dataset and generates training sets for S; from these, S learns how well W performs on the segmentation task. In the second stage, S transfers the knowledge acquired in the first stage to supervise the learning process of W on unlabeled datasets, driving it towards better segmentation performance. We show that SeSe-Net yields a significant performance boost from additional unlabeled data. Furthermore, with less than 5% of the labeled training data, SeSe-Net performs on par with a strong baseline trained on the full training set in terms of the Dice Metric (DM) on three different datasets. A theoretical analysis is also given to establish the convergence of SeSe-Net.
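The Worker/Supervisor scheme described above can be sketched in toy form. This is only an illustration of the two-stage flow, not the paper's architecture: the `Worker` (a threshold segmenter), the `Supervisor` (a trivial score regressor), and the stage-2 update rule are all hypothetical stand-ins for the actual neural networks and training objective.

```python
import numpy as np

def dice_metric(pred, target, eps=1e-8):
    """Dice Metric (DM): 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

class Worker:
    """Stand-in for the Worker network W: segments an image (here, by thresholding)."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
    def segment(self, image):
        return (image > self.threshold).astype(np.uint8)

class Supervisor:
    """Stand-in for the Supervisor network S: predicts the Dice score of a segmentation.
    Here it is a trivial constant regressor, purely to show the data flow."""
    def __init__(self):
        self.bias = 0.0
    def fit(self, triples):  # triples: [(image, mask, dice), ...]
        self.bias = float(np.mean([d for _, _, d in triples]))
    def score(self, image, mask):
        return self.bias

rng = np.random.default_rng(0)

# Stage 1: W segments labeled images; S is trained on (input, W's output, DM) triples.
labeled = [(rng.random((8, 8)), (rng.random((8, 8)) > 0.5).astype(np.uint8))
           for _ in range(4)]
w, s = Worker(), Supervisor()
triples = [(img, w.segment(img), dice_metric(w.segment(img), gt)) for img, gt in labeled]
s.fit(triples)

# Stage 2: on unlabeled images, S's predicted quality supervises the update of W
# (here, a naive search over thresholds maximizing S's mean score).
unlabeled = [rng.random((8, 8)) for _ in range(4)]
w.threshold = max(
    np.linspace(0.1, 0.9, 9),
    key=lambda t: np.mean([s.score(img, Worker(t).segment(img)) for img in unlabeled]),
)
```

In the paper's setting, both W and S would be deep networks, S would be trained to regress the true Dice score of W's outputs, and stage 2 would update W by gradient descent on S's predicted quality rather than by the toy search shown here.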
