Abstract

In recent years, convolutional neural network (CNN) based deep learning architectures have achieved great success in medical image segmentation. However, CNNs usually rely on abundant labeled data for training, and collecting labeled training data is time-consuming and expensive. Therefore, in addition to common unsupervised learning methods, a series of self-supervised learning (SSL) methods have been proposed for medical image analysis that exploit large amounts of unlabeled data. These SSL strategies usually extract latent supervisory signals through pretext tasks and help the network learn a feature representation. However, the feature representations learned in pretext tasks are often not directly related to downstream tasks such as segmentation. We assume that the more a pretext task helps the model learn the structural features of the image, the better the model will perform on the downstream segmentation task. In this paper, we propose an SSL strategy based on the max-tree representation to extract image structure information; the CNN learns this max-tree representation in the pretext task. To the best of our knowledge, we are the first to take structure information into account in an SSL pretext task. Extensive experiments show that our SSL strategy based on the max-tree representation helps the CNN learn abundant structural information, which is significantly beneficial for the downstream segmentation task.
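To make the idea concrete, here is a minimal sketch of what a max-tree-derived pretext target could look like, assuming scikit-image's `skimage.morphology.max_tree` and a hypothetical "node depth" encoding of the tree (the paper's actual encoding of the max-tree representation may differ): each pixel is labeled with the depth of its max-tree node, producing a dense structural map that a CNN could be trained to regress.

```python
# A minimal sketch, assuming scikit-image's max_tree and a hypothetical
# per-pixel "node depth" encoding; the paper's actual pretext target may
# encode the max-tree differently.
import numpy as np
from skimage.morphology import max_tree

def max_tree_depth_map(image: np.ndarray) -> np.ndarray:
    """Label each pixel with the depth of its max-tree node."""
    parent, traverser = max_tree(image, connectivity=1)
    flat_parent = parent.ravel()
    flat_image = image.ravel()
    depth = np.zeros(image.size, dtype=np.int32)
    # traverser orders pixels so that every parent is visited before its
    # children; traverser[0] is the root (its own parent), so skip it.
    for p in traverser[1:]:
        q = flat_parent[p]
        # Depth grows only when the gray level changes, i.e. when we
        # descend to a new max-tree node (pixels in the same flat zone
        # share a node and hence a depth).
        depth[p] = depth[q] + (flat_image[p] != flat_image[q])
    return depth.reshape(image.shape)

# Toy example: two bright components on a dark background.
img = np.array([[0, 0, 0, 0, 0],
                [0, 3, 3, 0, 2],
                [0, 3, 1, 0, 2],
                [0, 0, 0, 0, 0]], dtype=np.uint8)
print(max_tree_depth_map(img))
```

Regressing such a map forces the network to encode the nesting of connected components across gray levels, which is the kind of structural information the abstract argues transfers to the downstream segmentation task.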
