Self-supervised representation learning has achieved promising results on downstream visual tasks for natural images. However, its use in the medical domain, where images share an underlying anatomical structure, remains underexplored. To address this shortcoming, we propose a self-supervised multi-task representation learning framework for sequential 2D medical images, which explicitly exploits the underlying structure via multiple pretext tasks. Unlike current state-of-the-art methods, which are designed to pre-train only the encoder through instance discrimination, the proposed framework pre-trains the encoder and the decoder simultaneously for dense prediction tasks. We evaluate the representations extracted by the proposed framework on two public whole-heart segmentation datasets from different domains. The experimental results show that our framework outperforms MoCo V2, a strong representation learning baseline. Given only a small amount of labeled data, segmentation networks pre-trained by the proposed framework on unlabeled data achieve better results than their counterparts trained with standard supervised approaches.
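To make the core idea concrete, here is a minimal PyTorch-style sketch of pre-training an encoder and a decoder jointly with multiple pretext tasks on unlabeled 2D slices. The toy architecture, the specific pretext tasks chosen here (image reconstruction as a dense task and rotation prediction as a global task), and all names are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy convolutional encoder-decoder standing in for the segmentation backbone."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 2, stride=2),
        )

    def forward(self, x):
        z = self.encoder(x)          # dense features: supervise the encoder
        return z, self.decoder(z)    # dense output: supervise the decoder too

backbone = EncoderDecoder()
# Hypothetical global pretext head (4-way rotation prediction), used here
# only as a stand-in for the paper's pretext tasks.
rot_head = nn.Linear(16, 4)

recon_loss = nn.MSELoss()
rot_loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(rot_head.parameters()), lr=1e-3
)

def pretrain_step(slices, rot_labels):
    """One multi-task step on a batch of unlabeled 2D slices.

    Reconstruction trains both encoder and decoder (dense prediction),
    while rotation prediction trains the encoder (global); their sum is
    the multi-task pretext objective.
    """
    z, recon = backbone(slices)
    logits = rot_head(z.mean(dim=(2, 3)))    # global average pooling
    loss = recon_loss(recon, slices) + rot_loss(logits, rot_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random "slices" (batch of 8 single-channel 64x64 images).
x = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 4, (8,))
print(pretrain_step(x, labels))
```

Because the pretext losses already flow through both halves of the network, the pre-trained encoder-decoder pair can then be fine-tuned directly for segmentation, rather than attaching a randomly initialized decoder to an encoder pre-trained by instance discrimination alone.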
