Abstract
Self-supervised representation learning has achieved promising results on downstream visual tasks in natural images. However, its use in the medical domain, where images share underlying anatomical structure, remains underexplored. To address this shortcoming, we propose a self-supervised multi-task representation learning framework for sequential 2D medical images that explicitly exploits these underlying structures through multiple pretext tasks. Unlike current state-of-the-art methods, which pre-train only the encoder via instance discrimination, the proposed framework pre-trains the encoder and the decoder simultaneously for dense prediction tasks. We evaluate the representations learned by the proposed framework on two public whole heart segmentation datasets from different domains. The experimental results show that our framework outperforms MoCo V2, a strong representation learning baseline. Given only a small amount of labeled data, segmentation networks pre-trained by the proposed framework on unlabeled data achieve better results than counterparts trained with standard supervised approaches.
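The abstract does not specify which pretext tasks the framework uses, but its central idea, attaching multiple pretext heads so that both the encoder and the decoder receive a pretraining signal, can be illustrated. The following is a minimal PyTorch sketch under stated assumptions, not the paper's actual implementation: it assumes two hypothetical pretext tasks, an instance-discrimination (contrastive) head on the encoder features and a slice-reconstruction head on the decoder output. All names (`MultiTaskPretrainer`, `pretrain_step`) and loss weights are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: two assumed pretext tasks give gradient signal to
# both halves of an encoder-decoder network, unlike encoder-only methods.

class Encoder(nn.Module):
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, out_ch, 2, stride=2),
        )

    def forward(self, z):
        return self.net(z)

class MultiTaskPretrainer(nn.Module):
    def __init__(self, feat_ch=64, emb_dim=128):
        super().__init__()
        self.encoder = Encoder(feat_ch=feat_ch)
        self.decoder = Decoder(feat_ch=feat_ch)
        # Projection head for the assumed contrastive pretext task.
        self.proj = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, emb_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)              # dense output: trains the decoder
        emb = F.normalize(self.proj(z))      # embedding: trains the encoder
        return recon, emb

def pretrain_step(model, x_aug1, x_aug2, temperature=0.1,
                  w_recon=1.0, w_con=1.0):
    """One multi-task step: a reconstruction loss updates encoder+decoder,
    while an NT-Xent-style contrastive loss updates the encoder."""
    recon1, emb1 = model(x_aug1)
    recon2, emb2 = model(x_aug2)
    loss_recon = F.mse_loss(recon1, x_aug1) + F.mse_loss(recon2, x_aug2)
    logits = emb1 @ emb2.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(emb1))        # positive pairs on the diagonal
    loss_con = F.cross_entropy(logits, targets)
    return w_recon * loss_recon + w_con * loss_con

# Usage: two augmented views of the same batch of 2D slices.
model = MultiTaskPretrainer()
x1 = torch.randn(8, 1, 64, 64)
x2 = torch.randn(8, 1, 64, 64)
loss = pretrain_step(model, x1, x2)
loss.backward()
```

Because the reconstruction loss flows through the decoder while the contrastive loss shapes the encoder embedding, both halves of a downstream segmentation network receive useful initial weights, which is the property the abstract contrasts with encoder-only pretraining such as MoCo V2.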