Abstract

Medical image analysis models are not guaranteed to generalize beyond the image and task distributions used for training. This transfer-learning problem has been intensively investigated by the field, and several solutions have been proposed, such as pretraining on computer vision datasets, unsupervised pretraining with pseudo-labels produced by clustering techniques, self-supervised pretraining using contrastive learning with data augmentation, and pretraining based on image reconstruction. Despite being fairly successful in practice, such transfer-learning approaches cannot offer the theoretical guarantees enabled by meta-learning (ML) approaches, which explicitly optimize an objective function that can improve the transferability of a learnt model to new image and task distributions. In this chapter, we present and discuss our recently proposed meta-learning algorithms that can transfer learnt models between different training and testing image and task distributions, where our main contribution lies in the way we design and sample classification and segmentation tasks for training medical image analysis models.
