Abstract

Medical image analysis models are not guaranteed to generalize beyond the image and task distributions used for training. This transfer-learning problem has been intensively investigated by the field, and several solutions have been proposed, such as pretraining on computer vision datasets, unsupervised pretraining with pseudo-labels produced by clustering techniques, self-supervised pretraining using contrastive learning with data augmentation, and pretraining based on image reconstruction. Despite being fairly successful in practice, such transfer-learning approaches cannot offer the theoretical guarantees enabled by meta-learning (ML) approaches, which explicitly optimize an objective function designed to improve the transferability of a learnt model to new image and task distributions. In this chapter, we present and discuss our recently proposed meta-learning algorithms that can transfer learned models between different training and testing image and task distributions, where our main contribution lies in the way we design and sample classification and segmentation tasks to train medical image analysis models.
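To make the kind of explicitly optimized transfer objective referred to above concrete, the following is a minimal, hypothetical sketch of a MAML-style meta-training loop with episodic task sampling, written in PyTorch. The `sample_task` function, the toy network, and all hyperparameters are illustrative placeholders under assumed settings; they do not reproduce the chapter's actual algorithms or its classification/segmentation task-sampling strategy.

```python
# Hypothetical MAML-style meta-training sketch (not the chapter's method).
import torch
import torch.nn as nn

def sample_task():
    """Placeholder task sampler: returns support/query batches for one
    synthetic binary classification task (a stand-in for sampling a
    medical classification or segmentation task)."""
    x_s, y_s = torch.randn(8, 32), torch.randint(0, 2, (8,))
    x_q, y_q = torch.randn(8, 32), torch.randint(0, 2, (8,))
    return (x_s, y_s), (x_q, y_q)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.01

for step in range(100):                      # meta-training iterations
    meta_opt.zero_grad()
    for _ in range(4):                       # tasks per meta-batch
        (x_s, y_s), (x_q, y_q) = sample_task()
        # Inner loop: one gradient step on the support set, kept
        # differentiable so the outer loss reaches the initial weights.
        params = dict(model.named_parameters())
        loss_s = loss_fn(torch.func.functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(loss_s, list(params.values()), create_graph=True)
        adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
        # Outer loss: evaluate the adapted weights on the query set and
        # backpropagate into the shared initialization.
        loss_q = loss_fn(torch.func.functional_call(model, adapted, (x_q,)), y_q)
        loss_q.backward()
    meta_opt.step()
```

The design choice illustrated here is the bi-level structure: the inner loop adapts to one sampled task, while the outer step updates the shared initialization so that such adaptation transfers across tasks; how tasks are designed and sampled is precisely the contribution the abstract highlights.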
