Abstract

Meta-learning has recently emerged as a data-efficient learning technique for various medical imaging tasks and has helped advance contemporary deep learning models. It improves generalization across imaging tasks by learning both shared and task-discriminative weights for various configurations of imaging tasks during training. However, existing meta-learning models attempt to learn a single set of weight initializations for a neural network, which can be fundamentally restrictive for heterogeneous (multimodal) data. This work develops a multimodal meta-learning model for image reconstruction that augments meta-learning with evolutionary capabilities to encompass the diverse acquisition settings of heterogeneous data. The proposed model, KM-MAML (Kernel Modulation-based Multimodal Meta-Learning), has hypernetworks (auxiliary learners) that evolve to generate mode-specific (context-specific) weights. These weights provide a mode-specific inductive bias for multiple modes by re-calibrating each kernel of the base reconstruction network through a low-rank kernel modulation operation. In addition, we incorporate gradient-based meta-learning (GBML) in the contextual space to update the hypernetwork weights across different modes. In the GBML setting, the hypernetworks and the base reconstruction network provide discriminative mode-specific features and low-level image features, respectively. We extensively evaluate our model on multi-contrast magnetic resonance image (MRI) reconstruction, addressing the research directions highlighted by fastMRI for multimodal learning and rich transfer capabilities across MRI contrasts. Our comparative studies show that the proposed model (i) exhibits superior reconstruction performance over joint training, other meta-learning methods, and various context-specific MRI reconstruction architectures, and (ii) adapts better to 80% and 92% of unseen multi-contrast data contexts, with improvement margins of 0.1 to 0.5 dB in PSNR and around 0.01 in SSIM, respectively. Furthermore, a representation analysis with U-Net as the base network shows that kernel modulation infuses 80% of the mode-specific representation changes in the high-resolution layers. Our source code is available at https://github.com/sriprabhar/KM-MAML/.
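
To make the kernel modulation idea concrete, the following is a minimal, hedged sketch (not the authors' implementation; see the linked repository for KM-MAML itself) of how a hypernetwork could map a mode/context embedding to low-rank factors that re-calibrate a convolution kernel of a base reconstruction network. The layer sizes, the name `context_dim`, and the rank-1 factorization are illustrative assumptions.

# Illustrative sketch only: a hypernetwork produces per-channel factors whose
# rank-1 (low-rank) outer product modulates the base convolution kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelModulatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, context_dim, padding=1):
        super().__init__()
        # Base-network kernel and bias (shared across modes).
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.padding = padding
        # Hypernetwork (auxiliary learner): context embedding -> modulation factors.
        self.hyper = nn.Sequential(
            nn.Linear(context_dim, 64), nn.ReLU(),
            nn.Linear(64, out_ch + in_ch),
        )
        self.out_ch, self.in_ch = out_ch, in_ch

    def forward(self, x, context):
        # context: (context_dim,) vector describing the acquisition mode
        # (e.g., an MRI contrast / undersampling setting).
        factors = self.hyper(context)
        u = factors[: self.out_ch]   # per-output-channel factors
        v = factors[self.out_ch:]    # per-input-channel factors
        # Rank-1 modulation map, broadcast over the spatial kernel dimensions.
        mod = torch.sigmoid(u).view(-1, 1, 1, 1) * torch.sigmoid(v).view(1, -1, 1, 1)
        w = self.weight * mod        # mode-specific re-calibrated kernels
        return F.conv2d(x, w, self.bias, padding=self.padding)

# Usage: one modulated convolution of a U-Net-like base network.
layer = KernelModulatedConv2d(in_ch=1, out_ch=32, kernel_size=3, context_dim=8)
image = torch.randn(4, 1, 160, 160)   # batch of undersampled input images
context = torch.randn(8)              # assumed mode embedding for this batch
features = layer(image, context)      # -> (4, 32, 160, 160)

In the full model, the hypernetwork parameters would themselves be updated with gradient-based meta-learning over different modes, while the base kernels capture shared low-level image features.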
