Abstract
Contemporary deep learning methods for image reconstruction have shown promising results over classical methods. However, they are not robust to variations in imaging conditions at test time. The training process does not account for explicit variations in the contextual information about the imaging task, such as image control parameters and input settings that could hold a causal relationship to the visual content of the reconstructed image. In this work, we propose dynamic weight prediction (DWP) networks to learn the contextual settings and the relationships between them. This model-based meta-learning approach offers two valuable capabilities in a single network: (i) scalability to multiple input settings and (ii) tunability to continuously varying control settings given knowledge of only a few settings. The proposed network, MCI-HyperNet, is a controllable image reconstruction network with DWP sub-networks conditioned on multiple contextual settings. The proposed approach strikes a balance between reliable context-adaptive weight learning and context-agnostic weight learning to obtain robust image reconstructions. We extensively evaluate our network on accelerated MRI reconstruction, following the key research directions identified in the fastMRI challenge results. The proposed method exhibits (i) superior reconstruction quality, with improvement margins of ∼0.5 dB in PSNR and ∼0.01 in SSIM over adaptive reconstruction methods for cardiac MRI and multiple brain MRI contrasts, (ii) better scalability to multiple anatomies, contrasts, and under-sampling mask patterns, with accuracy improvement margins of ∼0.2 dB in PSNR and ∼0.005 in SSIM over joint training and context-specific models for knee MRI, and (iii) generality to 40 arbitrary acceleration factors, outperforming joint training, when trained on just five acceleration factors. Our model can learn multiple contrast sequences through continual learning without catastrophically forgetting previously learned MRI sequences. Furthermore, our method is benchmarked for reconstruction quality against other MRI reconstruction architectures trained for a specific contextual setting. In addition, an interpretability analysis using a relational knowledge distillation measure shows that our model exhibits superior context-specific and relational knowledge across contexts over jointly trained models. Our source code is publicly available at https://github.com/sriprabhar/MCI-HyperNet.
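To make the core idea of dynamic weight prediction concrete, the sketch below shows a convolutional layer whose kernel is generated by a small hypernetwork from a context vector (e.g., acceleration factor, anatomy or contrast codes). This is a minimal illustrative PyTorch sketch, not the authors' MCI-HyperNet implementation; the class and variable names (`DWPConv2d`, `ctx`) are assumptions introduced here, and the actual model is available at the linked repository.

```python
# Minimal sketch of dynamic weight prediction (DWP), assuming PyTorch.
# Names are illustrative, not taken from the MCI-HyperNet source code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DWPConv2d(nn.Module):
    """Conv layer whose kernel and bias are predicted from a context embedding."""
    def __init__(self, ctx_dim, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        # Hypernetwork: maps contextual settings (e.g., acceleration factor,
        # contrast/anatomy codes, mask type) to the conv kernel and bias.
        self.hyper = nn.Sequential(
            nn.Linear(ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch * k * k + out_ch),
        )

    def forward(self, x, ctx):
        params = self.hyper(ctx)                      # shape: (out*in*k*k + out,)
        w, b = params[:-self.out_ch], params[-self.out_ch:]
        w = w.view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, w, b, padding=self.k // 2)

# Usage: a context-conditioned layer inside an otherwise ordinary
# (context-agnostic) reconstruction CNN, so adaptive and shared weights coexist.
ctx = torch.tensor([4.0, 1.0, 0.0])   # e.g., [acceleration, contrast one-hot, ...]
layer = DWPConv2d(ctx_dim=3, in_ch=1, out_ch=8)
y = layer(torch.randn(1, 1, 64, 64), ctx)
print(y.shape)                        # torch.Size([1, 8, 64, 64])
```

Because the kernel is a continuous function of `ctx`, such a layer can in principle be tuned to control settings not seen during training, which is the tunability property the abstract describes.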