Abstract

Multi-modal Magnetic Resonance Imaging (MRI) super-resolution (SR) and reconstruction aim to obtain a high-quality target image from sparsely sampled signals under the guidance of a reference image. However, existing techniques typically assume that the input multi-modal MR images are well aligned, which is difficult to achieve in clinical practice. This assumption makes their algorithms vulnerable to misalignment. Moreover, they often neglect the non-local characteristics shared within and across modalities. In this work, we propose a MisAlignment-Resistant Deep Unfolding Network (MAR-DUN) built on a tailored gradient descent module (GDM) and proximal mapping module (PMM) for multi-modal MRI SR and reconstruction. In the GDM, we employ an adaptive step-size sub-network (ASS-Net) to enhance the texture representation capacity of MAR-DUN. In the PMM, we propose a cross-modality non-local module (CNLM) featuring an inverse deformation layer (IDL). The IDL aligns features between the target and reference images by adaptively learning their spatial transformations, thereby improving the robustness of the network and allowing the CNLM to further explore cross-modality non-local characteristics. The CNLM, in turn, establishes both intra-modality and inter-modality non-local dependencies to fully exploit the correlations between the target and reference images. Extensive experimental results show that our method consistently achieves state-of-the-art reconstruction performance in both aligned and misaligned scenarios, demonstrating its significant promise for real-world applications.
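To make the unfolding structure concrete, below is a minimal PyTorch sketch of a single GDM + PMM stage. Everything in it is illustrative rather than the authors' exact design: the ASS-Net interface (predicting a positive scalar step size from the current estimate), the plain CNN standing in for the full PMM (which in the paper contains the IDL and CNLM), and the toy average-pooling operator `A` with its crude adjoint `At` are all assumptions made for the sketch.

```python
# Minimal sketch of one unfolding stage (GDM + PMM), assuming the unfolded
# objective min_x ||A x - y||^2 + R(x). Module shapes and interfaces are
# hypothetical placeholders, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASSNet(nn.Module):
    """Assumed interface for the adaptive step-size sub-network:
    predicts a positive per-sample scalar step size from the estimate."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Softplus(),  # keep the step size positive
        )

    def forward(self, x):
        return self.net(x).view(-1, 1, 1, 1)  # broadcastable scalar per sample

class Stage(nn.Module):
    """One iteration of the unfolded optimization."""
    def __init__(self, channels=1):
        super().__init__()
        self.ass = ASSNet(channels)
        # Stand-in for the PMM: a plain residual CNN denoiser. The paper's
        # PMM additionally contains the IDL and CNLM, omitted here.
        self.prox = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        # GDM: gradient step on the data-fidelity term ||A x - y||^2,
        # with the step size predicted by ASS-Net.
        step = self.ass(x)
        x = x - step * At(A(x) - y)
        # PMM: learned proximal mapping (refinement of the estimate).
        return x + self.prox(x)

# Toy usage with a 2x average-pooling "acquisition" and a rough adjoint:
A  = lambda x: F.avg_pool2d(x, 2)
At = lambda r: F.interpolate(r, scale_factor=2.0)
y = torch.randn(2, 1, 32, 32)   # low-resolution measurements
x = At(y)                        # initial estimate
x_next = Stage(channels=1)(x, y, A, At)
print(x_next.shape)              # torch.Size([2, 1, 64, 64])
```

In a full model, several such stages would be stacked (optionally with shared weights), mirroring a fixed number of iterations of the underlying optimization.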
