Abstract

Reconstructing zero-filled MR images (ZF) from partial k-space with convolutional neural networks (CNNs) is an important way to accelerate MRI. However, because the different components in ZF receive little distinct attention, it is challenging to learn the mapping from ZF to the targets effectively. To ameliorate this issue, we propose a Detail and Structure Mutually Enhancing Network (DSMENet), which benefits from the complementarity of the Structure Reconstruction UNet (SRUN) and the Detail Feature Refinement Module (DFRM). The SRUN learns structure-dominated information at multiple scales, while the DFRM enriches detail-dominated information from coarse to fine. Bidirectional alternate connections then exchange information between the two. Moreover, the Detail Representation Construction Module (DRCM) extracts a valuable initial detail representation for the DFRM, and the Detail Guided Fusion Module (DGFM) facilitates the deep fusion of this complementary information. With their help, the various components in ZF receive discriminative attention and are mutually enhanced. In addition, performance can be further improved by Deep Enhanced Restoration (DER), a strategy based on recursion and constraint. Extensive experiments on the fastMRI and CC-359 datasets demonstrate that DSMENet is robust across body parts, under-sampling rates, and masks. Furthermore, DSMENet achieves promising qualitative and quantitative results, including a competitive NMSE of 0.0268, PSNR of 33.7, and SSIM of 0.7808 on the fastMRI 4× single-coil knee leaderboard.
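To illustrate the bidirectional alternate connections described above, the following is a minimal sketch of one stage in which a structure-dominated branch and a detail-dominated branch exchange features. The module name `ExchangeStage`, the use of plain 3×3 convolutions, and the 1×1 fusion layers are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one bidirectional exchange stage between a structure
# branch (SRUN-like) and a detail branch (DFRM-like); layer choices are assumed.
import torch
import torch.nn as nn


class ExchangeStage(nn.Module):
    """One stage where structure and detail features enhance each other."""

    def __init__(self, channels: int):
        super().__init__()
        self.structure_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.detail_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        # 1x1 convolutions fuse the concatenated features coming from the other branch.
        self.to_structure = nn.Conv2d(2 * channels, channels, 1)
        self.to_detail = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, structure: torch.Tensor, detail: torch.Tensor):
        s = self.structure_conv(structure)
        d = self.detail_conv(detail)
        # Bidirectional connection: each branch receives the other's features.
        s_out = self.to_structure(torch.cat([s, d], dim=1))
        d_out = self.to_detail(torch.cat([d, s], dim=1))
        return s_out, d_out


if __name__ == "__main__":
    stage = ExchangeStage(channels=32)
    structure_feats = torch.randn(1, 32, 64, 64)  # structure-dominated features
    detail_feats = torch.randn(1, 32, 64, 64)     # detail-dominated features
    s, d = stage(structure_feats, detail_feats)
    print(s.shape, d.shape)  # both torch.Size([1, 32, 64, 64])
```

In the full network, several such stages would be stacked so that information flows between the two branches repeatedly, which is the mutual-enhancement idea the abstract describes.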
