Abstract

High-resolution (HR) magnetic resonance (MR) images provide reliable visual information for clinical diagnosis. Recently, super-resolution (SR) methods based on convolutional neural networks (CNNs) have shown great potential for obtaining HR MR images. However, most existing CNN-based SR methods neglect the internal priors of the MR image, which limits SR performance. In this work, we propose a 3D cross-scale feature transformer network (CFTN) to exploit the cross-scale priors within MR features. Specifically, we stack multiple 3D residual channel attention blocks (RCABs) as the backbone. Meanwhile, we design a plug-in mutual-projection feature enhancement module (MFEM) to extract target-scale features with HR cues; it captures the global cross-scale self-similarity within features and can be flexibly inserted at any position in the backbone. Furthermore, we propose a spatial attention fusion module (SAFM) to adaptively adjust and fuse the target-scale features and the up-sampled features extracted by the MFEM and the backbone, respectively. Experimental results show that our CFTN achieves state-of-the-art MR image SR performance.
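The abstract does not detail the internals of the 3D residual channel attention blocks (RCABs) that form the backbone. As a rough illustration only, a minimal NumPy sketch of the squeeze-and-excitation-style channel attention such blocks typically apply to a 5D feature volume, including the residual connection, might look as follows (the function name, weight shapes, and reduction ratio are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def channel_attention_3d(feat, w1, w2):
    """Illustrative channel attention with a residual connection.

    feat: (N, C, D, H, W) feature volume
    w1:   (C, C // r) squeeze weights (r is a reduction ratio)
    w2:   (C // r, C) excitation weights
    """
    # Squeeze: global average pool over the spatial (D, H, W) axes -> (N, C)
    pooled = feat.mean(axis=(2, 3, 4))
    # Excitation: two-layer bottleneck, ReLU then sigmoid gating
    hidden = np.maximum(pooled @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # (N, C), values in (0, 1)
    # Rescale each channel and add the residual, as in an RCAB-style block
    return feat + feat * gate[:, :, None, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(size=(1, 4, 2, 2, 2))
out = channel_attention_3d(feat,
                           rng.normal(size=(4, 2)),
                           rng.normal(size=(2, 4)))
print(out.shape)  # (1, 4, 2, 2, 2): output matches the input shape
```

The per-channel gate reweights informative channels while the residual path preserves the input signal, which is the general idea behind residual channel attention.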
