Challenges arise in achieving high-resolution Magnetic Resonance Imaging (MRI) to improve disease diagnosis accuracy due to limitations in hardware, patient discomfort, long acquisition times, and high costs. While Convolutional Neural Networks (CNNs) have shown promising results in MRI super-resolution, they often overlook the structural similarity and prior information shared by consecutive MRI slices. By leveraging information from sequential slices, more robust features can be obtained, potentially leading to higher-quality MRI slices. We propose a multi-slice two-dimensional (2D) MRI super-resolution network that combines a Generative Adversarial Network (GAN) with feature fusion and a pre-trained slice interpolation network to achieve three-dimensional (3D) super-resolution. The proposed model takes three consecutively acquired low-resolution (LR) MRI slices along a given axis and reconstructs the slices along the remaining two axes. The network effectively enhances both in-plane and out-of-plane resolution along the sagittal axis while addressing the computational and memory constraints of 3D super-resolution. The proposed generator contains an In-plane and Out-of-plane Attention (IOA) network that dynamically fuses both in-plane and out-of-plane MRI features. For out-of-plane attention, the network merges features according to the similarity distance between them; for in-plane attention, it employs a two-level pyramid structure with varying receptive fields to extract features at different scales, capturing both global and local information. Subsequently, to achieve 3D MRI super-resolution, a pre-trained slice interpolation network takes two consecutive super-resolved MRI slices and generates a new intermediate slice.
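The similarity-distance fusion used by the out-of-plane attention can be illustrated with a minimal numpy sketch. This is a hypothetical illustration, not the paper's implementation: it assumes an L2 feature distance, a softmax over neighbor similarities, and a residual combination, any of which may differ in the actual network.

```python
import numpy as np

def fuse_out_of_plane(center, neighbors):
    """Fuse feature maps from adjacent slices into the center slice,
    weighting each neighbor by its similarity (negative L2 distance)
    to the center feature map. Hypothetical sketch of similarity-
    distance-based fusion; names and details are assumptions."""
    # Similarity scores: closer feature maps get larger (less negative) scores.
    scores = np.array([-np.linalg.norm(center - n) for n in neighbors])
    scores -= scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over neighbors
    fused = sum(w * n for w, n in zip(weights, neighbors))
    return center + fused                     # residual combination (assumption)

rng = np.random.default_rng(0)
center = rng.standard_normal((8, 16, 16))        # C x H x W features, slice t
neighbors = [rng.standard_normal((8, 16, 16)),   # slice t-1
             rng.standard_normal((8, 16, 16))]   # slice t+1
out = fuse_out_of_plane(center, neighbors)
print(out.shape)  # (8, 16, 16)
```

The key idea is that neighbors whose features resemble the center slice contribute more to the fused representation, which is one way to exploit the inter-slice structural similarity the abstract refers to.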
To further enhance network performance and perceptual quality, we introduce a feature up-sampling layer and a feature extraction block with the Scaled Exponential Linear Unit (SELU). Moreover, our super-resolution network incorporates VGG loss from a fine-tuned VGG-19 network to provide additional enhancement. Through experimental evaluations on the IXI and BRATS datasets, using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the number of training parameters, we demonstrate the superior performance of our method compared to existing techniques. Furthermore, the proposed model can be adapted or modified to achieve super-resolution for both 2D and 3D MRI data.