Abstract

Magnetic resonance imaging (MRI) is non-invasive and essential for clinical diagnosis, but it suffers from long acquisition times, and accelerated acquisitions introduce aliasing artifacts. Accelerated imaging techniques can substantially reduce MRI scanning time, thereby decreasing patient anxiety and discomfort. Vision Transformer (ViT) based methods have greatly improved MRI reconstruction, but the computational and memory cost of their self-attention mechanism grows quadratically with image resolution, which limits their use on high-resolution images. In addition, current generative adversarial networks (GANs) for MRI reconstruction are difficult to train stably. To address these problems, we propose a Local Vision Transformer (LVT) based adversarial diffusion model (Diff-GAN) for accelerated MRI reconstruction. We employ a GAN as the reverse diffusion model, which enables large diffusion steps. In the forward diffusion module, we use a diffusion process to generate Gaussian-mixture-distributed noise, which mitigates the gradient-vanishing problem in GAN training. The generator leverages an LVT module with local self-attention, which captures high-quality local features and detailed information. We evaluate our method on four datasets (IXI, MICCAI 2013, MRNet, and FastMRI) and show that Diff-GAN outperforms several state-of-the-art GAN-based methods for MRI reconstruction.
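The local self-attention mentioned above restricts attention to small non-overlapping windows, so for a fixed window size the cost grows linearly with the number of pixels rather than quadratically with resolution. The following is a minimal sketch of generic windowed self-attention in PyTorch, not the authors' LVT implementation; the module name, window size, embedding dimension, and head count are illustrative assumptions.

```python
# Minimal windowed (local) self-attention sketch in PyTorch.
# Generic illustration of local self-attention, not the paper's LVT code;
# window_size, dim, and num_heads are illustrative assumptions.
import torch
import torch.nn as nn


class WindowSelfAttention(nn.Module):
    def __init__(self, dim=64, num_heads=4, window_size=8):
        super().__init__()
        self.window_size = window_size
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C) feature map; H and W must be divisible by window_size.
        B, H, W, C = x.shape
        s = self.window_size
        # Partition into non-overlapping s x s windows -> (B * num_windows, s*s, C).
        x = x.view(B, H // s, s, W // s, s, C)
        windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, s * s, C)
        # Self-attention within each window: cost is O(num_windows * (s*s)^2),
        # i.e. linear in image size for a fixed window instead of quadratic.
        h = self.norm(windows)
        h, _ = self.attn(h, h, h)
        windows = windows + h  # residual connection
        # Merge windows back into the (B, H, W, C) layout.
        x = windows.view(B, H // s, W // s, s, s, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return x


if __name__ == "__main__":
    feat = torch.randn(2, 32, 32, 64)   # toy feature map
    out = WindowSelfAttention()(feat)
    print(out.shape)                     # torch.Size([2, 32, 32, 64])
```

In practice, windowed attention of this kind is usually interleaved with some mechanism for cross-window information flow (e.g. shifted windows or convolutions); the sketch only shows the per-window attention that keeps the cost from scaling quadratically with resolution.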
