Abstract
Magnetic resonance imaging (MRI) is a well-established technique for clinical diagnosis and for quantifying the effect of disease-modifying therapies. Whilst the exquisite tissue contrast sensitivity of MRI is indisputable, the modality is susceptible to motion artifacts that occur during acquisition. In this work, we propose a 3D deep learning network based on conditional generative adversarial networks (cGANs) for retrospective reduction of brain MRI motion artifacts in T1-weighted (T1-w) images. To create ground-truth (motion-free) images for training, we selected a large artifact-free dataset of 1200 subjects from the Human Connectome Project (HCP) and simulated motion artifacts to produce motion-corrupted data. To evaluate model performance and generalisability, we tested on 300 unseen motion-corrupted images. Model performance was compared with a conventional method (Gaussian smoothing) as well as two state-of-the-art models: a 3D Generic U-net and MoCoNet. Normalized mean squared error (NMSE), mean structural similarity (SSIM), and peak signal-to-noise ratio (PSNR) were used as evaluation metrics. The proposed model outperformed all others, decreasing NMSE from 0.042 (ground truth vs. motion-simulated image) to 0.032 (ground truth vs. model output), improving SSIM from 0.964 to 0.980, and increasing PSNR from 33.43 to 34.23. This promising performance suggests the model's potential for use in clinical settings to enhance the overall visual quality of 3D T1-w brain scans after image acquisition.
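The training pairs were created by simulating motion artifacts on clean HCP volumes. The abstract does not describe the authors' simulation method, so the following is only a hedged sketch of one common approach: rigid translation during acquisition manifests in k-space as a linear phase ramp on the lines acquired after the movement. The function name `simulate_motion`, the choice of phase-encode axis, and all parameters below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of k-space motion simulation (an assumption; the paper's
# actual simulation pipeline is not specified in the abstract).
import numpy as np

def simulate_motion(volume: np.ndarray, corrupt_frac: float = 0.3,
                    max_shift: float = 3.0, seed: int = 0) -> np.ndarray:
    """Corrupt a fraction of phase-encode lines with a random rigid shift."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fftn(volume))
    nx, ny, nz = volume.shape

    # Choose phase-encode lines to corrupt (axis 1 here, by assumption).
    n_corrupt = int(corrupt_frac * ny)
    lines = rng.choice(ny, size=n_corrupt, replace=False)

    # A random subvoxel translation along x becomes a linear phase ramp
    # in k-space (Fourier shift theorem).
    shift = rng.uniform(-max_shift, max_shift)
    kx = np.fft.fftshift(np.fft.fftfreq(nx))   # spatial frequency, cycles/voxel
    ramp = np.exp(-2j * np.pi * kx * shift)    # shape (nx,)

    kspace[:, lines, :] *= ramp[:, None, None]
    corrupted = np.abs(np.fft.ifftn(np.fft.ifftshift(kspace)))
    return corrupted.astype(volume.dtype)
```

Because only a subset of k-space lines carries the inconsistent phase, the reconstructed magnitude image exhibits the ghosting and ringing characteristic of in-scan subject motion.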
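For evaluation, the abstract names NMSE, mean SSIM, and PSNR, each computed against the motion-free ground truth. Below is a minimal sketch of how these metrics can be computed for a 3D volume, assuming NumPy and scikit-image; the helper names `nmse` and `evaluate_volume` are illustrative, not taken from the paper.

```python
# Sketch of the three reported metrics, assuming scikit-image is available.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def nmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Normalized mean squared error: ||ref - test||^2 / ||ref||^2."""
    return float(np.sum((reference - test) ** 2) / np.sum(reference ** 2))

def evaluate_volume(reference: np.ndarray, test: np.ndarray) -> dict:
    """Compare a motion-corrupted input or model output to ground truth."""
    data_range = float(reference.max() - reference.min())
    return {
        "NMSE": nmse(reference, test),
        "SSIM": structural_similarity(reference, test, data_range=data_range),
        "PSNR": peak_signal_noise_ratio(reference, test, data_range=data_range),
    }
```

In the reported results, each metric is computed twice per test volume: once for ground truth vs. the motion-corrupted input and once for ground truth vs. the network output, so the NMSE drop from 0.042 to 0.032 quantifies the artifact reduction attributable to the model.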