Abstract
Motion artifacts are a frequent source of image degradation in the clinical application of MR imaging (MRI). Here we implement and validate an MRI motion-artifact correction method using a multiscale fully convolutional neural network. The network was trained to identify motion artifacts in axial T2-weighted spin-echo images of the brain. Using an extensive data augmentation scheme and a motion artifact simulation pipeline, we created a synthetic training dataset of 93,600 images based on only 16 artifact-free clinical MRI cases. A blinded reader study using a unique test dataset of 28 additional clinical MRI cases with real patient motion was conducted to evaluate the performance of the network. Application of the network resulted in notably improved image quality without the loss of morphologic information. For synthetic test data, the average reduction in mean squared error was 41.84%. The blinded reader study on the real-world test data resulted in significant reduction in mean artifact scores across all cases (P < .03). Retrospective correction of motion artifacts using a multiscale fully convolutional network is promising and may mitigate the substantial motion-related problems in the clinical MRI workflow.
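The abstract reports the synthetic-data result as an average reduction in mean squared error (MSE). As a minimal sketch of how such a percent-reduction metric can be computed (the paper's exact formula is not given here, so this is an assumed but standard definition: MSE of the corrected output versus MSE of the corrupted input, each measured against the artifact-free reference):

```python
import numpy as np

def mse_reduction(reference, corrupted, corrected):
    """Percent reduction in MSE achieved by artifact correction,
    relative to the uncorrected input. Illustrative metric only;
    not taken verbatim from the paper."""
    mse = lambda a, b: np.mean((np.asarray(a) - np.asarray(b)) ** 2)
    return 100.0 * (1.0 - mse(corrected, reference) / mse(corrupted, reference))
```

A value of 41.84% would mean the corrected images are, on average, that much closer to the reference in squared-error terms than the corrupted inputs.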
Highlights
BACKGROUND AND PURPOSE: Motion artifacts are a frequent source of image degradation in the clinical application of MR imaging (MRI)
Application of the network resulted in notably improved image quality without the loss of morphologic information
Patient motion during MRI examinations results in artifacts that are a frequent source of image degradation in clinical practice, reportedly impacting image quality in 10%–42% of examinations of the brain.[1,2]
Summary
An institutional review board–approved, retrospective, Health Insurance Portability and Accountability Act–compliant study was performed, and patient consent was waived. The fully convolutional network (FCN) was trained on a dataset with simulated artifacts introduced into in vivo clinical brain image data. Motion trajectories (ie, translation/rotation vectors as a function of scan time) were generated randomly to simulate the artifacts. Feature integration was realized using average unpooling and convolutional layers. This patch-based processing allowed input images of variable size. The network was first applied to the synthetic test dataset (with simulated artifacts), which allowed direct visual comparison with the artifact-free reference images. Because artifact-free reference images were not available for the 28 clinical motion cases, all artifact-corrupted input and artifact-corrected output images (962 in total) were rated section by section in a blinded reader study using the 0–4 qualitative scale described previously. A 1-sample t test was performed for each artifact score class at a significance level of .05, corresponding to critical values in the range of 1.65–1.75 for the different score classes.
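The core of the simulation pipeline is that rigid patient motion during acquisition corrupts individual k-space lines: a spatial shift of the object corresponds to a linear phase ramp in k-space (the Fourier shift theorem). A minimal sketch of this idea, assuming pure translational motion with one random shift per phase-encode line (rotation is omitted for brevity, and all function and parameter names here are illustrative, not the authors' actual pipeline):

```python
import numpy as np

def simulate_motion_artifact(image, max_shift=3.0, rng=None):
    """Corrupt a 2D image with simulated inter-view translational motion.

    Each k-space row (phase-encode step) is taken from a randomly
    shifted copy of the image, mimicking patient motion between views.
    Illustrative sketch only; the paper also randomizes rotations.
    """
    rng = np.random.default_rng(rng)
    ny, nx = image.shape
    # Random translation trajectory: one (dy, dx) shift per phase-encode line.
    shifts = rng.uniform(-max_shift, max_shift, size=(ny, 2))
    ky = np.fft.fftfreq(ny)[:, None]   # spatial frequencies along y (cycles/pixel)
    kx = np.fft.fftfreq(nx)[None, :]   # spatial frequencies along x
    clean_k = np.fft.fft2(image)
    kspace = np.zeros((ny, nx), dtype=complex)
    for line in range(ny):
        dy, dx = shifts[line]
        # Fourier shift theorem: a spatial shift is a linear phase ramp in k-space.
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        kspace[line, :] = (clean_k * phase)[line, :]
    return np.abs(np.fft.ifft2(kspace))
```

Pairing each corrupted image with its artifact-free original in this way yields the (input, target) pairs needed for supervised training of the FCN.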