Deep learning offers a generalizable solution for motion correction that requires no pulse sequence modifications or additional hardware, but previous networks have all been applied to coil-combined data. Multichannel MRI data provide a degree of spatial encoding that may be useful for motion correction. We hypothesized that applying deep-learning-based motion correction prior to coil combination would improve results. A conditional generative adversarial network was trained using simulated rigid motion artifacts in brain images acquired at multiple sites with multiple contrasts (not limited to healthy subjects). We compared the performance of deep-learning-based motion correction on individual channel images (single-channel model) with that performed after coil combination (channel-combined model). We also investigated simultaneous motion correction of all channel data from an image volume (multichannel model). The single-channel model significantly (p < 0.0001) improved mean absolute error, with an average 50.9% improvement over the uncorrected images. This was significantly (p < 0.0001) better than the 36.3% improvement achieved by the channel-combined model (the conventional approach). The multichannel model provided no significant improvement in quantitative measures of image quality compared with the uncorrected images. Results were independent of the presence of pathology and generalized to a new center unseen during training. Performing motion correction on single-channel images prior to coil combination improved performance compared with conventional deep-learning-based motion correction. Improved deep learning methods for retrospective correction of motion-affected MR images could reduce the need for repeat scans if applied in a clinical setting.
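The abstract mentions training on simulated rigid motion artifacts. As a minimal sketch of how such artifacts are commonly simulated (this is an illustrative assumption, not the paper's published pipeline), one can replace a random subset of k-space phase-encode lines with lines from a translated copy of the image, mimicking subject movement partway through acquisition; applied per coil channel, this yields the single-channel training inputs the abstract describes. The function name and parameters below are hypothetical.

```python
import numpy as np


def simulate_rigid_motion(image, shift_px=3.0, corrupted_fraction=0.3, seed=0):
    """Simulate translational motion by mixing k-space lines from a shifted copy.

    Hypothetical illustration: a random subset of phase-encode lines is
    replaced with lines from a translated version of the image, mimicking
    a subject movement partway through the acquisition of k-space.
    """
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    # k-space of the original (motion-free) image
    k_still = np.fft.fftshift(np.fft.fft2(image))
    # Translation along the phase-encode (y) axis via the Fourier shift theorem
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    k_moved = k_still * np.exp(-2j * np.pi * ky * shift_px)
    # Corrupt a random subset of phase-encode lines with the "moved" data
    lines = rng.choice(ny, size=int(corrupted_fraction * ny), replace=False)
    k_corrupt = k_still.copy()
    k_corrupt[lines, :] = k_moved[lines, :]
    # Reconstruct the motion-corrupted magnitude image
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))
```

For multichannel data, the same corruption (same lines, same shift) would be applied to each coil's k-space so that the artifact is consistent across channels before any coil combination.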