Abstract

Motion is one extrinsic source of imaging artifacts in MRI that can strongly deteriorate image quality and, thus, impair diagnostic accuracy. In addition to involuntary physiological motion such as respiration and cardiac motion, intentional and accidental patient movements can occur. Any impairment by motion artifacts can reduce the reliability and precision of the diagnosis, and a motion-free reacquisition can be time- and cost-intensive. Numerous motion correction strategies have been proposed to reduce or prevent motion artifacts. These methods have in common that they need to be applied during the actual measurement, with a priori knowledge of the expected motion type and appearance. Retrospective motion correction without any a priori knowledge remains challenging. We propose the use of deep learning frameworks to perform retrospective motion correction in a reference-free setting by learning from pairs of motion-free and motion-affected images. For this image-to-image translation problem, we propose and compare a variational autoencoder and a generative adversarial network. Feasibility, the influence of motion type, and the optimal architecture are investigated by blinded subjective image quality assessment and by quantitative image similarity metrics. We observed that generative adversarial network-based motion correction is feasible, producing near-realistic motion-free images, as confirmed by blinded subjective image quality assessment. Generative adversarial network-based motion correction accordingly resulted in images with high evaluation metrics (normalized root mean squared error < 0.08, structural similarity index > 0.8, normalized mutual information > 0.9). Deep learning-based retrospective correction of motion artifacts is thus feasible, resulting in near-realistic motion-free images. However, the image translation task can alter or hide anatomical features and, therefore, the clinical applicability of this technique has to be evaluated in future studies.
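
Since the abstract reports its quantitative results as NRMSE, SSIM, and NMI thresholds, the sketch below illustrates how such an evaluation could be computed for a corrected/reference image pair. This is a minimal illustration, not the paper's implementation: the function names and the histogram-based NMI estimator with its [0, 1] normalization are assumptions (the abstract does not state which NMI variant was used); NRMSE and SSIM are taken from scikit-image.

```python
import numpy as np
from skimage.metrics import normalized_root_mse, structural_similarity


def normalized_mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Histogram-based NMI estimate, normalized to [0, 1] (assumed variant)."""
    # Joint intensity histogram -> joint and marginal probability distributions.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    hx, hy, hxy = entropy(px), entropy(py), entropy(pxy)
    # Symmetric normalization: 2 * I(A;B) / (H(A) + H(B)); identical images give 1.0.
    return 2.0 * (hx + hy - hxy) / (hx + hy)


def evaluate_correction(corrected: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a motion-corrected image against its motion-free reference."""
    data_range = reference.max() - reference.min()
    return {
        "nrmse": normalized_root_mse(reference, corrected),           # lower is better; < 0.08 reported
        "ssim": structural_similarity(reference, corrected,
                                      data_range=data_range),         # higher is better; > 0.8 reported
        "nmi": normalized_mutual_information(reference, corrected),   # higher is better; > 0.9 reported
    }
```

With a motion-free reference image and the network's output as float arrays of the same shape, `evaluate_correction(output, reference)` returns the three scores against which the reported thresholds can be checked.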
