Abstract
Echo planar imaging (EPI) is a fast and non-invasive magnetic resonance imaging technique that supports data acquisition at high spatial and temporal resolutions. However, susceptibility artifacts, which cause misalignment with the underlying structural image, are unavoidable distortions in EPI. Traditional susceptibility artifact correction (SAC) methods estimate the displacement field by optimizing an objective function that involves one or more pairs of reversed phase-encoding (PE) images. The estimated displacement field is then used to unwarp the distorted images and produce the corrected images. Since this conventional approach is time-consuming, we propose an end-to-end deep learning technique, named S-Net, to correct the susceptibility artifacts in the reversed-PE image pair. The proposed S-Net consists of two components: (i) a convolutional neural network to map a reversed-PE image pair to the displacement field; and (ii) a spatial transform unit to unwarp the input images and produce the corrected images. The S-Net is trained using a set of reversed-PE image pairs and an unsupervised loss function, without ground-truth data. For a new pair of reversed-PE images, the displacement field and corrected images are obtained simultaneously by evaluating the trained S-Net directly. Evaluations on three different datasets demonstrate that S-Net can correct the susceptibility artifacts in the reversed-PE images. Compared with two state-of-the-art SAC methods (TOPUP and TISAC), the proposed S-Net runs significantly faster: 20 times faster than TISAC and 369 times faster than TOPUP, while achieving similar correction accuracy. Consequently, S-Net accelerates medical image processing pipelines and makes real-time correction on MRI scanners feasible. Our proposed technique also opens up a new direction in learning-based SAC.
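To make the two components concrete, the following is a minimal one-dimensional sketch of the reversed-PE correction idea the abstract describes: a spatial transform that unwarps an image along the phase-encoding axis by sampling at displaced coordinates, and an unsupervised loss that compares the pair unwarped with opposite-signed displacements plus a smoothness penalty. This is an illustrative assumption, not the paper's actual implementation; the function names (`unwarp`, `sac_loss`), the linear interpolation scheme, and the smoothness weight `lam` are all hypothetical, and the real S-Net operates on 3D volumes with a CNN-predicted field.

```python
def unwarp(image, disp):
    """Spatial transform unit (1D sketch): resample `image` at
    positions i + disp[i] along the phase-encoding axis, using
    linear interpolation with edge clamping."""
    n = len(image)
    out = []
    for i in range(n):
        x = max(0.0, min(n - 1, i + disp[i]))  # clamp to image bounds
        lo = int(x)                            # lower neighbor index
        hi = min(lo + 1, n - 1)                # upper neighbor index
        w = x - lo                             # interpolation weight
        out.append((1 - w) * image[lo] + w * image[hi])
    return out


def sac_loss(i_plus, i_minus, disp, lam=0.1):
    """Unsupervised loss (sketch): the forward-PE image is unwarped
    with +disp, the reversed-PE image with -disp; a good field makes
    the two unwarped images agree. A squared-difference smoothness
    term regularizes the field. `lam` is a hypothetical weight."""
    u_p = unwarp(i_plus, disp)
    u_m = unwarp(i_minus, [-v for v in disp])
    sim = sum((a - b) ** 2 for a, b in zip(u_p, u_m)) / len(disp)
    smooth = sum((disp[i + 1] - disp[i]) ** 2 for i in range(len(disp) - 1))
    return sim + lam * smooth
```

In the actual method, `disp` would be the CNN's output and this loss would be minimized over the network weights by gradient descent, so that a single forward pass yields both the displacement field and the corrected images.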