Abstract

Background: Deep learning-based magnetic resonance imaging (MRI) methods in most cases require a separate dataset with thousands of images for each anatomical site to train the network model. This paper proposes a miniature U-net method for k-space-based parallel MRI in which the network model is trained individually for each scan using scan-specific autocalibrating signal data.

Methods: The original U-net was tailored with fewer layers and channels, and the network was trained on the autocalibrating signal data with a mixing loss function combining a magnitude loss and a phase loss. The performance of the proposed method was evaluated on both phantom and in vivo datasets and compared with scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) and generalized autocalibrating partially parallel acquisitions (GRAPPA).

Results: The proposed method alleviates aliasing artifacts and reduces noise at an acceleration factor of four for both phantom and in vivo data. Compared with RAKI and GRAPPA, the proposed method improves the structural similarity index measure by between 0.02 and 0.05 and the peak signal-to-noise ratio (PSNR) by between 0.1 and 3.

Conclusions: The proposed method introduces a miniature U-net to reconstruct the missing k-space data, which can provide an optimal trade-off between network performance and the number of training samples required. Experimental results indicate that the proposed method can improve image quality compared with existing deep learning-based k-space parallel magnetic resonance imaging methods.
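The mixing loss described in the Methods could combine a magnitude term and a phase term over complex-valued k-space data. The sketch below is a minimal illustration only: the exact loss definitions and the weighting factor `alpha` are assumptions, not taken from the paper.

```python
import numpy as np

def mixing_loss(pred, target, alpha=0.5):
    """Hedged sketch of a magnitude + phase mixing loss for complex k-space.

    pred, target: complex-valued NumPy arrays (predicted / reference k-space).
    alpha: hypothetical weight between the two terms (not from the paper).
    """
    # Magnitude term: mean squared error between the magnitudes.
    mag_loss = np.mean((np.abs(pred) - np.abs(target)) ** 2)
    # Phase term: squared angular difference, wrapped into (-pi, pi]
    # by taking the angle of pred * conj(target).
    phase_diff = np.angle(pred * np.conj(target))
    phase_loss = np.mean(phase_diff ** 2)
    # Weighted sum of the two terms.
    return alpha * mag_loss + (1.0 - alpha) * phase_loss
```

For identical inputs both terms vanish, so the loss is zero; any magnitude or phase mismatch increases it.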
