Abstract

Purpose: The purpose of this study was to compare the performance of deep learning networks trained with complex-valued and magnitude images in suppressing aliasing artifacts in highly accelerated real-time cine MRI.

Methods: Two 3D U-Net models (Complex-Valued-Net and Magnitude-Net) were implemented to suppress aliasing artifacts in real-time cine images. ECG-segmented cine images (n = 503) generated from both complex k-space data and magnitude-only DICOM images were used to synthesize radial real-time cine MRI. Complex-Valued-Net and Magnitude-Net were trained with pairs of fully sampled and synthesized radial real-time cine images generated from highly undersampled (12-fold) complex k-space data and DICOM images, respectively. Real-time cine was prospectively acquired in 29 patients with a 12-fold accelerated free-breathing tiny-golden-angle radial sequence and reconstructed with both Complex-Valued-Net and Magnitude-Net. Cardiac function, left-ventricular (LV) structure, and subjective image quality [1 (non-diagnostic) to 5 (excellent)] were calculated from the Complex-Valued-Net– and Magnitude-Net–reconstructed real-time cine datasets and compared to those of ECG-segmented cine (reference).

Results: Free-breathing real-time cine reconstructed by both networks had high correlation (all R² > 0.7) and good agreement (all p > 0.05) with standard clinical ECG-segmented cine with respect to LV function and structural parameters.
Real-time cine reconstructed by Complex-Valued-Net had superior image quality compared to images from Magnitude-Net in terms of myocardial edge sharpness (Complex-Valued-Net = 3.5 ± 0.5; Magnitude-Net = 2.6 ± 0.5), temporal fidelity (Complex-Valued-Net = 3.1 ± 0.4; Magnitude-Net = 2.1 ± 0.4), and artifact suppression (Complex-Valued-Net = 3.1 ± 0.5; Magnitude-Net = 2.0 ± 0.0); all scores remained inferior to those of ECG-segmented cine (4.1 ± 1.4, 3.9 ± 1.0, and 4.0 ± 1.1, respectively).

Conclusion: Compared to Magnitude-Net, Complex-Valued-Net produced improved subjective image quality for reconstructed real-time cine images and showed no difference in quantitative measures of LV function and structure.
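The key methodological difference between the two networks is the input representation: Complex-Valued-Net retains phase by feeding the real and imaginary parts of the reconstructed image to the U-Net, while Magnitude-Net sees only the magnitude (as stored in DICOM). A minimal sketch of the two input encodings, using NumPy with hypothetical array shapes (not taken from the paper), illustrates why the magnitude representation is strictly less informative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reconstructed complex-valued cine frame (phase retained).
frame = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))

# Complex-Valued-Net-style input: stack real and imaginary parts as two
# channels so a real-valued 3D U-Net can process complex-valued data.
complex_input = np.stack([frame.real, frame.imag], axis=0)  # shape (2, 128, 128)

# Magnitude-Net-style input: magnitude only, as in DICOM images;
# the phase information is discarded.
magnitude_input = np.abs(frame)[np.newaxis]                 # shape (1, 128, 128)

# The magnitude is recoverable from the two-channel representation,
# but not vice versa: phase cannot be restored from a magnitude image.
recovered = np.sqrt(complex_input[0] ** 2 + complex_input[1] ** 2)
assert np.allclose(recovered, magnitude_input[0])
```

This one-way relationship is consistent with the study's finding that the phase-aware network suppresses aliasing more effectively.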

Highlights

  • Cardiovascular MR (CMR) is the clinical gold-standard imaging modality for evaluation of cardiac function and structure

  • This study compares the performance of deep learning approaches for reconstruction of highly accelerated real-time cine using synthesized training data generated from complex-valued multi-coil k-space data (Complex-Valued-Net) and real-valued DICOMs (Magnitude-Net)

  • The clinically relevant parameters of LV function and structure extracted from real-time cine reconstructed by both Complex-Valued-Net and Magnitude-Net were highly correlated and had excellent agreement with those of clinical breath-hold ECG-segmented cine


Introduction

Cardiovascular MR (CMR) is the clinical gold-standard imaging modality for evaluation of cardiac function and structure. Breath-hold ECG-segmented cine imaging using a balanced steady-state free-precession (bSSFP) readout allows for accurate and reproducible measurement of left-ventricular (LV) and right-ventricular (RV) function and volume [1,2,3]. In this technique, k-space is divided into different segments collected over consecutive cardiac cycles within a single breath-hold scan. ECG-segmented cine acquisition has limited spatial and temporal resolution, is sensitive to changes in heart rate, and requires repeated breath-holds [4,5,6]. Free-breathing real-time cine is advantageous because it does not require multiple breath-holds and is insensitive to heart rate variations. However, there is a need to further accelerate data collection for real-time cine MRI.
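To give a sense of what 12-fold acceleration means for a radial acquisition, the following back-of-the-envelope sketch compares the number of radial spokes required by the Nyquist criterion (about π/2 times the matrix size per frame) with the number actually acquired. The matrix size is an illustrative assumption, not a value from the paper:

```python
import math

# Illustrative in-plane matrix size (assumed for this sketch).
matrix_size = 192

# For radial sampling, fully satisfying Nyquist requires roughly
# pi/2 * N spokes per frame.
nyquist_spokes = math.ceil(math.pi / 2 * matrix_size)

# A 12-fold accelerated acquisition collects only a fraction of
# those spokes, producing the aliasing (streaking) artifacts that
# the networks are trained to suppress.
acceleration = 12
acquired_spokes = math.ceil(nyquist_spokes / acceleration)

print(nyquist_spokes, acquired_spokes)
```

With these assumed numbers, only a few dozen spokes are acquired per frame instead of roughly three hundred, which is why aggressive artifact suppression is required downstream.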


