Abstract

Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging is the gold standard for noninvasive myocardial tissue characterization but requires intravenous contrast agent administration. A contrast agent-free technology to replace LGE is highly desirable and would enable faster and cheaper CMR scans. A CMR virtual native enhancement (VNE) imaging technology was developed using artificial intelligence. The deep learning model for generating VNE uses multiple streams of convolutional neural networks to exploit and enhance the existing signals in native T1 maps (pixel-wise maps of tissue T1 relaxation times) and cine imaging of cardiac structure and function, presenting them as LGE-equivalent images. The VNE generator was trained using generative adversarial networks. This technology was first developed on CMR datasets from the multicenter Hypertrophic Cardiomyopathy Registry, using hypertrophic cardiomyopathy as an exemplar. The datasets were randomized into 2 independent groups for deep learning training and testing. The test data of VNE and LGE were scored and contoured by experienced human operators to assess image quality, visuospatial agreement, and myocardial lesion burden quantification. Image quality was compared using a nonparametric Wilcoxon test. Intra- and interobserver agreement was analyzed using intraclass correlation coefficients (ICC). Lesion quantification by VNE and LGE was compared using linear regression and ICC. A total of 1348 hypertrophic cardiomyopathy patients provided 4093 triplets of matched T1 maps, cines, and LGE datasets. After randomization and data quality control, 2695 datasets were used for VNE method development and 345 were used for independent testing. VNE had significantly better image quality than LGE, as assessed by 4 operators (n=345 datasets; P<0.001 [Wilcoxon test]). VNE revealed lesions characteristic of hypertrophic cardiomyopathy in high visuospatial agreement with LGE.
In 121 patients (n=326 datasets), VNE correlated with LGE in detecting and quantifying both hyperintensity myocardial lesions (r=0.77-0.79; ICC=0.77-0.87; P<0.001) and intermediate-intensity lesions (r=0.70-0.76; ICC=0.82-0.85; P<0.001). The native CMR images (cine plus T1 map) required for VNE can be acquired within 15 minutes and producing a VNE image takes less than 1 second. VNE is a new CMR technology that resembles conventional LGE but without the need for contrast administration. VNE achieved high agreement with LGE in the distribution and quantification of lesions, with significantly better image quality.
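The comparison methods named above (paired Wilcoxon test for image-quality scores, linear regression and ICC for lesion quantification) can be sketched with SciPy/NumPy. The data below are synthetic, for illustration only, and the ICC form used (two-way random, absolute agreement, ICC(2,1)) is an assumption: the abstract does not specify which ICC model was applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired lesion-burden measurements (e.g. % of myocardium)
# from the two methods ("LGE" and "VNE") on n cases -- illustration only.
n = 50
lge = rng.uniform(0, 30, n)
vne = lge + rng.normal(0, 3, n)

# Linear regression: correlation/agreement between the two methods
reg = stats.linregress(lge, vne)
print(f"r = {reg.rvalue:.2f}")

# Wilcoxon signed-rank test on paired ordinal image-quality scores (1-5)
q_lge = rng.integers(2, 5, n)
q_vne = np.clip(q_lge + rng.integers(0, 2, n), 1, 5)
w = stats.wilcoxon(q_lge, q_vne)
print(f"Wilcoxon P = {w.pvalue:.3g}")

def icc2_1(ratings):
    """Two-way random, absolute-agreement, single-measure ICC(2,1).

    `ratings` is an (n subjects x k raters/methods) array.
    """
    n, k = ratings.shape
    mean_r = ratings.mean(axis=1)
    mean_c = ratings.mean(axis=0)
    gm = ratings.mean()
    ss_r = k * ((mean_r - gm) ** 2).sum()          # between-subject
    ss_c = n * ((mean_c - gm) ** 2).sum()          # between-method
    ss_e = ((ratings - gm) ** 2).sum() - ss_r - ss_c
    ms_r = ss_r / (n - 1)
    ms_c = ss_c / (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

print(f"ICC(2,1) = {icc2_1(np.column_stack([lge, vne])):.2f}")
```

With real data, the regression and ICC would be computed per lesion class (hyperintensity and intermediate-intensity) as reported in the results above.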

Highlights

  • Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging is the gold standard for noninvasive myocardial tissue characterization but requires intravenous contrast agent administration

  • In 121 patients (n=326 datasets), virtual native enhancement (VNE) correlated with LGE in detecting and quantifying both hyperintensity myocardial lesions (r=0.77–0.79; intraclass correlation coefficients (ICC)=0.77–0.87; P<0.001) and intermediate-intensity lesions (r=0.70–0.76; ICC=0.82–0.85; P<0.001)

  • The native CMR images required for VNE can be acquired within 15 minutes and producing a VNE image takes less than 1 second


Methods

The VNE generator takes native T1 maps and cine frames as input; the cine frames provide additional wall motion information and more sharply defined myocardial borders. These images were input into a deep learning generator to derive a VNE image (Figure 1B). The 3 streams of feature maps produced by U-Nets are concatenated and fed into a further neural network block, which fuses the multimodal information and produces the final VNE image.
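The multi-stream-then-fuse design described above can be sketched in PyTorch. Everything below is an illustrative assumption rather than the published architecture: the toy `TinyUNet`, the channel counts, and the choice of three single-channel inputs (one T1 map and two cine frames) are placeholders, and GAN training is omitted entirely.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Greatly simplified stand-in for one U-Net stream (illustration only)."""
    def __init__(self, in_ch: int, feat: int = 8):
        super().__init__()
        # One downsampling step and one upsampling step, instead of a full U-Net.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.up(self.down(x))

class VNEGeneratorSketch(nn.Module):
    """Three parallel streams whose feature maps are concatenated and
    fused by a final convolutional block into one LGE-like image."""
    def __init__(self, feat: int = 8):
        super().__init__()
        self.streams = nn.ModuleList([TinyUNet(1, feat) for _ in range(3)])
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 1))  # single-channel VNE output

    def forward(self, t1, cine_a, cine_b):
        feats = [s(x) for s, x in zip(self.streams, (t1, cine_a, cine_b))]
        return self.fuse(torch.cat(feats, dim=1))

gen = VNEGeneratorSketch()
x = torch.randn(1, 1, 64, 64)   # dummy single-channel 64x64 input
vne = gen(x, x, x)
print(vne.shape)                 # torch.Size([1, 1, 64, 64])
```

In the GAN setting described in the abstract, a generator of this shape would be trained against a discriminator that compares its output to real LGE images; that adversarial loop is not shown here.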
