Abstract

Funding Acknowledgements: Type of funding sources: None.

Background
Real-time cine imaging does not require breath-holding and is a robust cine imaging technique in the presence of irregular heartbeats. It is a good alternative to the conventional breath-hold, retro-gated cine, offering simplified acquisition and improved patient comfort. Real-time acquisition is achieved with a single-shot bSSFP readout without retro-gating. To maintain good temporal and spatial resolution, higher acceleration (e.g. >4x parallel imaging) is required. As a result, real-time cine images suffer from reduced signal-to-noise ratio (SNR), which limits their clinical acceptance.

Purpose
We developed a novel deep learning model architecture, the Convolutional Neural Network Transformer (CNNT), to improve the quality of real-time cine at 4x, 5x and 6x acceleration.

Method
Convolutional Neural Networks (CNNs) are widely used in CMR research to process cardiac images. Cardiac images are often acquired as a time series with strong inter-phase correlation. We combined the CNN with the more recent transformer model to develop a novel CNNT architecture. It takes the entire 2D+T time series as input and retains the advantages of CNNs, namely efficient computation and spatial invariance. It further inherits the advantages of the transformer's attention layers and is able to efficiently exploit the temporal correlation within a time series. A CNNT model was developed to improve the SNR of real-time cine imaging. N=10 patients were scanned at a heart center with 4x, 5x and 6x acceleration. Typical imaging parameters were: FOV 360×270 mm², flip angle 50°, acquired matrix size 160×90 for R=4 and 192×108 for R=5 and 6, temporal resolution 40 ms for R=4, 42 ms for R=5 and 35 ms for R=6. The real-time images went through a TGRAPPA reconstruction [1] followed by the CNNT model. The SNR of the TGRAPPA reconstruction was measured in SNR units [2]. The Monte-Carlo pseudo-replica test was used to measure SNR for the CNNT model. For every cine series, two phases were selected, at end-systole and end-diastole. For every selected image, two regions of interest were drawn, one in the myocardium and one in the LV blood pool. The CNNT model was deployed inline on the MR scanner using the Gadgetron InlineAI [3].

Results
Figure 1 shows real-time cine images for the three accelerations, reconstructed with TGRAPPA and with CNNT. The parallel imaging TGRAPPA reconstruction suffers a significant SNR loss from the elevated g-factor and the smaller amount of acquired data. The deep learning CNNT model recovered SNR even at the very high 6x acceleration, without an observed loss of boundary sharpness. Table 1 lists the SNR measurements. The TGRAPPA SNR decreased ∼4x from R=4 to R=6 for both blood and myocardium. For the blood, CNNT increased the SNR by 170%, 335% and 371% at R=4, 5 and 6, respectively. For the myocardium, the SNR increases were 335%, 634% and 828%.

Conclusion
We developed a Convolutional Neural Network Transformer model to recover the SNR of real-time cine imaging at higher acceleration.
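
The abstract does not detail the CNNT layer configuration. The following is a minimal PyTorch sketch of the general idea described in the Method: per-frame convolutions for spatial processing combined with self-attention over the temporal axis of a 2D+T cine series. All class names, layer sizes and the placement of the attention layers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CNN-Transformer (CNNT) block for a 2D+T cine series.
# Layer sizes, names and structure are assumptions for illustration only.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Self-attention over the time axis, applied independently at each pixel."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) -> one length-T token sequence per spatial location
        b, t, c, h, w = x.shape
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)            # residual + layer norm
        return tokens.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)


class CNNTBlock(nn.Module):
    """Per-frame 2D convolutions (spatial) followed by temporal attention."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.GELU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.GELU(),
        )
        self.temporal = TemporalAttention(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        y = self.conv(x.reshape(b * t, c, h, w)).reshape(b, t, -1, h, w)
        return self.temporal(y)


# Toy usage: a 2D+T series of 30 cardiac phases, 128x128 pixels.
# A full denoising model would project the features back to one image channel.
model = nn.Sequential(CNNTBlock(1, 32), CNNTBlock(32, 32))
cine = torch.randn(1, 30, 1, 128, 128)
features = model(cine)            # (1, 30, 32, 128, 128)
```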
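The Monte-Carlo pseudo-replica SNR measurement described in the Method can be sketched as below. Here `reconstruct` is a placeholder for the full TGRAPPA + CNNT inference chain, and the noise scaling (pre-whitened, unit-variance complex noise added to k-space) and the number of replicas are assumptions.

```python
# Sketch of a Monte-Carlo pseudo-replica SNR estimate: add synthetic complex
# noise to k-space, reconstruct repeatedly, and use the pixel-wise spread of
# the replicas as the noise map. `reconstruct` and `roi_mask` are placeholders.
import numpy as np


def pseudo_replica_snr(kspace, reconstruct, roi_mask, n_replicas=100, noise_sigma=1.0):
    """Estimate mean SNR in an ROI by reconstructing noise-perturbed k-space."""
    rng = np.random.default_rng(0)
    reference = reconstruct(kspace)                  # reconstruction of the acquired data
    replicas = []
    for _ in range(n_replicas):
        noise = noise_sigma * (rng.standard_normal(kspace.shape)
                               + 1j * rng.standard_normal(kspace.shape)) / np.sqrt(2)
        replicas.append(reconstruct(kspace + noise))
    noise_std = np.std(np.abs(np.stack(replicas)), axis=0)   # per-pixel noise map
    snr_map = np.abs(reference) / (noise_std + 1e-12)
    return snr_map[roi_mask].mean()                  # mean SNR inside the ROI
```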
