Abstract

Cardiac cine magnetic resonance imaging (MRI) reconstruction is challenging due to the trade-off between spatial and temporal resolution. Temporal correlation in cardiac cine MRI is informative and vital for understanding cardiac motion. Exploiting temporal correlations during cine reconstruction is crucial to resolve aliasing artifacts and preserve cardiac motion patterns. However, existing methods have two shortcomings: (1) they compute pairwise correlations along the spatial and temporal dimensions simultaneously, ignoring that learning spatial contextual information first benefits temporal modeling; and (2) most studies neglect the local cardiac region, so reconstruction accuracy suffers from the relatively large field of view. To address these problems, we propose a region-focused, multi-view, transformer-based generative adversarial network for cardiac cine MRI reconstruction. The proposed transformer divides consecutive cardiac frames into multiple views for cross-view feature extraction, establishing long-distance dependencies among features and effectively learning spatio-temporal information. We further design a cross-view attention mechanism for spatio-temporal information fusion, ensuring that the different spatio-temporal information in each view interacts and capturing more temporal correlations of cardiac motion. In addition, we introduce a cardiac region detection loss to improve the reconstruction quality of the cardiac region. Experimental results demonstrate that our method outperforms state-of-the-art methods; in particular, at an acceleration factor as high as 10×, it reconstructs images with better accuracy and perceptual quality.
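The cross-view attention described above can be illustrated with a minimal sketch: queries are drawn from one view of the cardiac frames while keys and values come from another view, so features of the two views interact through standard scaled dot-product attention. This is an assumption-laden illustration, not the paper's implementation; the random projection matrices stand in for learned weights, and the token counts and dimensions are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(q_view, kv_view, d_k):
    """Queries from one view attend to keys/values from another view,
    fusing spatio-temporal features across views (illustrative sketch)."""
    rng = np.random.default_rng(0)
    # Hypothetical projections; in a real model these are learned weights.
    W_q = rng.standard_normal((q_view.shape[-1], d_k))
    W_k = rng.standard_normal((kv_view.shape[-1], d_k))
    W_v = rng.standard_normal((kv_view.shape[-1], d_k))
    Q, K, V = q_view @ W_q, kv_view @ W_k, kv_view @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (tokens_q, tokens_kv)
    return attn @ V                         # fused features for the query view

# Two "views", e.g. token sequences from different groups of cardiac frames
view_a = np.random.default_rng(1).standard_normal((16, 32))  # 16 tokens, dim 32
view_b = np.random.default_rng(2).standard_normal((16, 32))
fused = cross_view_attention(view_a, view_b, d_k=32)
print(fused.shape)  # (16, 32)
```

Applying the same operation symmetrically (each view attending to the other) is one plausible way to realize the bidirectional interaction the abstract describes.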
