Purpose
This study introduces GraFMRI, a novel framework designed to address the challenges of reconstructing high-quality MRI images from undersampled k-space data. Traditional methods often suffer from noise amplification and loss of structural detail, leading to suboptimal image quality. GraFMRI leverages Graph Neural Networks (GNNs) to transform multi-modal MRI data (T1, T2, PD) into a graph-based representation, enabling the model to capture intricate spatial relationships and inter-modality dependencies.

Methods
The framework integrates Graph-Based Non-Local Means (NLM) Filtering for effective noise suppression and Adversarial Training to reduce artifacts. A dynamic attention mechanism enables the model to focus on key anatomical regions, even when fully sampled reference images are unavailable. GraFMRI was evaluated on the IXI and fastMRI datasets using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) as metrics for reconstruction quality.

Results
GraFMRI consistently outperforms traditional and self-supervised reconstruction techniques. Significant improvements in multi-modal fusion were observed, with better preservation of information across modalities. Noise suppression through NLM filtering and artifact reduction via adversarial training led to higher PSNR and SSIM scores across both datasets. The dynamic attention mechanism further enhanced the accuracy of the reconstructions by focusing on critical anatomical regions.

Conclusion
GraFMRI provides a scalable, robust solution for multi-modal MRI reconstruction, addressing noise and artifact challenges while enhancing diagnostic accuracy. Its ability to fuse information from different MRI modalities makes it adaptable to various clinical applications, improving the quality and reliability of reconstructed images.
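The abstract reports reconstruction quality in PSNR, which is defined from the mean squared error between a reference image and its reconstruction. As a minimal sketch of that metric (not the paper's own evaluation code), PSNR could be computed as follows; the toy images and noise level are illustrative assumptions:

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a random "reference" image and a noisy "reconstruction"
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
print(f"PSNR: {psnr(ref, rec):.1f} dB")
```

Higher PSNR indicates a reconstruction closer to the reference; SSIM (the abstract's second metric) instead compares local luminance, contrast, and structure, so the two capture complementary aspects of quality.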