Abstract

Quantitative and objective evaluation tools are essential for assessing the performance of machine learning (ML)-based magnetic resonance imaging (MRI) reconstruction methods. However, commonly used fidelity metrics, such as mean squared error (MSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), often fail to capture fundamental and clinically relevant aspects of MR image quality. To address this, we propose evaluating ML-based MRI reconstruction using digital image quality phantoms and automated evaluation methods. Our phantoms are modeled on the American College of Radiology (ACR) large physical phantom but are created directly in k-space to simulate their MR images, and they can vary in object size, signal-to-noise ratio, resolution, and image contrast. Our evaluation pipeline incorporates metrics for geometric accuracy, intensity uniformity, percentage ghosting, sharpness, signal-to-noise ratio, resolution, and low-contrast detectability. We demonstrate the utility of the proposed pipeline by assessing an example ML-based reconstruction model across various training and testing scenarios. The results indicate that training data acquired with a lower undersampling factor and with coils of larger anatomical coverage yields a better-performing model. The comprehensive and standardized pipeline introduced in this study can help facilitate a better understanding of model performance and guide the future development and advancement of ML-based reconstruction algorithms.
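As a rough illustration of the ideas summarized above, the sketch below generates a simple uniform disk phantom, adds complex Gaussian noise in k-space to approximate a target signal-to-noise ratio, and computes an ACR-style percentage-ghosting value from the reconstructed magnitude image. This is a minimal sketch under simplified assumptions; the phantom geometry, noise scaling, and ROI placements are hypothetical and are not the pipeline described in the paper.

```python
import numpy as np

def disk_phantom(n=256, radius=0.4):
    """Uniform circular phantom, loosely mimicking a uniform slice of the ACR phantom."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return (x**2 + y**2 <= radius**2).astype(float)

def simulate_kspace(img, target_snr=30.0, seed=0):
    """FFT the image to k-space and add complex Gaussian noise for an approximate image-domain SNR."""
    k = np.fft.fftshift(np.fft.fft2(img))
    rng = np.random.default_rng(seed)
    signal = img[img > 0].mean()
    noise_sigma = signal / target_snr                # desired per-component noise std in image space
    sigma_k = noise_sigma * np.sqrt(img.size)        # ifft2 divides by N, so scale k-space noise up by sqrt(N)
    noise = sigma_k * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
    return k + noise

def percent_ghosting(img, radius=0.4):
    """ACR-style percent ghosting from mean intensities in a central ROI and four background ROIs."""
    n = img.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    inside = x**2 + y**2 <= (0.5 * radius) ** 2      # central ROI well inside the phantom
    top    = (np.abs(x) < 0.1) & (y < -0.8)          # background strips outside the phantom
    bottom = (np.abs(x) < 0.1) & (y > 0.8)
    left   = (np.abs(y) < 0.1) & (x < -0.8)
    right  = (np.abs(y) < 0.1) & (x > 0.8)
    m = img[inside].mean()
    return 100.0 * abs((img[top].mean() + img[bottom].mean())
                       - (img[left].mean() + img[right].mean())) / (2.0 * m)

phantom = disk_phantom()
kspace = simulate_kspace(phantom, target_snr=30.0)
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print(f"percent ghosting: {percent_ghosting(recon):.3f}%")
```

In practice, such per-metric functions could be run over phantoms that vary in size, SNR, resolution, and contrast, before and after an ML reconstruction step, to compare image-quality behavior across training and testing scenarios.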
