Under-sampling in diffusion-weighted imaging (DWI) shortens the scan time, which helps reduce off-resonance effects, geometric distortions, and susceptibility artifacts; however, it also introduces under-sampling artifacts. In this paper, a deep-learning-based diffusion-weighted MR image (DWI-MR) reconstruction (DWI U-Net) is proposed to recover artifact-free DW images from variable-density, highly under-sampled k-space data. Additionally, different optimizers, i.e., RMSProp, Adam, Adagrad, and Adadelta, have been investigated to choose the best optimizer for DWI U-Net. The reconstruction results are compared with conventional Compressed Sensing (CS) reconstruction. The quality of the recovered images is assessed using mean artifact power (AP), mean root mean square error (RMSE), mean structural similarity index measure (SSIM), and mean apparent diffusion coefficient (ADC). In our experiments, the proposed method provides up to 61.1%, 60.0%, 30.4%, and 28.7% improvements in the mean AP of the reconstructed images with the RMSProp, Adam, Adagrad, and Adadelta optimizers, respectively, as compared to conventional CS at an acceleration factor of 6 (AF = 6). In terms of mean SSIM, DWI U-Net with the RMSProp, Adam, Adagrad, and Adadelta optimizers shows improvements of 13.6%, 10.0%, 8.7%, and 8.74%, respectively, with respect to conventional CS at AF = 6. Likewise, the proposed technique shows 51.4%, 29.5%, 24.04%, and 18.0% improvements in mean RMSE using the RMSProp, Adam, Adagrad, and Adadelta optimizers, respectively, relative to conventional CS at AF = 6. These results confirm that DWI U-Net outperforms conventional CS reconstruction and that, among the optimizers compared, RMSProp provides the best results.
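As a minimal illustration of the image-quality metrics named above, the sketch below computes RMSE and artifact power (AP) between a reference image and a reconstruction. The AP formula used here (residual energy normalized by reference energy) is a common definition in the MR reconstruction literature and is an assumption, since the abstract does not give the exact expressions; the toy arrays are likewise hypothetical.

```python
import numpy as np

def artifact_power(ref, recon):
    """Artifact power: residual energy over reference energy
    (assumed definition; the paper may normalize differently)."""
    return np.sum(np.abs(ref - recon) ** 2) / np.sum(np.abs(ref) ** 2)

def rmse(ref, recon):
    """Root mean square error between magnitude images."""
    return np.sqrt(np.mean((np.abs(ref) - np.abs(recon)) ** 2))

# Toy example: a reconstruction with a uniform 0.1 offset from the reference
ref = np.ones((4, 4))
recon = ref + 0.1

print(rmse(ref, recon))            # 0.1
print(artifact_power(ref, recon))  # 0.01
```

Lower values of both metrics indicate a reconstruction closer to the fully sampled reference, which is how the percentage improvements over CS in the abstract should be read.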