Abstract

In magnetic resonance imaging (MRI) examinations, there is a trade-off among acquisition time, resolution, and signal-to-noise ratio (SNR). High-resolution images are expected to improve the detection of small lesions, but ensuring a high SNR requires a longer imaging time. Because SNR scales with voxel volume and with the square root of the number of signal averages, reducing the number of averages to shorten the imaging time requires an increase in slice thickness and a coarsening of the in-plane resolution to maintain an adequate SNR. Combinations of acceleration and deep-learning denoising have been reported previously; however, although useful as scanner-integrated noise-reduction techniques, they are vendor-specific and cannot be applied generally. We studied the effects of recently developed general-purpose, image-based noise-reduction software on MRI by measuring SNR together with contrast, resolution, and the noise power spectrum (NPS). The NPS depended on the processing mode, whereas contrast was unaffected. Regarding resolution, edge information was retained and was better preserved by iNoir 3D than by iNoir 2D. However, as the strength of the noise-reduction processing increased, edge slopes in low-contrast regions were smoothed, giving a visually blurred impression.
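For reference, the NPS reported in the abstract is conventionally estimated from noise-only regions (e.g., uniform-phantom or subtraction images) by averaging the squared magnitude of the 2D Fourier transform of mean-subtracted ROIs. The sketch below is not the authors' code; the ROI extraction and pixel spacing are assumptions for illustration.

```python
import numpy as np

def estimate_nps_2d(noise_rois, dx, dy):
    """Estimate a 2D noise power spectrum (NPS) from noise-only ROIs.

    noise_rois : list of equally sized 2D arrays taken from uniform
                 (signal-free or subtraction) regions of the image
    dx, dy     : pixel spacing in mm along the two in-plane axes
    Returns the ensemble-averaged 2D NPS (units: signal^2 * mm^2).
    """
    spectra = []
    for roi in noise_rois:
        ny, nx = roi.shape
        detrended = roi - roi.mean()               # remove the DC (mean) component
        f = np.fft.fftshift(np.fft.fft2(detrended))
        # Standard normalization: |FFT|^2 scaled by pixel area over ROI size
        spectra.append(np.abs(f) ** 2 * (dx * dy) / (nx * ny))
    return np.mean(spectra, axis=0)                # average over all ROIs

# Hypothetical usage: 64x64 ROIs cut from a denoised uniform-phantom image
# rois = [img[r:r + 64, c:c + 64] for (r, c) in roi_corners]
# nps = estimate_nps_2d(rois, dx=0.5, dy=0.5)
```

Comparing such spectra before and after denoising, and between processing modes, would expose the mode-dependent shaping of the noise texture that the abstract describes.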
