Abstract

Reducing the radiation dose in medical computed tomography (CT) increases image noise, which can adversely affect radiologists' judgment. Many efforts have been devoted to denoising low-dose CT (LDCT) images; however, denoised medical images often lose important lesion edge information, which may compromise clinical diagnosis. A denoising neural network should therefore retain detailed features, and making the network more anthropomorphic by simulating the attention mechanism of human observation, a valuable feature of the human thinking process, is one way to achieve this. Based on the U-network (U-Net) and a multi-attention mechanism, a novel denoising method for medical CT images is proposed in this study. To capture different kinds of feature information in CT images, three attention modules are introduced. The local attention module localizes the surrounding information of the feature map and computes each pixel from the context extracted from the feature map. The multi-feature channel attention module automatically learns and extracts features, suppresses invalid information, and assigns a different weight to each channel of the feature map according to the task. The hierarchical attention module allows the deep neural network to extract a large amount of feature information. This study also introduces an enhanced learning module that learns and retains image detail by stacking convolution layers, batch normalization (BN) layers, and activation layers to increase network depth. Experimental studies and comparisons with state-of-the-art networks demonstrate that the developed method effectively removes noise from CT images and improves image quality in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Our method achieved a PSNR of 34.7329 and an SSIM of 0.9293 for σ = 10 on the QIN_LUNG_CT dataset, and a PSNR of 28.9163 and an SSIM of 0.8602 on the Mayo Clinic LDCT Grand Challenge dataset.
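The channel attention described above can be illustrated with a minimal NumPy sketch in the style of squeeze-and-excitation: each channel is pooled to a single statistic, a small two-layer mapping produces a per-channel weight in (0, 1), and the feature map is rescaled channel-wise. The function name `channel_attention` and the weight matrices `w1`/`w2` are illustrative assumptions; the paper's exact multi-feature channel attention module may differ in detail.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Reweight each channel of a feature map x with learned per-channel gates.

    x  : feature map of shape (C, H, W)
    w1 : reduction weights of shape (C // r, C)   (illustrative)
    w2 : expansion weights of shape (C, C // r)   (illustrative)
    """
    s = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s, 0.0)          # excitation: FC + ReLU -> (C // r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # FC + sigmoid -> per-channel weights in (0, 1)
    return x * a[:, None, None]          # scale each channel by its weight

# Example: 4 channels of a 2x2 feature map, reduction ratio r = 2
x = np.ones((4, 2, 2))
w1 = np.ones((2, 4)) * 0.25
w2 = np.ones((4, 2)) * 0.5
y = channel_attention(x, w1, w2)         # same shape as x, channels rescaled
```

Because the gate is a sigmoid, each channel is attenuated rather than amplified, which is how invalid feature channels can be suppressed while informative ones pass through nearly unchanged.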
