Abstract

An effective method for achieving low-dose CT is to keep the number of projection angles constant while reducing the radiation dose at each angle. However, this leads to high-intensity noise in the reconstructed image, adversely affecting subsequent image processing, analysis, and diagnosis. This paper proposes a novel Channel Graph Perception based U-shaped Transformer (CGP-Uformer) network, aiming to achieve high-performance denoising of low-dose CT images. The network consists of convolutional feed-forward Transformer (ConvF-Transformer) blocks, a channel graph perception block (CGPB), and spatial cross-attention (SC-Attention) blocks. The ConvF-Transformer blocks enhance feature representation and information transmission through a CNN-based feed-forward network. The CGPB introduces a Graph Convolutional Network (GCN) for channel-to-channel feature extraction, enabling information to propagate and be exchanged across distinct channels. The SC-Attention blocks reduce the semantic difference in feature fusion between the encoder and decoder by computing spatial cross-attention. Experiments on the 2016 NIH AAPM-Mayo LDCT challenge dataset show that CGP-Uformer achieves a peak signal-to-noise ratio of 35.56 dB and a structural similarity of 0.9221. Compared with four other representative denoising networks, the proposed network demonstrates superior denoising performance and better preservation of image details.
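
To make the channel-graph idea concrete, the following is a minimal, illustrative sketch of how a channel graph perception block could be realized: each feature channel is treated as a graph node, an adjacency matrix is built from pairwise channel similarity, and a single GCN layer propagates information between channels before gating the feature map. The class name, pooled descriptor size, similarity-based adjacency, and residual gating are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelGraphPerceptionBlock(nn.Module):
    """Hypothetical sketch of a channel-graph perception block.

    Each channel of the feature map becomes one graph node; a
    similarity-based adjacency matrix routes information between
    channels through one GCN layer (A X W), and the result gates the
    input feature map. Sizes and gating are illustrative assumptions.
    """

    def __init__(self, descriptor_size: int = 4):
        super().__init__()
        d = descriptor_size * descriptor_size
        self.pool = nn.AdaptiveAvgPool2d(descriptor_size)  # node descriptor per channel
        self.gcn = nn.Linear(d, d, bias=False)              # GCN feature transform W
        self.gate = nn.Linear(d, 1)                          # project nodes to channel gates

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the encoder/decoder stream.
        b, c, _, _ = x.shape
        nodes = self.pool(x).flatten(2)                      # (B, C, d): one node per channel
        # Dense adjacency from normalized channel-descriptor similarity.
        nf = F.normalize(nodes, dim=-1)
        adj = F.softmax(nf @ nf.transpose(1, 2), dim=-1)     # (B, C, C)
        # One graph convolution: aggregate neighbor channels, then transform.
        out = F.relu(self.gcn(adj @ nodes))                  # (B, C, d)
        # Channel-wise residual gating of the spatial feature map.
        gate = torch.sigmoid(self.gate(out)).view(b, c, 1, 1)
        return x + x * gate


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    block = ChannelGraphPerceptionBlock()
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```

In this reading, the adjacency matrix plays the role of the channel-to-channel graph, so information from every channel can contribute to every other channel in a single aggregation step, which is the kind of inter-channel interchange the abstract attributes to the CGPB.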
