Abstract

Generative adversarial networks (GANs) have shown great potential for image quality improvement in low-dose CT (LDCT). In general, the shallow features of the generator carry low-level visual information such as edges and texture, while its deep features carry high-level semantic information such as tissue structure. To improve the network's ability to handle these different kinds of information separately, this paper proposes a new type of GAN with a dual-encoder-single-decoder structure. In the generator, firstly, a pyramid non-local attention module in the main encoder channel is designed to improve feature extraction by enhancing features with their self-similarity; secondly, a second encoder with a shallow-feature processing module and a deep-feature processing module is proposed to strengthen the encoding capability of the generator; finally, the denoised CT image is generated by fusing the main encoder's features, the shallow visual features, and the deep semantic features. This feature complementation in the generator improves the quality of the generated images. To strengthen the adversarial training ability of the discriminator, a hierarchical-split ResNet structure is proposed, which increases feature richness and reduces feature redundancy in the discriminator. Experimental results show that, compared with a traditional single-encoder-single-decoder GAN, the proposed method performs better in both image quality and medical diagnostic acceptability. Code is available at https://github.com/hanzefang/DESDGAN.
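The core of the non-local attention idea mentioned above can be sketched as follows. This is a minimal NumPy illustration of self-similarity enhancement, not the authors' implementation: it treats a feature map as a set of spatial positions, weights every position by its similarity to all others, and adds the result back as a residual. The paper's pyramid variant would additionally apply this at several downsampled scales and merge the outputs; that part is omitted here.

```python
import numpy as np

def nonlocal_attention(x):
    """Enhance features with self-similarity (non-local attention sketch).

    x: (N, C) array of N flattened spatial positions with C channels.
    Returns an array of the same shape, where each position has been
    augmented with a similarity-weighted sum of all positions.
    """
    # Pairwise dot-product similarity between all positions (N x N).
    sim = x @ x.T
    # Row-wise softmax turns similarities into attention weights.
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)
    # Residual connection: original features plus the attended features.
    return x + w @ x

# Toy feature map: 16 spatial positions, 8 channels.
feat = np.random.default_rng(0).normal(size=(16, 8))
out = nonlocal_attention(feat)
print(out.shape)  # (16, 8)
```

In a real network the similarity would be computed on learned projections of the features (query/key/value convolutions) rather than on the raw features, but the residual self-similarity weighting shown here is the mechanism the module relies on.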
