Abstract

To better highlight lesion-area information and edge details in fused images, this paper proposes an adaptive conditional Generative Adversarial Network (GAN) guided by a U-shaped context perceptual processor (UCP2-ACGAN) for the fusion of Positron Emission Tomography (PET) and Computed Tomography (CT) images. First, UCP2-ACGAN adopts a network architecture with a single generator and dual relative discriminators with weight sharing. Second, a coarse-to-fine multi-granularity adaptive feature extraction module (MGAFE) is proposed in the generator to extract the valid information contained in the source images. Third, a dual-branch feature interactive fusion module guided by the U-shaped context perceptual processor (UCP2-DBFIF) is proposed in the generator: the context perceptual feature maps produced by the U-shaped context perceptual processor (UCP2) are multiplied with the feature extraction results and serve as guiding conditions during fusion. In addition, to compensate for effective information lost during feature fusion, the feature information of different granularities from the feature extraction stage is aggregated. Finally, comparison experiments show that UCP2-ACGAN is effective on several evaluation metrics. For example, compared with the best of the six comparison methods, the Average Gradient (AG), Edge Intensity (EI), Spatial Frequency (SF), Visual Information Fidelity (VIF), and Mutual Information (MI) values of the fused CT mediastinal-window and PET images increased on average by 30.74%, 29.35%, 40.24%, 35.47%, and 26.18%, respectively.
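The core guidance mechanism described in the abstract (context perceptual maps multiplied with the feature extraction results as conditions for dual-branch fusion) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the UCP2 depth, channel widths, sigmoid-bounded gating, and the per-branch split of the context map are assumptions, and the module names `UCP2` and `UCP2GuidedFusion` are hypothetical.

```python
import torch
import torch.nn as nn


class UCP2(nn.Module):
    # A minimal U-shaped context perceptual processor: one downsampling
    # and one upsampling stage with a skip connection, ending in a
    # sigmoid so the output behaves as a [0, 1] context map.
    def __init__(self, ch: int):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.dec = nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1)
        self.out = nn.Sequential(nn.Conv2d(ch * 2, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.dec(self.enc(x))               # back to input resolution
        return self.out(torch.cat([x, y], 1))   # skip connection, then map


class UCP2GuidedFusion(nn.Module):
    # Dual-branch interactive fusion: the context map produced by UCP2
    # multiplies (gates) each branch's features, and the gated features
    # are merged by a 1x1 convolution.
    def __init__(self, ch: int):
        super().__init__()
        self.ucp2 = UCP2(2 * ch)
        self.merge = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, f_pet, f_ct):
        both = torch.cat([f_pet, f_ct], dim=1)   # (B, 2C, H, W)
        gate = self.ucp2(both)                   # context perceptual map
        g_pet, g_ct = gate.chunk(2, dim=1)       # per-branch guides
        guided = torch.cat([f_pet * g_pet, f_ct * g_ct], dim=1)
        return self.merge(guided)                # fused feature map


if __name__ == "__main__":
    f_pet = torch.randn(1, 32, 64, 64)  # toy PET branch features
    f_ct = torch.randn(1, 32, 64, 64)   # toy CT branch features
    fused = UCP2GuidedFusion(32)(f_pet, f_ct)
    print(fused.shape)                  # torch.Size([1, 32, 64, 64])
```

In this sketch the sigmoid output makes the context map act as a soft spatial-channel attention weight, which is one plausible reading of "multiplying context perceptual feature maps with the feature extraction results as guiding conditions"; the paper's actual MGAFE and multi-granularity aggregation are not reproduced here.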
